What does it actually mean for an algorithm to be fair? Researchers have proposed different notions of algorithmic fairness; we present here three ways of classifying them.
Group fairness, also referred to as statistical parity, requires that the protected groups be treated similarly to the advantaged group or to the population as a whole.
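As a minimal sketch, statistical parity can be checked by comparing positive-outcome rates across groups. The data and function names here are illustrative, not taken from any real system:

```python
# Minimal sketch: statistical parity (group fairness).
# A classifier satisfies statistical parity when the rate of positive
# outcomes is (approximately) the same for each protected group.

def positive_rate(outcomes, groups, group):
    """Fraction of positive outcomes among members of `group`."""
    members = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_difference(outcomes, groups, protected, advantaged):
    """Difference in positive-outcome rates; 0 means exact parity."""
    return (positive_rate(outcomes, groups, advantaged)
            - positive_rate(outcomes, groups, protected))

# Toy example: 1 = favorable decision (e.g., loan approved).
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(outcomes, groups,
                                    protected="b", advantaged="a"))  # 0.5
```

Here group "a" receives favorable outcomes at a rate of 0.75 versus 0.25 for group "b", a parity gap of 0.5; a gap near zero would indicate group fairness under this definition.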
Individual fairness requires that similar individuals be treated consistently.
Group fairness does not consider individual merits and may result in choosing less qualified members of a group, whereas individual fairness assumes a similarity metric over individuals for the classification task at hand, which is generally hard to define.
User fairness is violated when different users receive different content based on user attributes that should be protected, such as gender, race, ethnicity, or religion.
Content fairness refers to biases in the information received by any user, for example when some aspect is disproportionately represented in a query result or in a news feed.
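Disproportionate representation can be quantified by comparing an aspect's share in the returned results against its share in the underlying corpus. This is one simple measure among several; the data and names are illustrative:

```python
# Minimal sketch: measuring disproportionate representation of an
# aspect (e.g., a topic) in a result list versus the full corpus.

def share(items, aspect):
    """Fraction of items that carry the given aspect."""
    return sum(1 for it in items if aspect in it) / len(items)

def representation_ratio(results, corpus, aspect):
    """> 1 means the aspect is over-represented in the results."""
    return share(results, aspect) / share(corpus, aspect)

# Toy corpus of tagged items and a top-3 "news feed" drawn from it.
corpus  = [{"politics"}, {"sports"}, {"politics"},
           {"tech"}, {"sports"}, {"tech"}]
results = [{"politics"}, {"politics"}, {"tech"}]
print(representation_ratio(results, corpus, "politics"))  # 2.0
```

Politics makes up a third of the corpus but two thirds of the feed, a representation ratio of 2.0; a ratio near 1.0 would indicate proportional representation.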
Direct discrimination consists of rules or procedures that explicitly mention minority or disadvantaged groups based on sensitive discriminatory attributes related to group membership.
Indirect discrimination consists of rules or procedures that, while not explicitly mentioning discriminatory attributes, could intentionally or unintentionally generate discriminatory decisions. It arises from the correlation of non-discriminatory attributes with discriminatory ones.
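The correlation mechanism can be illustrated with a toy example, using zip code as the proxy attribute (the classic redlining scenario; all data here is invented for illustration):

```python
# Minimal sketch: indirect discrimination through a proxy attribute.
# The decision rule never looks at the sensitive attribute `group`,
# only at `zip_code` -- but because zip code is correlated with group
# membership, outcomes still differ sharply by group.

def positive_rate(records, predicate):
    """Approval rate among records matching `predicate`."""
    selected = [r for r in records if predicate(r)]
    return sum(r["approved"] for r in selected) / len(selected)

# Zip code 100 is predominantly group "b"; the "neutral" rule that
# produced `approved` simply rejects everyone in zip code 100.
records = [
    {"group": "a", "zip_code": 200, "approved": 1},
    {"group": "a", "zip_code": 200, "approved": 1},
    {"group": "a", "zip_code": 100, "approved": 0},
    {"group": "b", "zip_code": 100, "approved": 0},
    {"group": "b", "zip_code": 100, "approved": 0},
    {"group": "b", "zip_code": 200, "approved": 1},
]
print(positive_rate(records, lambda r: r["group"] == "a"))  # ~0.667
print(positive_rate(records, lambda r: r["group"] == "b"))  # ~0.333
```

Although the rule mentions only zip code, group "a" is approved twice as often as group "b": the discriminatory effect flows through the correlated attribute.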
These definitions of fairness can be found in the following literature:
Hajian, Sara et al. "Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining." doi:10.1145/2939672.2945386