The rising prevalence of automated decision-making is increasing the risk that models discriminate against disadvantaged groups. Besides presenting a few recent fairness-aware algorithms, we also provide possible benchmarks for comparing existing fair ranking algorithms. Moreover, the Fairness Measures Project contributes to the development of fairness-aware algorithms and systems by providing relevant datasets and software.
On this website you will find a series of datasets we have collected and/or prepared, drawn from various fields and applications (e.g., finance, law, and human resources). Moreover, we provide common fairness definitions used in machine learning, together with code implementing a series of measures introduced in the literature to analyse and quantify discrimination. We include common tests as well as more specialized methods.
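As a minimal sketch of what such a discrimination measure looks like, the snippet below computes the statistical parity (demographic parity) difference: the gap in positive-outcome rates between a protected group and everyone else. The function name and data layout here are illustrative assumptions, not the project's actual API.

```python
def statistical_parity_difference(predictions, groups, protected):
    """Return P(y_hat = 1 | protected group) - P(y_hat = 1 | other groups).

    A value of 0 indicates statistical parity; positive values mean the
    protected group receives the positive outcome more often.
    (Illustrative sketch, not the project's actual implementation.)
    """
    prot = [p for p, g in zip(predictions, groups) if g == protected]
    other = [p for p, g in zip(predictions, groups) if g != protected]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(prot) - rate(other)

# Toy example: binary decisions (1 = accept) for two demographic groups.
preds = [1, 0, 1, 1, 0, 1, 1, 1]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(preds, grps, "a"))  # 0.75 - 0.75 = 0.0
```

More specialized measures (e.g., ones operating on ranked outputs rather than binary decisions) follow the same pattern of comparing outcome statistics across groups.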
We would love to hear your comments and suggestions; please contact Meike Zehlike with any feedback you may have.