Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

@inproceedings{zafar2017fairness,
  title={Fairness Beyond Disparate Treatment \& Disparate Impact: Learning Classification without Disparate Mistreatment},
  author={Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez-Rodriguez and Krishna P. Gummadi},
  booktitle={Proceedings of the 26th International Conference on World Wide Web},
  year={2017}
}
Automated data-driven decision-making systems are increasingly used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often made by humans. To maximize the utility of these systems (or classifiers), their training involves minimizing the errors (or misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to…
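The disparate mistreatment the paper studies shows up as group-conditional error rates: a classifier that minimizes overall misclassification can still have very different false positive and false negative rates across groups. A minimal sketch of that check (the data arrays and group encoding here are illustrative, not from the paper):

```python
# Illustrative check for disparate mistreatment: compare the group-conditional
# false positive rate (FPR) and false negative rate (FNR) of a classifier's
# predictions. All data below is made up for the example.

def error_rates(y_true, y_pred, group, g):
    """FPR and FNR of y_pred vs. y_true, restricted to members of group g."""
    idx = [i for i in range(len(y_true)) if group[i] == g]
    neg = [i for i in idx if y_true[i] == 0]   # true negatives in group g
    pos = [i for i in idx if y_true[i] == 1]   # true positives in group g
    fpr = sum(y_pred[i] == 1 for i in neg) / len(neg)
    fnr = sum(y_pred[i] == 0 for i in pos) / len(pos)
    return fpr, fnr

# Hypothetical labels, predictions, and a binary sensitive attribute.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

fpr0, fnr0 = error_rates(y_true, y_pred, group, 0)
fpr1, fnr1 = error_rates(y_true, y_pred, group, 1)

# Disparate mistreatment: group 0 bears the false positives,
# group 1 bears the false negatives.
print(fpr0, fnr0)  # group 0: FPR 0.5, FNR 0.0
print(fpr1, fnr1)  # group 1: FPR 0.0, FNR 0.5
```

Zafar et al. turn this observation into constraints on the training problem, so that these rates are (approximately) equalized across groups rather than merely measured after the fact.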
Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
Loss-Aversively Fair Classification
Using Balancing Terms to Avoid Discrimination in Classification
  • Simon Enni, I. Assent
  • 2018 IEEE International Conference on Data Mining (ICDM), 2018
Identifying Sources of Unfairness in Bayesian Logistic Regression
Fairness Constraints: A Flexible Approach for Fair Classification
Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges
  • S. Tolan
  • ArXiv, 2019
Fairness-Aware Classifier with Prejudice Remover Regularizer
Certifying and Removing Disparate Impact
Equality of Opportunity in Supervised Learning
Classification with No Discrimination by Preferential Sampling
Justin M. Rao. Precinct or Prejudice? Understanding Racial Disparities in New York City's Stop-and-Frisk Policy. Annals of Applied Statistics, 2015.