# Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions

    @article{Mitchell2018PredictionBasedDA,
      title   = {Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions},
      author  = {Shira Mitchell and Eric Potash and Solon Barocas},
      journal = {arXiv: Applications},
      year    = {2018}
    }

A recent flurry of research activity has attempted to quantitatively define "fairness" for decisions based on statistical and machine learning (ML) predictions. The rapid growth of this new field has led to wildly inconsistent terminology and notation, presenting a serious challenge for cataloguing and comparing definitions. This paper attempts to bring much-needed order.
First, we explicate the various choices and assumptions made---often implicitly---to justify the use of prediction-based…

## 132 Citations

Fairness, Equality, and Power in Algorithmic Decision-Making

- Computer Science, FAccT
- 2021

This work argues that leading notions of fairness suffer from three key limitations: they legitimize inequalities justified by "merit"; they are narrowly bracketed, considering only differences of treatment within the algorithm; and they consider between-group rather than within-group differences.

Fair Decisions Despite Imperfect Predictions

- Computer Science, AISTATS
- 2020

The results suggest the need for a paradigm shift in the context of fair machine learning from the currently prevalent idea of simply building predictive models from a single static dataset via risk minimization, to a more interactive notion of "learning to decide".

Survey on Fairness Notions and Related Tensions

- Computer Science
- 2022

This survey covers the commonly used fairness notions, discusses the tensions among them and with privacy and accuracy, and illustrates the relationship between fairness measures and accuracy in real-world scenarios.

Tracking and Improving Information in the Service of Fairness

- Computer Science, EC
- 2019

This work studies a formal framework for measuring the information content of predictors and shows that increasing information content through refinements improves the downstream selection rules across a wide range of fairness measures.

What Is Fairness? Implications For FairML

- Computer Science, arXiv
- 2022

This paper derives that fairness problems can arise even without the presence of protected attributes, and shows that fairness and predictive performance are not irreconcilable counterparts; rather, the latter is necessary to achieve the former.

What-Is and How-To for Fairness in Machine Learning: A Survey, Reflection, and Perspective

- Computer Science, arXiv
- 2022

This survey demonstrates the importance of matching the mission and the means of different types of fairness inquiries across three targets: the data-generating process, the predicted outcome, and the induced impact.

Characterizing Fairness Over the Set of Good Models Under Selective Labels

- Computer Science, ICML
- 2021

A framework is developed for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models." The framework addresses the empirically relevant challenge of selectively labeled data, in the setting where the selection decision and outcome are unconfounded given the observed data features.

Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There?

- Computer Science, arXiv
- 2021

It is argued that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.

On the Applicability of ML Fairness Notions

- Computer Science, arXiv
- 2020

This paper is a survey of fairness notions that addresses the question of "which notion of fairness is most suited to a given real-world scenario and why?".

Towards Supporting and Documenting Algorithmic Fairness in the Data Science Workflow

- Computer Science
- 2019

A research agenda is outlined towards better visualizing difficult fairness-related tradeoffs between competing models, empirically quantifying societal norms about such tradeoffs, and documenting these decisions.

## References

Showing 1-10 of 229 references

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

- Computer Science, arXiv
- 2018

It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
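The idea of treating similarly risky people similarly amounts to applying a single decision threshold to everyone's risk estimate, rather than group-specific thresholds. A toy sketch (the function, scores, and cutoffs are illustrative, not from the paper):

```python
def threshold_policy(risk_scores, groups=None, thresholds=0.5):
    """Decide for each person. `thresholds` is either a single cutoff
    applied uniformly, or a dict mapping group -> cutoff."""
    if isinstance(thresholds, dict):
        return [s >= thresholds[g] for s, g in zip(risk_scores, groups)]
    return [s >= thresholds for s in risk_scores]

scores = [0.6, 0.6]
groups = ["A", "B"]
# Uniform threshold: equally risky people receive the same decision.
print(threshold_policy(scores, thresholds=0.5))                  # [True, True]
# Group-specific thresholds: the same risk yields different decisions.
print(threshold_policy(scores, groups, {"A": 0.5, "B": 0.7}))    # [True, False]
```

The contrast between the two calls is the crux of the paper's argument: fairness formalizations that force group-specific thresholds can treat equally risky individuals differently.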

Avoiding Discrimination through Causal Reasoning

- Computer Science, NIPS
- 2017

This work crisply articulates why and when observational criteria fail, thus formalizing what was previously a matter of opinion, puts forward natural causal non-discrimination criteria, and develops algorithms that satisfy them.

On formalizing fairness in prediction with machine learning

- Computer Science, arXiv
- 2017

This article surveys how fairness is formalized in the machine learning literature for the task of prediction and presents these formalizations with their corresponding notions of distributive justice from the social sciences literature.

Fairness in Decision-Making - The Causal Explanation Formula

- Computer Science, AAAI
- 2018

This paper derives the causal explanation formula, which lets an AI designer quantitatively evaluate fairness and decompose the total observed disparity in decisions into distinct discriminatory mechanisms, providing a quantitative approach to policy implementation and the design of fair AI systems.

Counterfactual Fairness

- Computer Science, NIPS
- 2017

This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.

Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments

- Psychology, FAT
- 2019

The results suggest the need for a new "algorithm-in-the-loop" framework that places machine learning decision-making aids into the sociotechnical context of improving human decisions rather than the technical context of generating the best prediction in the abstract.

Fairness Constraints: Mechanisms for Fair Classification

- Computer Science, AISTATS
- 2017

This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
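A decision-boundary (un)fairness measure of this kind is commonly operationalized, for a linear classifier, as the covariance between the sensitive attribute and the signed distance to the boundary. A minimal sketch under that assumption (variable names and data are hypothetical, not the paper's code):

```python
import numpy as np

def boundary_covariance(theta, X, z):
    """Empirical covariance between a sensitive attribute z and the
    signed distance X @ theta to a linear decision boundary. Values
    near zero suggest decisions are largely uncorrelated with group
    membership; constraining its magnitude yields fairer classifiers."""
    d = X @ theta                                 # signed distance (up to scaling)
    return np.mean((z - z.mean()) * (d - d.mean()))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # features
z = (rng.random(100) > 0.5).astype(float)         # binary sensitive attribute
theta = np.array([1.0, -2.0, 0.5])                # linear classifier weights
print(boundary_covariance(theta, X, z))
```

Because this covariance is a smooth function of `theta`, bounding its magnitude can be added as a constraint during training, which is what enables the fine-grained fairness/accuracy trade-off the summary describes.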

A comparative study of fairness-enhancing interventions in machine learning

- Computer Science, FAT
- 2019

It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

- Computer Science, ICML
- 2018

It is proved that the computational problem of auditing subgroup fairness, for both equality of false positive rates and statistical parity, is equivalent to weak agnostic learning, and is therefore computationally hard in the worst case, even for simple structured subgroup classes.
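The auditing task described above can be illustrated with a small sketch (this is an exhaustive check over given subgroups, not the paper's learning-based auditor; names and data are hypothetical): given binary predictions, labels, and subgroup masks, measure each subgroup's statistical-parity and false-positive-rate gaps relative to the population.

```python
import numpy as np

def audit_subgroups(y_pred, y_true, subgroups):
    """For each subgroup, return (statistical-parity gap, FPR gap)
    relative to the overall population.

    y_pred, y_true: 0/1 arrays; subgroups: dict name -> boolean mask.
    """
    base_rate = y_pred.mean()              # overall positive-prediction rate
    neg = y_true == 0
    base_fpr = y_pred[neg].mean()          # overall false-positive rate
    gaps = {}
    for name, mask in subgroups.items():
        sp_gap = abs(y_pred[mask].mean() - base_rate)
        g_neg = mask & neg
        fpr_gap = abs(y_pred[g_neg].mean() - base_fpr) if g_neg.any() else 0.0
        gaps[name] = (sp_gap, fpr_gap)
    return gaps

y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0])
groups = {"A": np.array([1, 1, 1, 1, 0, 0, 0, 0], bool),
          "B": np.array([0, 0, 0, 0, 1, 1, 1, 1], bool)}
print(audit_subgroups(y_pred, y_true, groups))
```

The hardness result concerns the case where subgroups are not enumerated in advance but defined by a structured class over protected features, so the auditor must *search* for a violated subgroup rather than loop over a fixed dictionary as above.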

Fairness and Abstraction in Sociotechnical Systems

- Computer Science, FAT
- 2019

This paper identifies five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science, and suggests ways in which technical designers can mitigate the traps by refocusing design in terms of process rather than solutions.