Corpus ID: 88524010

Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions

@article{Mitchell2018PredictionBasedDA,
  title={Prediction-Based Decisions and Fairness: A Catalogue of Choices, Assumptions, and Definitions},
  author={Shira Mitchell and Eric Potash and Solon Barocas},
  journal={arXiv: Applications},
  year={2018}
}
A recent flurry of research activity has attempted to quantitatively define "fairness" for decisions based on statistical and machine learning (ML) predictions. The rapid growth of this new field has led to wildly inconsistent terminology and notation, presenting a serious challenge for cataloguing and comparing definitions. This paper attempts to bring much-needed order. First, we explicate the various choices and assumptions made---often implicitly---to justify the use of prediction-based… 
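The definitions the paper catalogues are typically group-level statistics of a classifier's predictions. As a minimal illustrative sketch (not drawn from the paper itself; function names and toy data are my own), two widely used notions, demographic parity and equal opportunity, can be computed as follows:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between groups 0 and 1."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy predictions for eight individuals, four in each group.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))         # 0.5
print(equal_opportunity_diff(y_true, y_pred, group))  # 0.5
```

A value of 0 would mean the two groups are treated identically under that notion; the paper's point is precisely that these notions encode different, often mutually incompatible, choices and assumptions.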


Fairness, Equality, and Power in Algorithmic Decision-Making
This work argues that leading notions of fairness suffer from three key limitations: they legitimize inequalities justified by "merit;" they are narrowly bracketed, considering only differences of treatment within the algorithm; and they consider between-group and not within-group differences.
Fair Decisions Despite Imperfect Predictions
The results suggest the need for a paradigm shift in the context of fair machine learning from the currently prevalent idea of simply building predictive models from a single static dataset via risk minimization, to a more interactive notion of "learning to decide".
Survey on Fairness Notions and Related Tensions
The commonly used fairness notions are surveyed, the tensions among them and with privacy and accuracy are discussed, and the relationship between fairness measures and accuracy is illustrated on real-world scenarios.
Tracking and Improving Information in the Service of Fairness
This work studies a formal framework for measuring the information content of predictors and shows that increasing information content through refinements improves the downstream selection rules across a wide range of fairness measures.
What Is Fairness? Implications For FairML
Fairness problems can arise even without the presence of protected attributes, and fairness and predictive performance are shown not to be irreconcilable counterparts; rather, the latter is necessary to achieve the former.
What-Is and How-To for Fairness in Machine Learning: A Survey, Reflection, and Perspective
The importance of matching the mission and the means of different types of fairness inquiries is demonstrated, bearing respectively on the data-generating process, the predicted outcome, and the induced impact.
Characterizing Fairness Over the Set of Good Models Under Selective Labels
A framework is developed for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models"; it addresses the empirically relevant challenge of selectively labeled data in the setting where the selection decision and outcome are unconfounded given the observed data features.
Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There?
It is argued that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
On the Applicability of ML Fairness Notions
This paper surveys fairness notions, addressing the question of which notion of fairness is most suited to a given real-world scenario and why.
Towards Supporting and Documenting Algorithmic Fairness in the Data Science Workflow
A research agenda is outlined towards better visualizing difficult fairness-related tradeoffs between competing models, empirically quantifying societal norms about such tradeoffs, and documenting these decisions.
…
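Several of the citing works above distinguish the prediction task from the decision rule built on top of it. As a hypothetical sketch of that distinction (the cutoff and risk scores are illustrative, not taken from any of these papers), a single shared threshold on risk scores treats equally risky people equally, yet can still produce unequal selection rates across groups:

```python
def threshold_policy(scores, cutoff):
    """Decide 1 (select) for everyone at or above one shared risk cutoff."""
    return [int(score >= cutoff) for score in scores]

def selection_rate(decisions):
    """Fraction of a group receiving the positive decision."""
    return sum(decisions) / len(decisions)

# Hypothetical risk scores for two groups with different score distributions.
group_a = [0.2, 0.6, 0.7, 0.8]
group_b = [0.1, 0.3, 0.5, 0.7]

shared_cutoff = 0.5
print(selection_rate(threshold_policy(group_a, shared_cutoff)))  # 0.75
print(selection_rate(threshold_policy(group_b, shared_cutoff)))  # 0.5
```

The gap between the two selection rates is the kind of tension, between threshold-style decision rules and parity-style fairness notions, that the surveys above catalogue.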

References

Showing 1-10 of 229 references
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Avoiding Discrimination through Causal Reasoning
This work crisply articulates why and when observational criteria fail, formalizing what was previously a matter of opinion, and puts forward natural causal non-discrimination criteria together with algorithms that satisfy them.
On formalizing fairness in prediction with machine learning
This article surveys how fairness is formalized in the machine learning literature for the task of prediction and presents these formalizations with their corresponding notions of distributive justice from the social sciences literature.
Fairness in Decision-Making - The Causal Explanation Formula
The causal explanation formula is derived, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms, and provides a quantitative approach to policy implementation and the design of fair AI systems.
Counterfactual Fairness
This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.
Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments
The results suggest the need for a new "algorithm-in-the-loop" framework that places machine learning decision-making aids into the sociotechnical context of improving human decisions rather than the technical context of generating the best prediction in the abstract.
Fairness Constraints: Mechanisms for Fair Classification
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
A comparative study of fairness-enhancing interventions in machine learning
It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
Fairness and Abstraction in Sociotechnical Systems
This paper outlines this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science, and suggests ways in which technical designers can mitigate the traps by refocusing design on process rather than solutions.
…
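The "Fairness Constraints: Mechanisms for Fair Classification" reference above builds its mechanism on a covariance between the sensitive attribute and the signed distance to a linear decision boundary. A rough sketch of that quantity (my own formulation and variable names; the paper's exact constraint may differ) is:

```python
import numpy as np

def boundary_covariance(theta, X, z):
    """Empirical covariance between sensitive attribute z and the signed
    distance theta^T x of each point to a linear decision boundary."""
    d = X @ theta                            # signed distances to the boundary
    return float(np.mean((z - z.mean()) * d))

# Toy example: distances grow with z, so the covariance is positive,
# flagging a boundary that systematically separates the groups.
theta = np.array([1.0])
X = np.array([[1.0], [2.0], [3.0], [4.0]])
z = np.array([0.0, 0.0, 1.0, 1.0])
print(boundary_covariance(theta, X, z))  # 0.5
```

Driving this covariance toward zero during training is, roughly, how such mechanisms trade a small amount of accuracy for a tunable degree of boundary fairness.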