Certifying and Removing Disparate Impact
@article{Feldman2015CertifyingAR, title={Certifying and Removing Disparate Impact}, author={Michael Feldman and Sorelle A. Friedler and John Moeller and Carlos Eduardo Scheidegger and Suresh Venkatasubramanian}, journal={Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining}, year={2015} }
What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the…
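The legal standard the abstract refers to is commonly operationalized via the EEOC's "four-fifths" (80%) rule, which the paper adopts as its quantitative definition: a process exhibits disparate impact if the protected group's positive-outcome rate falls below 80% of the unprotected group's. A minimal sketch of that check (variable names here are illustrative, not from the paper):

```python
# Minimal sketch of the four-fifths (80%) rule; `outcomes` and `protected`
# are illustrative names.

def disparate_impact_ratio(outcomes, protected):
    """Ratio of positive-outcome rates: protected group over unprotected group.

    outcomes:  0/1 decisions (1 = positive outcome, e.g. hired)
    protected: 0/1 group labels (1 = protected group member)
    """
    rate_prot = sum(o for o, p in zip(outcomes, protected) if p == 1) / protected.count(1)
    rate_unprot = sum(o for o, p in zip(outcomes, protected) if p == 0) / protected.count(0)
    return rate_prot / rate_unprot

outcomes  = [1, 1, 1, 0, 1, 1, 1, 1]
protected = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact_ratio(outcomes, protected))  # 0.75 < 0.8 -> flagged
```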
1,182 Citations
An algorithm for removing sensitive information: Application to race-independent recidivism prediction
- Computer Science · The Annals of Applied Statistics
- 2019
This paper proposes a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained, and provides a probabilistic notion of algorithmic bias.
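As a loose illustration of this removal idea, and not the paper's full distribution-level procedure, one can regress each feature on the protected variable and keep only the residuals, which are linearly uncorrelated with it:

```python
# Illustrative linear special case (an assumption, not the paper's exact
# method): regress each feature on the protected variable, keep residuals.
import numpy as np

def residualize(X, z):
    """Remove the linear effect of protected attribute z from features X.

    X: (n, d) feature matrix; z: (n,) protected attribute.
    """
    Z = np.column_stack([np.ones_like(z), z])     # intercept + protected var
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)  # one regression per feature
    return X - Z @ beta                           # residuals: no linear trace of z

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=200).astype(float)
X = np.column_stack([2.0 * z + rng.normal(size=200), rng.normal(size=200)])
print(np.corrcoef(residualize(X, z)[:, 0], z)[0, 1])  # ~0 after residualizing
```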
Encoding Fair Representations
- Computer Science
- 2019
A method for preprocessing data that removes the sensitive information enabling discriminatory practices and modifies the attributes through attribute generalization, an anonymization technique used to obscure values by grouping them.
Avoiding Disparate Impact with Counterfactual Distributions
- Computer Science
- 2018
This paper describes how counterfactual distributions can be used to avoid discrimination between protected groups by identifying proxy variables to omit in training and building a preprocessor that can mitigate discrimination.
Assessing algorithmic fairness with unobserved protected class using data combination
- Computer Science · FAT*
- 2020
This paper studies a fundamental challenge to assessing disparate impacts, or performance disparities in general, in practice: protected class membership is often not observed in the data, particularly in lending and healthcare. It provides optimization-based algorithms for computing and visualizing the sets of simultaneously achievable pairwise disparities.
Evaluating Fairness Metrics in the Presence of Dataset Bias
- Computer Science · ArXiv
- 2018
A case study in which the issue of bias detection is framed as a causal inference problem with observational data, together with a proposed set of best-practice guidelines for selecting the fairness metric most likely to detect bias if it is present.
Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
- Computer Science · ICML
- 2019
This paper characterizes the perturbed distribution as a counterfactual distribution, describes its properties for common fairness criteria, and discusses how the estimated distribution can be used to build a data preprocessor that reduces disparate impact without training a new model.
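The following is only a crude stand-in for the proxy-identification step, not the paper's counterfactual-distribution estimator: neutralize one feature at a time and measure how much the model's group disparity shrinks; features whose removal reduces disparity most are candidate proxies to omit or repair.

```python
# Crude proxy screen (assumed illustration): replace one feature at a time
# by its mean and record the drop in the model's group disparity.
import numpy as np

def proxy_scores(model, X, s):
    """model: (n, d) array -> (n,) scores; X: features; s: 0/1 protected attr."""
    def disparity(Z):
        p = model(Z)
        return abs(p[s == 1].mean() - p[s == 0].mean())
    base = disparity(X)
    scores = {}
    for j in range(X.shape[1]):
        Z = X.copy()
        Z[:, j] = X[:, j].mean()          # neutralize feature j
        scores[j] = base - disparity(Z)   # disparity drop when j is removed
    return scores

rng = np.random.default_rng(4)
s = rng.integers(0, 2, size=500)
X = np.column_stack([s + rng.normal(0, 0.3, 500),  # strong proxy for s
                     rng.normal(size=500)])        # unrelated feature
model = lambda Z: Z @ np.array([1.0, 1.0])
print(proxy_scores(model, X, s))  # feature 0 explains nearly all disparity
```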
Fairness Under Feature Exemptions: Counterfactual and Observational Measures
- Computer Science · IEEE Transactions on Information Theory
- 2021
This work proposes a novel information-theoretic decomposition of the total bias into a non-exempt component, which quantifies the part of the bias that cannot be accounted for by the critical features, and an exempt component, which quantifies the remaining bias.
FlipTest: fairness testing via optimal transport
- Computer Science, Economics · FAT*
- 2020
Evaluating the approach on three case studies shows that it provides a computationally inexpensive way to identify subgroups that may be harmed by model discrimination, including in cases where the model satisfies group fairness criteria.
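In one dimension the optimal-transport map reduces to pairing individuals by rank, which makes the core FlipTest idea easy to sketch; the model and data below are illustrative toys:

```python
# Toy 1-D sketch: in one dimension, the optimal-transport map is rank pairing.
import numpy as np

def flip_pairs(x_a, x_b, model):
    """Match group A to group B by rank; flag pairs whose decisions differ.

    x_a, x_b: 1-D feature arrays of equal size; model: feature -> 0/1 decision.
    """
    pairs = zip(np.sort(x_a), np.sort(x_b))  # rank pairing = 1-D transport map
    return [(a, b) for a, b in pairs if model(a) != model(b)]

model = lambda x: int(x > 0.5)
x_a = np.array([0.2, 0.4, 0.6, 0.9])   # illustrative group A features
x_b = np.array([0.3, 0.55, 0.7, 0.95]) # illustrative group B features
print(flip_pairs(x_a, x_b, model))     # the (0.4, 0.55) pair's decision flips
```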
Avoiding Discrimination with Counterfactual Distributions
- Computer Science
- 2018
It is described how counterfactual distributions can be used to avoid discrimination between protected groups by identifying proxy variables to omit in training and building a preprocessor that can mitigate discrimination.
A statistical framework for fair predictive algorithms
- Computer Science · ArXiv
- 2016
A method to remove bias from predictive models by removing all information regarding protected variables from the data permitted for training is proposed; it is general enough to accommodate arbitrary data types, e.g. binary or continuous.
References
Toward a Coherent Test for Disparate Impact Discrimination
- Law
- 2009
Statistics are generally plaintiffs’ primary evidence in establishing a prima facie case of disparate impact discrimination. Thus, the use, or misuse, of statistics dictates case outcomes. Lacking a…
Integrating induction and deduction for finding evidence of discrimination
- Computer Science · Artificial Intelligence and Law
- 2010
An implementation of the overall reference model, called LP2DD, is presented; it integrates induction, through data mining classification rule extraction, with deduction, through a computational logic implementation of the analytical tools.
Big Data's Disparate Impact
- Law
- 2016
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.…
Fairness-Aware Classifier with Prejudice Remover Regularizer
- Computer Science · ECML/PKDD
- 2012
A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression to empirically show its effectiveness and efficiency.
Fairness-aware Learning through Regularization Approach
- Computer Science · 2011 IEEE 11th International Conference on Data Mining Workshops
- 2011
This paper discusses three causes of unfairness in machine learning and proposes a regularization approach that is applicable to any prediction algorithm with probabilistic discriminative models and applies it to logistic regression to empirically show its effectiveness and efficiency.
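A simplified rendering of the regularization idea in these two papers, with a demographic-parity penalty standing in for their mutual-information "prejudice index" (names and hyperparameters are illustrative):

```python
# Simplified sketch (assumed): logistic regression with a demographic-parity
# penalty in place of the mutual-information prejudice remover.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def fit_fair_logreg(X, y, s, eta=0.5, lam=2.0, steps=3000):
    """X: (n, d) features; y: (n,) 0/1 labels; s: (n,) 0/1 sensitive attribute."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)             # logistic-loss gradient
        gap = p[s == 1].mean() - p[s == 0].mean()      # demographic-parity gap
        dp = p * (1 - p)                               # sigmoid derivative
        grad_gap = (X[s == 1].T @ dp[s == 1] / (s == 1).sum()
                    - X[s == 0].T @ dp[s == 0] / (s == 0).sum())
        w -= eta * (grad_loss + lam * gap * grad_gap)  # penalized gradient step
    return w

rng = np.random.default_rng(2)
s = rng.integers(0, 2, size=300)
X = np.column_stack([np.ones(300), rng.normal(size=300) + s])  # feature leaks s
y = (X[:, 1] + rng.normal(scale=0.5, size=300) > 0.5).astype(float)
p = sigmoid(X @ fit_fair_logreg(X, y, s))
print(p[s == 1].mean() - p[s == 0].mean())  # gap shrinks as lam grows
```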
Three naive Bayes approaches for discrimination-free classification
- Computer Science · Data Mining and Knowledge Discovery
- 2010
Three approaches for making the naive Bayes classifier discrimination-free are presented: modifying the probability of the decision being positive, training one model for every sensitive attribute value and balancing them, and adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization.
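The first of these approaches, modifying the probability of a positive decision, can be sketched model-agnostically as shifting per-group decision thresholds on a fitted classifier's scores until positive-decision rates roughly match (a simplification of the Bayesian-model modification described above; names are illustrative):

```python
# Model-agnostic simplification (assumed) of the first approach: shift
# per-group thresholds until the groups' positive-decision rates match.
import numpy as np

def parity_thresholds(scores, groups, step=0.005, tol=0.02):
    """scores: (n,) class-probability estimates; groups: (n,) 0/1 labels."""
    t = {0: 0.5, 1: 0.5}
    for _ in range(1000):
        rates = {g: np.mean(scores[groups == g] >= t[g]) for g in (0, 1)}
        if abs(rates[0] - rates[1]) < tol:
            break
        lo, hi = (0, 1) if rates[0] < rates[1] else (1, 0)
        t[lo] -= step   # admit more from the currently disadvantaged group
        t[hi] += step   # admit fewer from the currently advantaged group
    return t

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, size=1000)
scores = np.clip(rng.normal(0.45 + 0.1 * groups, 0.15), 0, 1)  # biased scores
print(parity_thresholds(scores, groups))  # group-0 cutoff drops, group-1 rises
```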
Fairness through awareness
- Computer Science · ITCS '12
- 2012
A framework for fair classification comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand and an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly is presented.
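The fairness constraint here is a Lipschitz condition: the distance between the output distributions M(x) and M(y) must not exceed the task-specific similarity d(x, y). A toy audit of that condition, with an assumed metric and randomized classifier:

```python
# Toy audit (assumed setup) of the Lipschitz fairness condition:
# D(M(x), M(y)) <= d(x, y) for all pairs of individuals.

def total_variation(p, q):
    """TV distance between two discrete output distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def lipschitz_violations(individuals, M, d):
    """List pairs where similar individuals are treated too differently."""
    return [(x, y)
            for i, x in enumerate(individuals)
            for y in individuals[i + 1:]
            if total_variation(M(x), M(y)) > d(x, y)]

# Illustrative 1-D instance: M maps x to a distribution over {reject, accept}.
M = lambda x: (1 - x, x)     # assumes x already lies in [0, 1]
d = lambda x, y: abs(x - y)  # assumed task-specific similarity metric
print(lipschitz_violations([0.1, 0.2, 0.9], M, d))  # [] -> condition holds
```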
Classifying without discriminating
- Computer Science · 2009 2nd International Conference on Computer, Control and Communication
- 2009
This paper proposes a new classification scheme for learning unbiased models on biased training data, based on massaging the dataset: making the least intrusive modifications that lead to an unbiased dataset, from which a non-discriminating classifier is learned.
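A simplified sketch of this massaging step, assuming a ranker's scores are available: promote the highest-scored negatives in the deprived group and demote the lowest-scored positives in the favored group until positive-label rates match.

```python
# Simplified massaging sketch (assumed): flip labels nearest the decision
# boundary until the groups' positive-label rates are equal.
import numpy as np

def massage(y, s, scores):
    """y: 0/1 labels; s: 0/1 group (1 = deprived); scores: ranker scores."""
    y = y.copy()
    while y[s == 1].mean() < y[s == 0].mean():
        promote = np.where((s == 1) & (y == 0))[0]   # deprived, labeled 0
        demote = np.where((s == 0) & (y == 1))[0]    # favored, labeled 1
        if len(promote) == 0 or len(demote) == 0:
            break
        y[promote[np.argmax(scores[promote])]] = 1   # least intrusive promotion
        y[demote[np.argmin(scores[demote])]] = 0     # least intrusive demotion
    return y

rng = np.random.default_rng(3)
s = np.array([1] * 50 + [0] * 50)
scores = rng.uniform(size=100)
y = (scores > 0.85).astype(int)           # deprived group: stricter cutoff
y[50:] = (scores[50:] > 0.7).astype(int)  # favored group: laxer cutoff
y2 = massage(y, s, scores)
print(y2[s == 1].mean(), y2[s == 0].mean())  # label rates now (nearly) equal
```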
A study of top-k measures for discrimination discovery
- Computer Science · SAC '12
- 2012
To what extent the sets of top-k ranked rules with respect to any two pairs of measures agree is studied, for measures including risk difference, risk ratio, odds ratio, and a few others.
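All of these rule measures derive from the two groups' positive-outcome rates; a small sketch (conventions differ across papers on which group sits in the numerator):

```python
# Sketch of the rule measures, computed from per-group outcome counts.

def discrimination_measures(pos_prot, n_prot, pos_unprot, n_unprot):
    """Counts of positive outcomes and group sizes for each group."""
    p1, p2 = pos_prot / n_prot, pos_unprot / n_unprot  # protected / unprotected
    return {
        "risk_difference": p2 - p1,
        "risk_ratio": p2 / p1,
        "odds_ratio": (p2 / (1 - p2)) / (p1 / (1 - p1)),
    }

# Example: 20/100 protected vs 40/100 unprotected receive the benefit.
print(discrimination_measures(20, 100, 40, 100))
# risk_difference 0.2, risk_ratio 2.0, odds_ratio ~2.67
```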
On the Statistical Consistency of Algorithms for Binary Classification under Class Imbalance
- Computer Science · ICML
- 2013
This paper studies consistency with respect to one performance measure, namely the arithmetic mean of the true positive and true negative rates (AM), and establishes that some practically popular approaches, such as applying an empirically determined threshold to a suitable class probability estimate or performing an empirically balanced form of risk minimization, are in fact consistent with respect to the AM.
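The AM is trivial to compute, and it matters for the main paper because Feldman et al. tie disparate impact to the balanced error rate, which equals one minus the AM. A minimal sketch:

```python
# Minimal sketch of the AM measure: the mean of the true-positive and
# true-negative rates; the balanced error rate is 1 - AM.

def am_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / sum(1 for t in y_true if t == 1)  # sensitivity
    tnr = tn / sum(1 for t in y_true if t == 0)  # specificity
    return (tpr + tnr) / 2

# A majority-class predictor gets 90% accuracy here but only 0.5 AM.
y_true = [1] + [0] * 9
print(am_score(y_true, [0] * 10))  # 0.5
```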