Corpus ID: 7567061

Equality of Opportunity in Supervised Learning

@inproceedings{Hardt2016EqualityOO,
  title={Equality of Opportunity in Supervised Learning},
  author={Moritz Hardt and Eric Price and Nathan Srebro},
  booktitle={NIPS},
  year={2016}
}
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. [...] Key Method: Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. We encourage readers to consult the more complete manuscript on the arXiv.
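To make the criterion concrete, below is a minimal sketch (ours, not the paper's code) of the equalized-odds / equal-opportunity check: equal opportunity asks for equal true positive rates across groups, and equalized odds additionally asks for equal false positive rates. All variable names are illustrative.

```python
# Minimal sketch of an equalized-odds / equal-opportunity audit.
# Assumptions: binary labels and predictions, one discrete sensitive attribute.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Return {group_value: (TPR, FPR)} for binary labels/predictions."""
    rates = {}
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        tpr = float(yp[yt == 1].mean()) if (yt == 1).any() else float("nan")
        fpr = float(yp[yt == 0].mean()) if (yt == 0).any() else float("nan")
        rates[g] = (tpr, fpr)
    return rates

# Toy example: group 1's TPR (0.5) lags group 0's (1.0), violating equal
# opportunity; the FPR gap additionally violates equalized odds.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_rates(y_true, y_pred, group))
```

The paper's own method post-processes an existing predictor, adjusting group-conditional decision rules until these rates match across groups.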
Citations

Fairness in Supervised Learning: An Information Theoretic Approach
TLDR
This work presents an information theoretic framework for designing fair predictors from data that aim to prevent discrimination against a specified sensitive attribute in a supervised learning setting, and uses equalized odds as the criterion for discrimination.
Taking Advantage of Multitask Learning for Fair Classification
TLDR
This paper proposes to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information between sensitive groups, and proposes a three-pronged approach to tackle fairness, by increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing.
A Distributionally Robust Approach to Fair Classification
TLDR
A distributionally robust logistic regression model with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity is proposed, and it is demonstrated that the resulting classifier improves fairness at a marginal loss of predictive accuracy on both synthetic and real datasets.
A Framework for Benchmarking Discrimination-Aware Models in Machine Learning
TLDR
Experimental results show that the quality of techniques can be assessed through known metrics of discrimination, and the flexible framework can be extended to most real datasets and fairness measures to support a diversity of assessments.
On preserving non-discrimination when combining expert advice
We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. [...]
Fair Selective Classification Via Sufficiency
TLDR
It is proved that the sufficiency criterion can be used to mitigate disparities between groups by ensuring that selective classification increases performance on all groups, and a method is introduced for mitigating the disparity in precision across the entire coverage scale based on this criterion.
Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification
TLDR
It is shown that the fair optimal classifier is obtained by recalibrating the Bayes classifier by a group-dependent threshold and the overall procedure is shown to be statistically consistent both in terms of the classification error and fairness measure.
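A minimal sketch, under our own simplifying assumptions, of the group-dependent thresholding idea described above: given scores approximating P(Y = 1 | X), choose one threshold per group so that each group's positive-prediction rate matches a common target (a demographic-parity-style recalibration; the paper's exact estimator differs, and all names here are illustrative).

```python
# Sketch of group-dependent threshold recalibration.
import numpy as np

def group_thresholds(scores, group, target_rate):
    """One threshold per group so each group's positive rate ~= target_rate."""
    return {g: float(np.quantile(scores[group == g], 1.0 - target_rate))
            for g in np.unique(group)}

def predict_with_group_thresholds(scores, group, thresholds):
    return np.array([int(s > thresholds[g]) for s, g in zip(scores, group)])

rng = np.random.default_rng(0)
scores = rng.beta(2, 2, size=1000)     # stand-in for estimated P(Y=1|X)
group = rng.integers(0, 2, size=1000)  # synthetic sensitive attribute
th = group_thresholds(scores, group, target_rate=0.3)
y_hat = predict_with_group_thresholds(scores, group, th)
for g in (0, 1):
    print(g, float(y_hat[group == g].mean()))  # both near 0.30
```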
Eliminating Latent Discrimination: Train Then Mask
TLDR
This paper defines a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics, and establishes analytical and algorithmic results about the existence of a fair classifier in the context of supervised learning.
Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination
TLDR
A framework of fair semi-supervised learning in the pre-processing phase is presented, including pseudo-labeling to predict labels for unlabeled data, a re-sampling method to obtain multiple fair datasets, and lastly, ensemble learning to improve accuracy and decrease discrimination.
Wasserstein Fair Classification
TLDR
An approach to fair classification is proposed that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances and is robust to specific choices of the threshold used to obtain class predictions from model outputs.
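For intuition, a minimal sketch (ours) of the quantity this line of work drives toward zero: the Wasserstein-1 distance between the classifier's output distributions in two sensitive groups; zero distance means the outputs carry no information about the group. The training loop that penalizes it is omitted, and the score distributions are made up.

```python
# Sketch: measure how far apart two groups' score distributions are.
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D Wasserstein-1

rng = np.random.default_rng(0)
scores_a = rng.beta(2, 5, size=500)  # hypothetical model outputs, group A
scores_b = rng.beta(5, 2, size=500)  # hypothetical model outputs, group B
print(wasserstein_distance(scores_a, scores_b))  # large -> outputs depend on group
```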

References

Showing 1-10 of 28 references
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly). [...]
On the relation between accuracy and fairness in binary classification
TLDR
It is argued that comparison of non-discriminatory classifiers needs to account for different rates of positive predictions, otherwise conclusions about performance may be misleading, because the accuracy and discrimination of naive baselines on the same dataset vary with different rates of positive predictions.
Discrimination-aware data mining
TLDR
This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.
Learning Fair Classifiers
TLDR
This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates this mechanism on three well-known classifiers -- logistic regression, hinge loss and linear and nonlinear support vector machines.
Building Classifiers with Independency Constraints
TLDR
This paper studies the classification with independency constraints problem: find an accurate model for which the predictions are independent from a given binary attribute, and proposes two solutions and presents an empirical validation.
Inherent Trade-Offs in the Fair Determination of Risk Scores
TLDR
Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
The Variational Fair Autoencoder
TLDR
This model is based on a variational autoencoding architecture, with priors that encourage independence between sensitive and latent factors of variation, and is shown to be more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.
Certifying and Removing Disparate Impact
TLDR
This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
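A minimal sketch, with made-up data, of that predictability test: fit any classifier to predict the protected attribute from the remaining features; a balanced accuracy well above 0.5 means the protected class is recoverable, signaling potential disparate impact. The dataset, leakage strength, and model choice here are all illustrative.

```python
# Sketch of a disparate-impact test via predictability of the protected class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=2000)                  # protected attribute
X = rng.normal(size=(2000, 5)) + 0.8 * A[:, None]  # features that leak A

X_tr, X_te, A_tr, A_te = train_test_split(X, A, random_state=0)
clf = LogisticRegression().fit(X_tr, A_tr)
print(balanced_accuracy_score(A_te, clf.predict(X_te)))  # well above 0.5 here
```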
Fairness through awareness
TLDR
A framework for fair classification comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand and an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly is presented.
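The Lipschitz condition underlying that framework can be stated and checked directly. This is a toy sketch of ours, with scalar outputs and Euclidean distance standing in for the (hypothetical) task-specific metric the framework assumes.

```python
# Sketch: count violations of |f(x_i) - f(x_j)| <= d(x_i, x_j).
import numpy as np

def lipschitz_violations(X, f_out):
    """Pairs whose output gap exceeds their (Euclidean) input distance."""
    bad = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if abs(f_out[i] - f_out[j]) > np.linalg.norm(X[i] - X[j]):
                bad += 1
    return bad

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
f_out = rng.uniform(size=50)  # stand-in for classifier output scores
print(lipschitz_violations(X, f_out))
```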
Big Data's Disparate Impact
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. [...]