Corpus ID: 7567061

Equality of Opportunity in Supervised Learning

@article{Hardt2016EqualityOO,
  title={Equality of Opportunity in Supervised Learning},
  author={Moritz Hardt and Eric Price and Nathan Srebro},
  journal={ArXiv},
  year={2016},
  volume={abs/1610.02413}
}
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. We encourage readers to consult the more complete manuscript on the arXiv.
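
As a concrete illustration of the two criteria, equal opportunity (equal true positive rates across groups) and equalized odds (equal true and false positive rates), here is a minimal sketch that checks them from predictions. It assumes binary labels, binary predictions and a binary sensitive attribute, and is not the authors' reference implementation.

```python
# Minimal sketch, assuming binary labels, predictions and sensitive attribute;
# not the authors' reference implementation.
import numpy as np

def group_rates(y_true, y_pred, mask):
    """True positive rate and false positive rate restricted to `mask`."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean() if (yt == 1).any() else float("nan")
    fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")
    return tpr, fpr

def fairness_gaps(y_true, y_pred, sensitive):
    """Equal opportunity looks at the TPR gap; equalized odds at both gaps."""
    tpr_a, fpr_a = group_rates(y_true, y_pred, sensitive == 0)
    tpr_b, fpr_b = group_rates(y_true, y_pred, sensitive == 1)
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)

# Hypothetical usage with random data.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
sensitive = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(fairness_gaps(y_true, y_pred, sensitive))
```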

Citations

Fairness in Supervised Learning: An Information Theoretic Approach

This work presents an information theoretic framework for designing fair predictors from data, which aims to prevent discrimination against a specified sensitive attribute in a supervised learning setting and uses equalized odds as the criterion for discrimination.

Taking Advantage of Multitask Learning for Fair Classification

This paper proposes to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information between sensitive groups, and a three-pronged approach to tackle fairness: increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing.

A Distributionally Robust Approach to Fair Classification

A distributionally robust logistic regression model with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity is proposed and it is demonstrated that the resulting classifier improves fairness at a marginal loss of predictive accuracy on both synthetic and real datasets.
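
To make the idea of an unfairness penalty concrete, the sketch below trains an ordinary logistic regression with a penalty on the gap in mean predicted scores between groups. It is only an illustration of penalised fair classification on synthetic data, not the distributionally robust formulation of the cited paper; the penalty weight `lam`, the learning rate and the data are all hypothetical.

```python
# Minimal sketch: logistic regression with a penalty on the gap in mean
# predicted scores between groups. Illustration only; NOT the distributionally
# robust model of the cited paper. Data, lam and lr are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_penalised_logreg(X, y, s, lam=1.0, lr=0.1, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)             # logistic-loss gradient
        gap = p[s == 1].mean() - p[s == 0].mean()       # unfairness measure
        dp = p * (1 - p)                                # sigmoid derivative
        grad_gap = (X[s == 1] * dp[s == 1][:, None]).mean(0) \
                 - (X[s == 0] * dp[s == 0][:, None]).mean(0)
        w -= lr * (grad_loss + lam * np.sign(gap) * grad_gap)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
s = rng.integers(0, 2, 500)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=500) > 0).astype(float)
w = train_penalised_logreg(np.c_[X, np.ones(500)], y, s)
```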

A Framework for Benchmarking Discrimination-Aware Models in Machine Learning

Experimental results show that the quality of techniques can be assessed through known metrics of discrimination, and the flexible framework can be extended to most real datasets and fairness measures to support a diversity of assessments.

On preserving non-discrimination when combining expert advice

We study the interplay between sequential decision making and avoiding discrimination against protected groups when examples arrive online and do not follow distributional assumptions.

Fair Selective Classification Via Sufficiency

It is proved that the sufficiency criterion can be used to mitigate disparities between groups by ensuring that selective classification increases performance on all groups, and a method is introduced, based on this criterion, for mitigating the disparity in precision across the entire coverage scale. A generic sketch of the quantity involved follows below.
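
The sketch below tracks per-group precision across coverage levels by ranking examples by confidence and accepting the top fraction; it illustrates the quantity being discussed, not the paper's mitigation method, and the 0.5 decision rule is an assumption.

```python
# Minimal sketch: per-group precision of a selective classifier at several
# coverage levels. `scores` are hypothetical confidences for the positive class.
import numpy as np

def precision_by_group_and_coverage(scores, y_true, group,
                                    coverages=(0.2, 0.4, 0.6, 0.8, 1.0)):
    results = {}
    order = np.argsort(-scores)                        # most confident first
    for c in coverages:
        kept = order[: max(1, int(c * len(scores)))]   # accept top-c fraction
        for g in np.unique(group):
            idx = kept[group[kept] == g]
            pred_pos = scores[idx] >= 0.5              # hypothetical decision rule
            if pred_pos.any():
                results[(c, g)] = y_true[idx][pred_pos].mean()
    return results
```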

Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification

It is shown that the fair optimal classifier is obtained by recalibrating the Bayes classifier with a group-dependent threshold, and the overall procedure is shown to be statistically consistent in terms of both the classification error and the fairness measure.
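
A minimal sketch of the thresholding step described above: apply a separate threshold per sensitive group to estimated scores, chosen here so each group reaches a common target true positive rate. The target rate and the quantile-based rule are hypothetical choices, not the paper's estimator.

```python
# Minimal sketch: group-dependent thresholds on estimated scores, chosen so
# each group hits a common target true positive rate. Hypothetical rule.
import numpy as np

def group_thresholds(scores, y_true, sensitive, target_tpr=0.8):
    """Per group, the score quantile above which ~target_tpr of positives fall."""
    return {g: np.quantile(scores[(sensitive == g) & (y_true == 1)], 1.0 - target_tpr)
            for g in np.unique(sensitive)}

def predict_with_group_thresholds(scores, sensitive, thresholds):
    return np.array([int(scores[i] >= thresholds[g])
                     for i, g in enumerate(sensitive)])
```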

Eliminating Latent Discrimination: Train Then Mask

A new operational fairness criterion is defined, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics, which effectively controls for sensitive features and provides diagnostics for deviations from fair decision making.
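
One plausible reading of the train-then-mask idea, sketched minimally below: fit a model with the sensitive feature included, then hold that feature at a fixed value for every test point so it cannot drive individual decisions. The column index, mask value and choice of logistic regression are assumptions, not the authors' code.

```python
# Minimal sketch of a train-then-mask style procedure; the model, column index
# and mask value are assumptions for illustration.
from sklearn.linear_model import LogisticRegression

def train_then_mask(X_train, y_train, X_test, sensitive_col, mask_value=0.0):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    X_masked = X_test.copy()
    X_masked[:, sensitive_col] = mask_value   # same value for every test point
    return model.predict(X_masked)
```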

Fairness in Semi-Supervised Learning: Unlabeled Data Help to Reduce Discrimination

A framework of fair semi-supervised learning in the pre-processing phase is presented, including pseudo-labeling to predict labels for unlabeled data, a re-sampling method to obtain multiple fair datasets, and lastly ensemble learning to improve accuracy and decrease discrimination.

Wasserstein Fair Classification

An approach to fair classification is proposed that enforces independence between the classifier outputs and sensitive information by minimizing Wasserstein-1 distances and is robust to specific choices of the threshold used to obtain class predictions from model outputs.
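
The quantity being penalised can be computed directly in the one-dimensional case; the sketch below uses scipy's wasserstein_distance on hypothetical score samples from two groups (a value of zero means the output distributions coincide).

```python
# Minimal sketch: Wasserstein-1 distance between classifier outputs for two
# groups, via scipy's one-dimensional implementation. Scores are hypothetical.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
scores_group_a = rng.beta(2, 5, size=500)   # model outputs for group A
scores_group_b = rng.beta(3, 4, size=500)   # model outputs for group B
print(wasserstein_distance(scores_group_a, scores_group_b))
```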
...

References

Showing 1-10 of 26 references

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
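
A minimal sketch of the group-fairness notion in the parenthetical above (often called demographic or statistical parity): compare the positive-classification rate inside the protected group with the rate in the population as a whole. Inputs are hypothetical.

```python
# Minimal sketch: gap between the positive-classification rate in the
# protected group and in the population as a whole. Inputs are hypothetical.
import numpy as np

def demographic_parity_gap(y_pred, protected):
    return abs(y_pred[protected == 1].mean() - y_pred.mean())
```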

On the relation between accuracy and fairness in binary classification

It is argued that comparison of non-discriminatory classifiers needs to account for different rates of positive predictions, otherwise conclusions about performance may be misleading, because the accuracy and discrimination of naive baselines on the same dataset vary with different rates of positive predictions.
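
A small worked example of that point: a naive baseline that predicts positive with probability p, independently of the true label, has expected accuracy p·b + (1−p)·(1−b) for base rate b, so its accuracy shifts with the rate of positive predictions even though it is uninformative. The numbers below are hypothetical.

```python
# Minimal sketch: expected accuracy of a label-independent baseline that
# predicts positive with probability p, for a hypothetical base rate.
base_rate = 0.3
for p in (0.0, 0.3, 0.5, 1.0):
    accuracy = p * base_rate + (1 - p) * (1 - base_rate)
    print(f"positive-prediction rate {p:.1f} -> expected accuracy {accuracy:.2f}")
```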

Discrimination-aware data mining

This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.

Learning Fair Classifiers

This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates this mechanism on three well-known classifiers -- logistic regression, hinge loss and linear and nonlinear support vector machines.

Building Classifiers with Independency Constraints

This paper studies the classification with independency constraints problem: find an accurate model for which the predictions are independent of a given binary attribute; two solutions are proposed and an empirical validation is presented.

Inherent Trade-Offs in the Fair Determination of Risk Scores

Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
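
One such incompatibility can be shown with elementary arithmetic (a standard identity, not an excerpt from the paper): if positive predictive value and false negative rate are held equal across two groups with different base rates, the implied false positive rates must differ.

```python
# Minimal worked example: with equal PPV and equal FNR across two groups that
# differ in base rate p, the implied FPRs cannot also be equal.
def implied_fpr(p, ppv, fnr):
    # standard identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.2                  # held equal for both groups (hypothetical)
for p in (0.2, 0.5):                 # different base rates
    print(f"base rate {p:.1f} -> implied FPR {implied_fpr(p, ppv, fnr):.3f}")
```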

The Variational Fair Autoencoder

This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation with an additional penalty term based on the “Maximum Mean Discrepancy” (MMD) measure.
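
A generic estimator of the MMD penalty mentioned above, sketched with an RBF kernel on the latent codes of two groups; the bandwidth and the biased estimator are assumptions, not the paper's implementation.

```python
# Minimal sketch: RBF-kernel estimate of the MMD between latent codes of two
# groups. Generic estimator with a guessed bandwidth, not the paper's code.
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2(Z_a, Z_b, bandwidth=1.0):
    return (rbf_kernel(Z_a, Z_a, bandwidth).mean()
            + rbf_kernel(Z_b, Z_b, bandwidth).mean()
            - 2 * rbf_kernel(Z_a, Z_b, bandwidth).mean())
```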

Certifying and Removing Disparate Impact

This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
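
A minimal sketch of that test: train a model to predict the protected attribute from the remaining features and check whether it beats chance. The cross-validated logistic regression and the balanced-accuracy score are illustrative assumptions, not the paper's certification procedure.

```python
# Minimal sketch: how well can the protected attribute be predicted from the
# other features? Model and scoring choices are illustrative assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def protected_attribute_predictability(X_without_protected, protected):
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X_without_protected, protected,
                             cv=5, scoring="balanced_accuracy")
    return scores.mean()   # well above 0.5 signals potential disparate impact
```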

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
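
A minimal sketch of auditing that individual-fairness constraint: over sampled pairs, count how often the difference in outputs exceeds the task-specific similarity metric d. Both d and the predictions are hypothetical placeholders.

```python
# Minimal sketch: audit the "similar individuals, similar treatment" constraint
# on sampled pairs. The metric d and the predictions are hypothetical.
import numpy as np

def lipschitz_violation_rate(predictions, X, d, n_pairs=1000, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=(n_pairs, 2))
    gaps = np.abs(predictions[idx[:, 0]] - predictions[idx[:, 1]])
    dists = np.array([d(X[i], X[j]) for i, j in idx])
    return np.mean(gaps > dists)   # fraction of pairs violating |f(x)-f(y)| <= d(x, y)
```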

Big Data's Disparate Impact

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.