Corpus ID: 13028203

The cost of fairness in classification

@article{Menon2017TheCO,
  title={The cost of fairness in classification},
  author={Aditya Krishna Menon and Robert C. Williamson},
  journal={ArXiv},
  year={2017},
  volume={abs/1705.09055}
}
We study the problem of learning classifiers with a fairness constraint, with three main contributions towards the goal of quantifying the problem's inherent tradeoffs. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for cost-sensitive classification and fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we show how the tradeoff between accuracy and fairness is determined by the… 
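The thresholding result admits a compact rendering in code. Below is a minimal, illustrative Python sketch, not the paper's exact expression: given estimates of the target class-probability P(Y=1|x) and the sensitive class-probability P(S=1|x), the fairness-constrained prediction thresholds a linear combination of the two, with the multiplier `lam` trading accuracy against fairness. The function name, interface, and the specific combination are assumptions for illustration.

```python
import numpy as np

def fair_threshold_predict(eta_y, eta_s, c=0.5, c_bar=0.5, lam=0.1):
    """Illustrative instance-dependent thresholding (a sketch, not the
    paper's exact formula): predict 1 when the cost-sensitive score for
    the target outweighs lam times the cost-sensitive score for the
    sensitive feature.

    eta_y : array of P(Y = 1 | x), target class-probabilities
    eta_s : array of P(S = 1 | x), sensitive class-probabilities
    c     : cost parameter (ordinary threshold on eta_y when lam = 0)
    c_bar : cost parameter for the fairness term
    lam   : tradeoff multiplier; larger values favor fairness
    """
    score = (eta_y - c) - lam * (eta_s - c_bar)
    return (score > 0).astype(int)

# With lam = 0 this reduces to ordinary cost-sensitive thresholding of
# the class-probability function at c.
preds = fair_threshold_predict(np.array([0.7, 0.4]), np.array([0.9, 0.2]))
```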

Citations

Noise-tolerant fair classification

It is shown that if one measures fairness using the mean-difference score, and the sensitive features are subject to noise from the mutually contaminated learning model, then owing to a simple identity one only needs to change the desired fairness tolerance, and the requisite tolerance can be estimated by leveraging existing noise-rate estimators from the label-noise literature.
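The identity itself is simple enough to state in code. A minimal sketch, assuming the mutually contaminated model with noise rates alpha and beta on the sensitive feature; the function name and interface are illustrative, and in practice the noise rates would come from a label-noise estimator:

```python
def noise_adjusted_tolerance(tau, alpha, beta):
    """Sketch of the scaling identity: under the mutually contaminated
    model, the mean-difference score measured with noisy sensitive
    features is (1 - alpha - beta) times the clean score. Enforcing the
    scaled tolerance on noisy data therefore enforces the desired
    tolerance tau on the clean data.
    """
    assert 0.0 <= alpha and 0.0 <= beta and alpha + beta < 1.0
    return (1.0 - alpha - beta) * tau
```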

Learning with Complex Loss Functions and Constraints

This work develops a general approach for solving constrained classification problems, where the loss and constraints are defined in terms of a general function of the confusion matrix, and reduces the constrained learning problem to a sequence of cost-sensitive learning tasks.
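As a rough illustration of such a reduction (a hedged sketch, not the paper's algorithm): a Lagrangian outer loop turns the constraint into per-class costs, so each inner step is an ordinary cost-sensitive (here, class-weighted) learning task. The `violation` callable and the mapping from multiplier to class weights are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def constrained_fit(X, y, violation, T=20, step=0.5):
    """Illustrative Lagrangian reduction to cost-sensitive learning.

    violation(model, X, y) -> float, positive when the constraint
    (some function of the confusion matrix) is violated.
    """
    lam, models = 0.0, []
    for _ in range(T):
        # Inner step: a cost-sensitive task, with costs set by lam.
        model = LogisticRegression(class_weight={0: 1.0, 1: 1.0 + lam})
        model.fit(X, y)
        models.append(model)
        # Outer step: projected gradient ascent on the multiplier.
        lam = max(0.0, lam + step * violation(model, X, y))
    # Theory typically plays back a randomized mixture of the iterates.
    return models
```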

Towards Fairness-Aware Multi-Objective Optimization

This paper starts with a discussion of user preferences in multi-objective optimization, explores their relationship to fairness in machine learning, and then elaborates on the importance of fairness in traditional multi-objective optimization, data-driven optimization, and federated optimization.

Unleashing Linear Optimizers for Group-Fair Learning and Optimization

It is proved that, from a computational perspective, objectives that take into account performance over a small number of groups are not significantly harder to optimize than average performance.

Provably Fair Representations

It is shown that it is possible to prove that a representation function is fair according to common measures of both group and individual fairness, as well as useful with respect to a target task.

Fairness in Machine Learning: A Survey

An overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the machine learning literature is provided; approaches are organised into the widely accepted framework of pre-processing, in-processing, and post-processing methods, and subcategorised into a further 11 method areas.

From Aware to Fair: Tackling Bias in A.I

This paper explores the nascent topic of algorithmic fairness through the lens of classification tasks: it delves into the concept of “fairness” and its various proposed definitions, and then compares and contrasts proposed solutions.

When optimizing nonlinear objectives is no harder than linear objectives

Polynomial-time reductions, i.e., algorithms that optimize complex objectives using linear optimizers, are provided, proving that from a computational perspective fairly general families of complex objectives are not significantly harder to optimize than standard averages.
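One way to picture such a reduction (a sketch under assumptions, not the paper's construction): to minimize a smooth convex function psi of the vector of per-group losses, repeatedly linearize psi at the current loss vector and hand the resulting group weights to a linear optimizer. Here `linear_oracle` and `grad_psi` are hypothetical callables supplied by the user.

```python
import numpy as np

def reduce_to_linear(n_groups, linear_oracle, grad_psi, T=50):
    """Sketch: optimize a nonlinear objective psi over per-group losses
    using only a linear (weighted-average) optimizer.

    linear_oracle(weights) -> (model, per_group_loss_vector)
    grad_psi(losses)       -> gradient of psi at the loss vector
    """
    losses = np.ones(n_groups)  # pessimistic initial loss vector
    ensemble = []
    for _ in range(T):
        weights = grad_psi(losses)               # linearize the objective
        model, losses = linear_oracle(weights)   # one linear subproblem
        ensemble.append(model)
    # A randomized mixture of the collected models is the usual output.
    return ensemble
```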

Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing

To ensure adversarial generalization as well as cross-task transferability, this paper proposes coupling the operations of target-task classifier training, bias-task classifier training, and adversarial example generation, supplementing the target-task training dataset by balancing the distribution over bias variables in an online fashion.

Retiring $\Delta$DP: New Distribution-Level Metrics for Demographic Parity

Two new fairness metrics are proposed, Area Between Probability density function Curves (ABPC) and Area Between Cumulative density function Curves (ABCC), to precisely measure the violation of demographic parity at the distribution level.
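Both metrics are straightforward to compute from predicted scores. A minimal sketch of ABCC using empirical CDFs follows (ABPC is analogous, with density estimates such as histograms or a KDE in place of the CDFs); the interface is illustrative.

```python
import numpy as np

def abcc(scores_a, scores_b, grid_size=1000):
    """Area between the two groups' empirical CDFs of predicted scores,
    integrated over [0, 1]. It is zero exactly when the two score
    distributions coincide, i.e. demographic parity holds at the
    distribution level rather than only in the mean.
    """
    grid = np.linspace(0.0, 1.0, grid_size)
    cdf_a = np.searchsorted(np.sort(scores_a), grid, side="right") / len(scores_a)
    cdf_b = np.searchsorted(np.sort(scores_b), grid, side="right") / len(scores_b)
    return np.trapz(np.abs(cdf_a - cdf_b), grid)
```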

References

Showing 10 of 42 references.

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, together with an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
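The fairness constraint in that framework is a Lipschitz condition, which is easy to audit once the metric is given. A small sketch, assuming a randomized classifier represented as a matrix of per-individual outcome distributions and total variation as the distance between distributions:

```python
import numpy as np

def lipschitz_violations(P, D, tol=1e-9):
    """Check 'similar individuals are treated similarly': for every
    pair (i, j), the total-variation distance between their outcome
    distributions P[i], P[j] must not exceed the task-specific metric
    distance D[i, j].
    """
    violations = []
    for i in range(len(P)):
        for j in range(i + 1, len(P)):
            tv = 0.5 * np.abs(P[i] - P[j]).sum()  # total variation
            if tv > D[i, j] + tol:
                violations.append((i, j, tv - D[i, j]))
    return violations
```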

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
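The group-fairness criterion in that parenthetical reduces to a one-line statistic. A minimal sketch, with illustrative names:

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Gap between the positive-classification rate inside the
    protected group (s == 1) and the rate in the population as a
    whole; group fairness in the sense above means a gap of zero.
    """
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 1].mean() - y_pred.mean())
```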

Fairness-Aware Classifier with Prejudice Remover Regularizer

A regularization approach is proposed that is applicable to any prediction algorithm with a probabilistic discriminative model; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
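For logistic regression the recipe is: ordinary log-loss plus a fairness regularizer. The sketch below is hedged: the paper's regularizer approximates the mutual information between the sensitive attribute and the prediction, whereas the stand-in here penalizes the gap in mean predicted probability between groups, which is simpler but exercises the same mechanism of trading likelihood against prejudice.

```python
import numpy as np
from scipy.optimize import minimize

def fit_regularized_logreg(X, y, s, lam=1.0):
    """Logistic regression with a prejudice-remover-style penalty.
    lam = 0 recovers ordinary maximum likelihood; larger lam pushes
    the predicted probabilities of the two groups together.
    """
    def objective(w):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        eps = 1e-12
        nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        gap = p[s == 1].mean() - p[s == 0].mean()  # fairness penalty
        return nll + lam * gap ** 2

    return minimize(objective, np.zeros(X.shape[1]), method="L-BFGS-B").x
```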

Impartial Predictive Modeling: Ensuring Fairness in Arbitrary Models

This work provides a framework for impartiality that accounts for different perspectives on the data-generating process, yielding a set of impartial estimates applicable in a wide variety of situations, along with post-processing tools to correct estimates from arbitrary models.

Algorithmic Decision Making and the Cost of Fairness

This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the framework also applies to human decision makers carrying out structured decision rules.

Optimizing F-Measures by Cost-Sensitive Classification

A general reduction of F-measure maximization to cost-sensitive classification with unknown costs is presented, and an algorithm with provable guarantees is proposed that obtains an approximately optimal classifier for the F-measure by solving a series of cost-sensitive classification problems.
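The reduction can be sketched as a grid search over the unknown costs, each solved as an ordinary class-weighted problem. Logistic regression, the grid bounds, and the validation-set selection below are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def f1_via_cost_sensitive(X_tr, y_tr, X_val, y_val, n_costs=19):
    """Since the F-measure-optimal classifier is cost-sensitive-optimal
    for some unknown cost, sweep a grid of costs, solve each weighted
    problem, and keep the model with the best validation F1.
    """
    best_model, best_f1 = None, -1.0
    for c in np.linspace(0.05, 0.95, n_costs):
        model = LogisticRegression(class_weight={0: c, 1: 1.0 - c})
        model.fit(X_tr, y_tr)
        f1 = f1_score(y_val, model.predict(X_val))
        if f1 > best_f1:
            best_model, best_f1 = model, f1
    return best_model, best_f1
```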

Learning Fair Classifiers

This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates this mechanism on three well-known classifiers: logistic regression, hinge loss, and (linear and nonlinear) support vector machines.
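One concrete constraint that this line of work builds such mechanisms around (hedged here as an illustration, not necessarily the paper's exact formulation) is the covariance between the sensitive attribute and the signed distance to the decision boundary:

```python
import numpy as np

def boundary_covariance(theta, X, s):
    """Empirical covariance between the sensitive attribute s and the
    signed distance to the linear decision boundary theta. Bounding
    its magnitude during training (a convex constraint) limits how
    strongly the boundary can track the sensitive attribute.
    """
    d = X @ theta  # signed distance, up to scaling
    return float(np.mean((s - s.mean()) * d))
```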

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

A new notion of unfairness, disparate mistreatment, defined in terms of misclassification rates, is introduced; for decision-boundary-based classifiers it can be easily incorporated into the formulation as convex-concave constraints.
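Measuring disparate mistreatment is direct once predictions are in hand. A small sketch comparing false-positive and false-negative rates across two groups, with an illustrative interface:

```python
import numpy as np

def mistreatment_gaps(y_true, y_pred, s):
    """Return the absolute gaps in false-positive and false-negative
    rates between the groups s == 0 and s == 1; a classifier free of
    disparate mistreatment keeps both gaps near zero.
    """
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))

    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        fpr = np.mean(yp[yt == 0] == 1) if np.any(yt == 0) else 0.0
        fnr = np.mean(yp[yt == 1] == 0) if np.any(yt == 1) else 0.0
        return fpr, fnr

    fpr0, fnr0 = rates(s == 0)
    fpr1, fnr1 = rates(s == 1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)
```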

Three naive Bayes approaches for discrimination-free classification

Three approaches for making the naive Bayes classifier discrimination-free are presented: modifying the probability of the decision being positive; training one model for every sensitive attribute value and balancing them; and adding a latent variable to the Bayesian model that represents the unbiased label, optimizing the model parameters for likelihood using expectation maximization.
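The first of the three approaches can be sketched as a score adjustment (a simplification of the actual method, which modifies the model's probabilities directly; the step size and interface are assumptions for illustration):

```python
import numpy as np

def equalize_positive_rate(p_pos, s, step=0.01, max_iter=1000):
    """Shift positive-class scores up for the deprived group (s == 0)
    and down for the favored group (s == 1), in small steps, until the
    groups' positive-decision rates match.
    """
    scores, s = np.asarray(p_pos, dtype=float).copy(), np.asarray(s)
    for _ in range(max_iter):
        if np.mean(scores[s == 0] > 0.5) >= np.mean(scores[s == 1] > 0.5):
            break  # discrimination removed
        scores[s == 0] += step
        scores[s == 1] -= step
    return (scores > 0.5).astype(int)
```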

Information, Divergence and Risk for Binary Experiments

The new viewpoint also illuminates existing algorithms: it provides a new derivation of Support Vector Machines in terms of divergences and relates maximum mean discrepancy to Fisher linear discriminants.