Corpus ID: 13028203

The cost of fairness in classification

Aditya Krishna Menon and Robert C. Williamson
We study the problem of learning classifiers with a fairness constraint, with three main contributions towards the goal of quantifying the problem's inherent tradeoffs. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for cost-sensitive classification and fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we show how the tradeoff between accuracy and fairness is determined by the… 
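The paper's second contribution — that the optimal classifier is an instance-dependent thresholding of the class-probability function — can be sketched as follows. This is a minimal illustration, not the paper's exact theorem: the function names and the form of the fairness correction via a Lagrange-style multiplier `lam` are assumptions.

```python
def fair_threshold_classifier(eta, eta_bar, c, c_bar, lam):
    """Instance-dependent thresholding sketch: trade off the target
    class-probability eta(x) = P(Y=1|x) against the sensitive-attribute
    probability eta_bar(x) = P(S=1|x). With lam = 0 this reduces to the
    plain cost-sensitive rule: predict positive iff eta(x) >= c."""
    return 1 if (eta - c) - lam * (eta_bar - c_bar) >= 0 else 0

# With no fairness pressure (lam = 0), only the cost threshold c matters.
print(fair_threshold_classifier(0.7, 0.9, 0.5, 0.5, 0.0))  # 1
# A fairness penalty can flip the decision for an instance whose
# sensitive-attribute probability is high.
print(fair_threshold_classifier(0.7, 0.9, 0.5, 0.5, 1.0))  # 0
```

Varying `lam` traces out the accuracy–fairness tradeoff the abstract refers to.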


Noise-tolerant fair classification

If fairness is measured with the mean-difference score and the sensitive features are subject to noise under the mutually contaminated learning model, then, owing to a simple identity, it suffices to change the desired fairness tolerance; the requisite tolerance can be estimated by leveraging existing noise-rate estimators from the label-noise literature.
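The identity described above can be sketched in one line, under the assumption that the mean-difference score measured on noisy data shrinks by the factor 1 − ρ₊ − ρ₋, where ρ₊ and ρ₋ are the contamination rates on the sensitive feature (names are illustrative):

```python
def adjusted_tolerance(tau, rho_plus, rho_minus):
    """If the mean-difference score on noisy sensitive features equals
    (1 - rho_plus - rho_minus) times its clean-data value, then enforcing
    a noisy-data tolerance of tau * (1 - rho_plus - rho_minus) achieves
    a clean-data tolerance of tau."""
    return tau * (1.0 - rho_plus - rho_minus)

# With 10% and 20% contamination, a desired clean-data tolerance of 0.1
# tightens to roughly 0.07 on the noisy data.
print(adjusted_tolerance(0.1, 0.1, 0.2))
```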

Learning with Complex Loss Functions and Constraints

This work develops a general approach for solving constrained classification problems, where the loss and constraints are defined in terms of a general function of the confusion matrix, and reduces the constrained learning problem to a sequence of cost-sensitive learning tasks.

Towards Fairness-Aware Multi-Objective Optimization

This paper starts with a discussion of user preferences in multi-objective optimization, explores their relationship to fairness in machine learning, and elaborates on the importance of fairness in traditional multi-objective optimization, data-driven optimization, and federated optimization.

Unleashing Linear Optimizers for Group-Fair Learning and Optimization

It is proved that, from a computational perspective, optimizing arbitrary objectives that take into account performance over a small number of groups is not significantly harder than optimizing average performance.

Provably Fair Representations

It is shown that it is possible to prove that a representation function is fair according to common measures of both group and individual fairness, as well as useful with respect to a target task.

Fairness in Machine Learning: A Survey

An overview of the different schools of thought and approaches to mitigating (social) bias and increasing fairness in the machine learning literature is provided; approaches are organised into the widely accepted framework of pre-processing, in-processing, and post-processing methods, and subcategorised into a further 11 method areas.

From Aware to Fair: Tackling Bias in A.I.

This paper explores the nascent topic of algorithmic fairness through the lens of classification tasks, surveys the concept of “fairness” and the different proposed definitions, and then compares and contrasts proposed solutions.

Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing

To ensure adversarial generalization as well as cross-task transferability, this paper proposes to couple target-task classifier training, bias-task classifier training, and adversarial example generation, supplementing the target-task training dataset by balancing the distribution over bias variables in an online fashion.

Hierarchical VampPrior Variational Fair Auto-Encoder

This paper proposes to use deep generative modeling and adapt a hierarchical Variational Auto-Encoder to learn fair representations that aim at removing nuisance (sensitive) information from the decision process.

Awareness in practice: tensions in access to sensitive attribute data for antidiscrimination

Today's legal requirements and corporate practices, while highly inconsistent across domains, offer lessons for how to approach the collection and inference of sensitive data in appropriate circumstances.

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).

Fairness-Aware Classifier with Prejudice Remover Regularizer

A regularization approach is proposed that is applicable to any prediction algorithm with a probabilistic discriminative model; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.

Equality of Opportunity in Supervised Learning

This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
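Post-hoc adjustment of a learned predictor in this spirit can be sketched as group-dependent thresholding of its scores. The quantile-based rule below is a hypothetical illustration, not the paper's algorithm; all names are assumptions.

```python
import numpy as np

def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """Sketch of post-hoc threshold adjustment: pick one threshold per
    group so that each group's true-positive rate matches target_tpr."""
    thresholds = {}
    for g in np.unique(groups):
        pos_scores = scores[(groups == g) & (labels == 1)]
        # The (1 - target_tpr) quantile of the positives' scores leaves
        # roughly target_tpr of the positives above the threshold.
        thresholds[g] = np.quantile(pos_scores, 1.0 - target_tpr)
    return thresholds
```

After thresholding, both groups' true positives clear their own cutoff at (approximately) the same rate, which is the sense in which the adjusted predictor equalizes opportunity.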

Impartial Predictive Modeling: Ensuring Fairness in Arbitrary Models

This work provides a framework for impartiality by accounting for different perspectives on the data generating process and yields a set of impartial estimates that are applicable in a wide variety of situations and post-processing tools to correct estimates from arbitrary models.

Algorithmic Decision Making and the Cost of Fairness

This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the results apply both to algorithms and to human decision makers carrying out structured decision rules.

Optimizing F-Measures by Cost-Sensitive Classification

A general reduction of F-measure maximization to cost-sensitive classification with unknown costs is presented, and an algorithm with provable guarantees is proposed that obtains an approximately optimal classifier for the F-measure by solving a series of cost-sensitive classification problems.
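The reduction can be illustrated with a toy sweep: each candidate cost induces a threshold on the class-probability estimates, and the threshold attaining the best F1 is kept. This grid search is a sketch standing in for the paper's algorithm and guarantees; names are illustrative.

```python
import numpy as np

def f1_via_cost_sensitive(probs, labels, grid=21):
    """Sweep candidate thresholds on P(Y=1|x) (each one the solution of a
    cost-sensitive problem for some cost ratio) and keep the threshold
    whose induced classifier attains the best F1."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.0, 1.0, grid):
        preds = (probs >= t).astype(int)
        tp = np.sum((preds == 1) & (labels == 1))
        fp = np.sum((preds == 1) & (labels == 0))
        fn = np.sum((preds == 0) & (labels == 1))
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

On perfectly separable probability estimates the sweep recovers a threshold with F1 equal to 1.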

The Foundations of Cost-Sensitive Learning

It is argued that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision-tree learning methods; the recommended way of applying one of these methods is to learn a classifier from the training set and then to compute optimal decisions explicitly using the probability estimates given by the classifier.
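The recommendation above — learn a probability estimator once, then derive decisions from the costs — amounts to a single threshold. A sketch with hypothetical names, using the standard threshold formula for a false-positive/false-negative cost matrix:

```python
def optimal_decision(p_positive, cost_fp, cost_fn):
    """Given a probability estimate p_positive = P(Y=1|x) from any
    classifier, predict positive exactly when doing so has lower
    expected cost, i.e. when p_positive >= cost_fp / (cost_fp + cost_fn)."""
    return int(p_positive >= cost_fp / (cost_fp + cost_fn))

# When false negatives are four times as costly as false positives,
# the decision threshold drops from 0.5 to 0.2.
print(optimal_decision(0.3, 1.0, 4.0))  # 1
print(optimal_decision(0.1, 1.0, 4.0))  # 0
```

The same classifier thus serves any cost matrix: only the threshold changes, not the learned probabilities.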

Learning Fair Classifiers

This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates it on three well-known classifiers: logistic regression, hinge loss, and (linear and nonlinear) support vector machines.

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

A new notion of unfairness, disparate mistreatment, is introduced, defined in terms of misclassification rates; intuitive measures of it are proposed for decision-boundary-based classifiers and can easily be incorporated into their formulation as convex-concave constraints.