The cost of fairness in classification
@article{Menon2017TheCO,
  title   = {The cost of fairness in classification},
  author  = {Aditya Krishna Menon and Robert C. Williamson},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1705.09055}
}
We study the problem of learning classifiers with a fairness constraint, with three main contributions towards the goal of quantifying the problem's inherent tradeoffs. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for cost-sensitive classification and fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we show how the tradeoff between accuracy and fairness is determined by the…
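The paper's second result, that the optimal classifier under a cost-sensitive risk with a fairness measure is an instance-dependent thresholding of the class-probability function, can be illustrated with a minimal sketch. The sketch assumes pre-estimated class-probabilities eta(x) for P(Y=1|x) and eta_bar(x) for P(S=1|x) of the sensitive feature; the particular combination of thresholds and the trade-off weight lam are illustrative placeholders rather than the exact constants derived in the paper.

```python
import numpy as np

def fair_cost_sensitive_predict(eta, eta_bar, cost=0.5, bar_cost=0.5, lam=0.2):
    """Instance-dependent thresholding sketch (illustrative constants).

    eta      : estimates of P(Y = 1 | x), the target class-probability
    eta_bar  : estimates of P(S = 1 | x), the sensitive-feature probability
    cost     : cost-sensitive threshold for the target risk
    bar_cost : threshold for the fairness (sensitive) risk
    lam      : hypothetical accuracy-fairness trade-off weight
    """
    eta, eta_bar = np.asarray(eta, float), np.asarray(eta_bar, float)
    # Threshold the target probability, shifted by a fairness penalty that
    # depends on how revealing the instance is of group membership.
    score = (eta - cost) - lam * (eta_bar - bar_cost)
    return (score > 0).astype(int)

# toy usage with made-up probability estimates
print(fair_cost_sensitive_predict([0.9, 0.6, 0.4, 0.2], [0.8, 0.3, 0.7, 0.1]))
```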
15 Citations
Noise-tolerant fair classification
- Computer Science, NeurIPS
- 2019
If one measures fairness using the mean-difference score, and sensitive features are subject to noise from the mutually contaminated learning model, then owing to a simple identity the authors only need to change the desired fairness-tolerance, and the requisite tolerance can be estimated by leveraging existing noise-rate estimators from the label noise literature.
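A minimal sketch of the quantities this result works with, assuming binary group labels and real-valued classifier scores: the mean-difference score and a rescaled tolerance. The flip rates alpha and beta are hypothetical inputs that, per the paper, could be obtained from existing noise-rate estimators.

```python
import numpy as np

def mean_difference(scores, sensitive):
    """Mean-difference fairness score: E[f(X) | S = 1] - E[f(X) | S = 0]."""
    scores = np.asarray(scores, float)
    sensitive = np.asarray(sensitive, int)
    return scores[sensitive == 1].mean() - scores[sensitive == 0].mean()

def noise_adjusted_tolerance(tau, alpha, beta):
    """Shrink the fairness tolerance to account for noisy group labels.

    Under a mutually contaminated model with (assumed known) flip rates
    alpha and beta, the mean-difference measured on noisy groups scales
    the clean one by (1 - alpha - beta), so enforcing the constraint at
    the rescaled tolerance targets the clean constraint.
    """
    return (1.0 - alpha - beta) * tau

# toy usage: check a mean-difference constraint under assumed noise rates
md = mean_difference([0.9, 0.7, 0.4, 0.3], [1, 1, 0, 0])
print(abs(md) <= noise_adjusted_tolerance(0.05, alpha=0.1, beta=0.1))
```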
Learning with Complex Loss Functions and Constraints
- Computer Science, AISTATS
- 2018
This work develops a general approach for solving constrained classification problems, where the loss and constraints are defined in terms of a general function of the confusion matrix, and reduces the constrained learning problem to a sequence of cost-sensitive learning tasks.
Towards Fairness-Aware Multi-Objective Optimization
- Computer Science, ArXiv
- 2022
This paper starts with a discussion of user preferences in multi-objective optimization, explores their relationship to fairness in machine learning and multi-objective optimization, and further elaborates the importance of fairness in traditional multi-objective optimization, data-driven optimization, and federated optimization.
Unleashing Linear Optimizers for Group-Fair Learning and Optimization
- Computer Science, COLT
- 2018
It is proved that, from a computational perspective, arbitrary objectives that take into account performance over a small number of groups are not significantly harder to optimize than average performance.
Provably Fair Representations
- Computer Science, ArXiv
- 2017
It is shown that it is possible to prove that a representation function is fair according to common measures of both group and individual fairness, as well as useful with respect to a target task.
Fairness in Machine Learning: A Survey
- Computer Science, ArXiv
- 2020
An overview of the different schools of thought and approaches to mitigating (social) biases and increasing fairness in the machine learning literature is provided; it organises approaches into the widely accepted framework of pre-processing, in-processing, and post-processing methods, subcategorised into a further 11 method areas.
From Aware to Fair: Tackling Bias in A.I
- Computer Science
- 2021
This paper explores the nascent topic of algorithmic fairness through the lens of classification tasks, forays into the concept of “fairness” and its different proposed definitions, and then compares and contrasts proposed solutions.
When optimizing nonlinear objectives is no harder than linear objectives
- Computer Science, ArXiv
- 2018
It is proved that, from a computational perspective, fairly general families of complex objectives are not significantly harder to optimize than standard averages, by providing polynomial-time reductions, i.e., algorithms that optimize complex objectives using linear optimizers.
Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing
- Computer Science, ACM Multimedia
- 2020
To ensure adversarial generalization as well as cross-task transferability, this paper proposes to couple the training of the target task classifier, the training of the bias task classifier, and adversarial example generation, supplementing the target task training dataset by balancing the distribution over bias variables in an online fashion.
Retiring $\Delta$DP: New Distribution-Level Metrics for Demographic Parity
- Computer Science
- 2023
Two new fairness metrics are proposed, Area Between Probability density function Curves (ABPC) and Area Between Cumulative density function Curves (ABCC), to precisely measure the violation of demographic parity at the distribution level.
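Both metrics compare the distribution of predicted scores across two groups; a short sketch of plausible implementations follows, assuming scores in [0, 1], kernel density estimates for the PDFs, and empirical CDFs. The grid and bandwidth choices are assumptions rather than the paper's specification.

```python
import numpy as np
from scipy.stats import gaussian_kde

def abpc(scores_a, scores_b, grid=np.linspace(0.0, 1.0, 1000)):
    """Area Between Probability density function Curves (sketch):
    integrate |pdf_a - pdf_b| over a shared grid, densities via Gaussian KDE."""
    pdf_a = gaussian_kde(scores_a)(grid)
    pdf_b = gaussian_kde(scores_b)(grid)
    return np.trapz(np.abs(pdf_a - pdf_b), grid)

def abcc(scores_a, scores_b, grid=np.linspace(0.0, 1.0, 1000)):
    """Area Between Cumulative density function Curves (sketch):
    integrate |cdf_a - cdf_b| using empirical CDFs on the same grid."""
    ecdf = lambda s: np.mean(np.asarray(s, float)[:, None] <= grid, axis=0)
    return np.trapz(np.abs(ecdf(scores_a) - ecdf(scores_b)), grid)

# toy usage: predicted scores for two demographic groups
print(abpc(np.random.beta(2, 5, 500), np.random.beta(5, 2, 500)))
```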
References
Fairness through awareness
- Computer Science, ITCS '12
- 2012
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
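The fairness constraint here is a Lipschitz condition: the output distributions assigned to two individuals should differ by no more than their task-specific distance. A small sketch of checking that condition, assuming the classifier outputs a distribution over labels for each individual and that the task metric is given as a matrix:

```python
import numpy as np

def lipschitz_violations(M, D):
    """Find pairs violating the individual-fairness (Lipschitz) condition.

    M : (n, k) array; M[i] is the output distribution over k labels for x_i
    D : (n, n) array; D[i, j] is the assumed task-specific distance d(x_i, x_j)

    A pair (i, j) violates the condition when the total-variation distance
    between M[i] and M[j] exceeds D[i, j].
    """
    n = len(M)
    violations = []
    for i in range(n):
        for j in range(i + 1, n):
            tv = 0.5 * np.abs(M[i] - M[j]).sum()
            if tv > D[i][j]:
                violations.append((i, j))
    return violations
```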
Learning Fair Representations
- Computer Science, ICML
- 2013
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…
Fairness-Aware Classifier with Prejudice Remover Regularizer
- Computer Science, ECML/PKDD
- 2012
A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
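The regularizer in this paper is a prejudice index, a mutual-information term between the sensitive attribute and the prediction, added to the logistic-regression objective. The sketch below keeps that "log-loss plus lambda times fairness term" structure but, as a deliberate simplification, substitutes a squared mean-difference penalty on the predicted probabilities for the mutual-information term.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, s, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with an additive fairness regularizer (sketch).

    X : (n, d) features, y : (n,) binary labels, s : (n,) binary group labels.
    The penalty is the squared mean-difference of predicted probabilities
    between the two groups, standing in for the paper's mutual-information
    prejudice index.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    m1, m0 = np.asarray(s) == 1, np.asarray(s) == 0
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)           # gradient of mean log-loss
        diff = p[m1].mean() - p[m0].mean()           # mean-difference of scores
        dp = p * (1.0 - p)                           # d sigmoid / d (x @ w)
        g1 = (X[m1] * dp[m1][:, None]).mean(axis=0)
        g0 = (X[m0] * dp[m0][:, None]).mean(axis=0)
        grad_fair = 2.0 * diff * (g1 - g0)           # gradient of diff ** 2
        w -= lr * (grad_loss + lam * grad_fair)
    return w
```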
Impartial Predictive Modeling: Ensuring Fairness in Arbitrary Models
- Computer Science
- 2016
This work provides a framework for impartiality by accounting for different perspectives on the data generating process; it yields a set of impartial estimates that are applicable in a wide variety of situations, as well as post-processing tools to correct estimates from arbitrary models.
Algorithmic Decision Making and the Cost of Fairness
- Computer Science, KDD
- 2017
This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the framework applies both to algorithms and to human decision makers carrying out structured decision rules.
Optimizing F-Measures by Cost-Sensitive Classification
- Computer Science, NIPS
- 2014
A general reduction of F-measure maximization to cost-sensitive classification with unknown costs is presented, and an algorithm with provable guarantees is proposed to obtain an approximately optimal classifier for the F-measure by solving a series of cost-sensitive classification problems.
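A minimal sketch of the reduction's flavour, assuming pre-estimated class-probabilities eta(x): each candidate cost c induces the cost-sensitive classifier 1[eta(x) > c], and the sweep keeps the cost whose classifier scores best on F1. The grid sweep stands in for the paper's algorithm and its guarantees.

```python
import numpy as np

def f1(y_true, y_pred):
    """Plain F1 from counts (avoids external dependencies)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def f_measure_by_cost_sweep(eta, y, costs=np.linspace(0.01, 0.99, 99)):
    """Reduce F-measure maximization to a sweep of cost-sensitive thresholds."""
    eta, y = np.asarray(eta, float), np.asarray(y, int)
    scored = [(f1(y, (eta > c).astype(int)), c) for c in costs]
    best_f, best_c = max(scored)
    return best_c, best_f

# toy usage with made-up probability estimates and labels
print(f_measure_by_cost_sweep([0.9, 0.8, 0.35, 0.2], [1, 1, 0, 0]))
```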
Learning Fair Classifiers
- Computer Science
- 2015
This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates this mechanism on three well-known classifiers -- logistic regression, hinge loss and linear and nonlinear support vector machines.
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
- Computer Science, WWW
- 2017
A new notion of unfairness, disparate mistreatment, defined in terms of misclassification rates, is introduced; it is proposed for decision boundary-based classifiers and can be easily incorporated into their formulation as convex-concave constraints.
Three naive Bayes approaches for discrimination-free classification
- Computer Science, Data Mining and Knowledge Discovery
- 2010
Three approaches for making the naive Bayes classifier discrimination-free are presented: modifying the probability of the decision being positive, training one model for every sensitive attribute value and balancing them, and adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization.
Information, Divergence and Risk for Binary Experiments
- Computer Science, J. Mach. Learn. Res.
- 2011
The new viewpoint also illuminates existing algorithms: it provides a new derivation of Support Vector Machines in terms of divergences and relates maximum mean discrepancy to Fisher linear discriminants.