Convergent Algorithms for (Relaxed) Minimax Fairness
@article{Diana2020ConvergentAF,
  title   = {Convergent Algorithms for (Relaxed) Minimax Fairness},
  author  = {Emily Diana and Wesley Gill and Michael Kearns and Krishnaram Kenthapadi and Aaron Roth},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2011.03108}
}
We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard $\textit{difference}$ between group outcomes. In this framework we provide provably convergent $\textit{oracle-efficient}$ learning algorithms (or equivalently, reductions to non-fair learning) for $\textit{minimax group fairness}$. Here the goal is to minimize the maximum loss across all groups, rather than to equalize group losses. Our…
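At a high level, this kind of reduction can be pictured as a two-player game: a learner repeatedly best-responds to a set of group weights using an ordinary (non-fair) learning oracle, while a regulator shifts weight multiplicatively toward the groups currently suffering the highest loss, and the final predictor is the mixture over rounds. The sketch below is a minimal illustration of that dynamic, not the paper's exact algorithm; the scikit-learn `LogisticRegression`, the `minimax_fair` name, and the numpy-array inputs are all assumptions.

```python
# Minimal sketch of minimax group fairness via no-regret dynamics
# (illustrative only): a regulator reweights groups multiplicatively,
# a learner best-responds with a sample-weighted oracle.
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimax_fair(X, y, groups, rounds=50, eta=1.0):
    group_ids = list(np.unique(groups))
    idx = {g: i for i, g in enumerate(group_ids)}
    w = np.ones(len(group_ids)) / len(group_ids)   # regulator's weights over groups
    models, group_losses = [], []
    for _ in range(rounds):
        # Learner: weighted ERM oracle (LogisticRegression stands in here).
        sample_w = np.array([w[idx[g]] for g in groups])
        clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        models.append(clf)
        # Per-group 0/1 loss of this round's model.
        errs = np.array([np.mean(clf.predict(X[groups == g]) != y[groups == g])
                         for g in group_ids])
        group_losses.append(errs)
        # Regulator: shift weight toward the worst-off groups.
        w = w * np.exp(eta * errs)
        w = w / w.sum()
    # The output is the uniform mixture over rounds; its per-group losses
    # are the averages of the per-round losses.
    avg = np.mean(group_losses, axis=0)
    return models, dict(zip(group_ids, avg))
```

As the number of rounds grows, the mixture's worst group loss approaches the minimax value; the convergence guarantee proved in the paper is what turns this kind of heuristic loop into an algorithm with provable guarantees.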
10 Citations
Lexicographically Fair Learning: Algorithms and Generalization
- Computer Science, FORC
- 2021
A notion of approximate lexifairness is given that avoids the instability of exact lexicographic fairness, and oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting are derived, which are provably efficient even in the worst case.
Blind Pareto Fairness and Subgroup Robustness
- Computer Science, ICML
- 2021
The proposed Blind Pareto Fairness (BPF) is a method that leverages no-regret dynamics to recover a fair minimax classifier that reduces the worst-case risk of any potential subgroup of sufficient size, and guarantees that the remaining population receives the best possible level of service.
Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning
- Computer Science, KDD
- 2021
This paper proposes a new set of metrics to better capture the multi-dimensional Pareto frontier of fairness-accuracy trade-offs uniquely presented in a multi-task learning setting, and proposes a Multi-Task-Aware Fairness (MTA-F) approach to improve fairness in multi-task learning.
Fairness Measures for Machine Learning in Finance
- Computer Science, The Journal of Financial Data Science
- 2021
A pipeline for fairness-aware machine learning (FAML) in finance is presented, together with a range of fairness (and accuracy) metrics for machine learning applications in finance, applicable both pre-training and post-training of models.
Minimax Demographic Group Fairness in Federated Learning
- Computer Science
- 2022
This work provides an optimization algorithm, FedMinMax, for solving the proposed problem; it provably enjoys the performance guarantees of centralized learning algorithms even in federated scenarios where different participating entities may only have access to a subset of the population groups during the training phase.
Technical Challenges for Training Fair Neural Networks
- Computer Science, ArXiv
- 2021
It is observed that large neural network models overfit to fairness objectives and produce a range of unintended and undesirable consequences.
Distributionally Robust Data Join
- Computer Science
- 2022
This work introduces the problem of building a predictor that minimizes the maximum loss over all probability distributions over the original features, auxiliary features, and binary labels whose Wasserstein distance is at most r1 from the empirical distribution of the labeled dataset and at most r2 from that of the unlabeled dataset.
Comparing Human and Machine Bias in Face Recognition
- Computer Science, ArXiv
- 2021
Improvements to the LFW and CelebA datasets are released, enabling future researchers to obtain measurements of algorithmic bias that are not tainted by major flaws in the underlying data.
Robustness Disparities in Commercial Face Detection
- Computer Science, ArXiv
- 2021
This work presents a first-of-its-kind detailed benchmark of the robustness of two commercial facial detection and analysis systems, Amazon Rekognition and Microsoft Azure, under natural noise perturbations.
Towards Unbiased and Accurate Deferral to Multiple Experts
- Computer Science, AIES
- 2021
This work proposes a framework that simultaneously learns a classifier and a deferral system, with the deferral system choosing to defer to one or more human experts on inputs where the classifier has low confidence.
References
Fair Regression: Quantitative Definitions and Reduction-based Algorithms
- Computer Science, ICML
- 2019
This paper studies the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race, and proposes general schemes for fair regression under two notions of fairness.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
- Computer Science, ICML
- 2018
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
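One way to picture the auditing-as-learning equivalence described here: train a simple model to predict where the deployed classifier's false positives deviate from the base rate as a function of protected attributes; any predictable structure corresponds to a candidate violating subgroup. The sketch below is a hedged illustration under assumed names (a shallow `DecisionTreeRegressor` plays the role of the agnostic learner), not the paper's reduction.

```python
# Hedged illustration: audit a trained classifier's false-positive rates
# over subgroups defined by protected attributes Z (assumed numpy arrays).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def audit_false_positives(Z, y_true, y_pred, max_depth=2):
    neg = (y_true == 0)                        # false positives live on true negatives
    fp = (y_pred[neg] == 1).astype(float)
    residual = fp - fp.mean()                  # deviation from the overall FP rate
    # A shallow tree plays the role of the (weak agnostic) learner: any leaf
    # with a large, frequent mean residual is a candidate violating subgroup.
    auditor = DecisionTreeRegressor(max_depth=max_depth).fit(Z[neg], residual)
    leaves = auditor.apply(Z[neg])
    worst = max(np.unique(leaves),
                key=lambda l: abs(residual[leaves == l].mean()) * np.mean(leaves == l))
    in_worst = leaves == worst
    return {"subgroup_fraction": float(np.mean(in_worst)),
            "fp_rate_gap": float(residual[in_worst].mean())}
```

The score maximized here (disparity weighted by subgroup size) mirrors the quantity the auditing problem targets; the hardness result in the entry above says that doing this optimally over rich subgroup classes is as hard as agnostic learning.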
An Empirical Study of Rich Subgroup Fairness for Machine Learning
- Computer Science, FAT
- 2019
In general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained at mild cost to accuracy, and optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness.
Minimax Pareto Fairness: A Multi Objective Perspective
- Computer Science, ICML
- 2020
This work proposes a fairness criterion under which a classifier achieves minimax risk and is Pareto-efficient with respect to all groups, avoiding unnecessary harm; it can lead to the best zero-gap model if policy dictates so, and a simple optimization algorithm compatible with deep neural networks is provided to satisfy these constraints.
A Reductions Approach to Fair Classification
- Computer Science, ICML
- 2018
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
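A concrete way to read this reduction is that an outer (constraint) player supplies Lagrange-multiplier penalties, and each inner step folds those penalties into per-example costs that any off-the-shelf learner can consume via relabeling and sample weights. The sketch below illustrates one such inner step with an illustrative demographic-parity-style penalty and assumed names; it is not the paper's exact construction.

```python
# Illustrative inner step of a reductions-style approach: fold a fairness
# penalty (Lagrange multipliers `lam`, one per group) into per-example
# costs, then solve the cost-sensitive problem with a weighted classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cost_sensitive_step(X, y, groups, lam):
    # Cost of predicting 1 / predicting 0 on each example: 0/1 error ...
    c1 = (y == 0).astype(float)
    c0 = (y == 1).astype(float)
    # ... plus an illustrative demographic-parity-style penalty that shifts
    # the cost of a positive prediction for members of each group.
    for g, penalty in lam.items():
        c1[groups == g] += penalty
    # Standard cost-sensitive -> weighted-classification trick: relabel each
    # example with its cheaper prediction and weight it by the cost gap.
    relabel = (c0 > c1).astype(int)
    weight = np.abs(c0 - c1)
    return LogisticRegression(max_iter=1000).fit(X, relabel, sample_weight=weight)
```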
Fairness without Harm: Decoupled Classifiers with Preference Guarantees
- Computer Science, ICML
- 2019
It is argued that when treatment disparity of this kind (training decoupled classifiers for different groups) is used, it should be in the best interest of each group, and a recursive procedure is introduced that adaptively selects group attributes for decoupling to ensure preference guarantees in terms of generalization error.
Round-Robin Scheduling for Max-Min Fairness in Data Networks
- Business, IEEE J. Sel. Areas Commun.
- 1991
The results suggest that the transmission capacity not used by the small-window session will be approximately fairly divided among the large-window sessions, and the worst-case performance of round-robin scheduling with windows is shown to approach limits that are perfectly fair in the max-min sense.
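The max-min fair allocation that round-robin scheduling approximates can be computed directly by progressive filling: repeatedly split the remaining capacity equally among the sessions that still want more. A small sketch (the function name, capacities, and demands are hypothetical):

```python
# Progressive-filling (water-filling) computation of the max-min fair
# allocation of a shared link capacity among sessions with demand caps.
def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))        # sessions that still want more
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)      # equal increment this round
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] <= 1e-12:
                active.remove(i)             # session is saturated
    return alloc

# Example: a 10 Mb/s link shared by sessions wanting 2, 3 and 8 Mb/s
# yields allocations 2, 3 and 5: no session can be given more without
# reducing the share of a session with an equal or smaller allocation.
print(max_min_fair(10.0, [2.0, 3.0, 8.0]))   # [2.0, 3.0, 5.0]
```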
A decision-theoretic generalization of on-line learning and an application to boosting
- Computer Science, EuroCOLT
- 1995
The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update rule of Littlestone and Warmuth can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.
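The multiplicative weight-update (Hedge) rule referenced here is the same machinery that drives the group-weight player in the fairness reductions above. A compact, self-contained sketch (the `loss_fn` interface is an assumption):

```python
# Hedge / multiplicative weights over K actions: weights shrink
# exponentially in accumulated loss, giving vanishing average regret.
import numpy as np

def hedge(loss_fn, K, rounds, eta=0.1):
    w = np.ones(K)
    total_loss = 0.0
    for t in range(rounds):
        p = w / w.sum()                  # play the normalized weights
        losses = loss_fn(t)              # adversary reveals a loss in [0, 1] per action
        total_loss += p @ losses
        w *= np.exp(-eta * losses)       # penalize lossy actions multiplicatively
    return total_loss, w / w.sum()
```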
The Price of Fair PCA: One Extra Dimension
- Computer Science, NeurIPS
- 2018
The notion of Fair PCA is defined, and a polynomial-time algorithm is given for finding a low-dimensional representation of the data that is nearly optimal with respect to this measure.
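Fair PCA replaces the pooled reconstruction objective with a per-group one (for example, the maximum group error). A quick diagnostic, under assumed names, is to run ordinary PCA and compare how much reconstruction error each group incurs; a fair projection would then aim to minimize the largest of these:

```python
# Diagnostic sketch: per-group reconstruction error of ordinary PCA.
# Fair PCA replaces the pooled objective with a per-group (e.g. max) one.
import numpy as np

def group_reconstruction_errors(X, groups, d):
    Xc = X - X.mean(axis=0)
    # Top-d principal directions via SVD of the pooled, centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:d].T @ Vt[:d]                      # rank-d projection matrix
    errs = {}
    for g in np.unique(groups):
        Xg = Xc[groups == g]
        errs[g] = np.mean(np.sum((Xg - Xg @ P) ** 2, axis=1))
    return errs            # a fair projection would minimize max(errs.values())
```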
Online Convex Programming and Generalized Infinitesimal Gradient Ascent
- Computer Science, ICML
- 2003
An algorithm for online convex programming is introduced and shown to be a generalization of infinitesimal gradient ascent, and the results imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
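Generalized infinitesimal gradient ascent amounts to taking a gradient step on each newly revealed convex loss and projecting back to the feasible set; it is the other natural no-regret choice for the weight player in the reductions above. A minimal sketch over a box constraint (the `grad_fn` interface and the box are assumptions):

```python
# Online (projected) gradient descent: at each round take a gradient
# step on the newly revealed convex loss and project back to the
# feasible set (here a simple box, via clipping).
import numpy as np

def online_gradient_descent(grad_fn, dim, rounds, lo=-1.0, hi=1.0):
    x = np.zeros(dim)
    history = []
    for t in range(1, rounds + 1):
        history.append(x.copy())
        g = grad_fn(t, x)                # gradient of round-t loss at the current point
        eta = 1.0 / np.sqrt(t)           # standard O(1/sqrt(t)) step size
        x = np.clip(x - eta * g, lo, hi) # gradient step + projection
    return history                       # average regret vanishes as O(1/sqrt(T))
```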