# Lexicographically Fair Learning: Algorithms and Generalization

```bibtex
@inproceedings{Diana2021LexicographicallyFL,
  title={Lexicographically Fair Learning: Algorithms and Generalization},
  author={Emily Diana and Wesley Gill and Ira Globus-Harris and Michael Kearns and Aaron Roth and Saeed Sharifi-Malvajerdi},
  booktitle={FORC},
  year={2021}
}
```
• Published in FORC, 16 February 2021
• Computer Science
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short). Informally, given a collection of demographic groups of interest, minimax fairness asks that the error of the group with the highest error be minimized. Lexifairness goes further and asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized, and amongst all of those solutions…
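As a concrete illustration of the ordering the abstract describes, the sketch below (function names are our own, not from the paper) sorts each model's group-error vector in descending order and compares the sorted vectors lexicographically; lexifairness prefers the lexicographically smaller vector, so ties on the worst group are broken by the second-worst group, and so on.

```python
def lex_key(group_errors):
    """Sort group errors in descending order; lexifairness compares
    these sorted vectors lexicographically (smaller is fairer)."""
    return sorted(group_errors, reverse=True)

def lexifair_preferred(errors_a, errors_b):
    """True if model A is (weakly) preferred to model B under the
    lexicographic minimax ordering on sorted group-error vectors."""
    return lex_key(errors_a) <= lex_key(errors_b)

# Both models share the same worst-group error (0.30), but A's
# second-highest group error is lower, so A is lexifair-preferred.
a = [0.30, 0.10, 0.05]
b = [0.10, 0.30, 0.20]
print(lexifair_preferred(a, b))  # True
```

Note that the comparison is on the *sorted* error vectors, not on which particular group attains each error level; a minimax-fair solution is exactly one that is optimal in the first coordinate of this key.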
## Citations (4)
Beyond the Frontier: Fairness Without Accuracy Loss
• arXiv, 2022
Develops a simple algorithmic framework that allows models to be deployed and then revised dynamically when groups with suboptimal error rates are discovered; the result is provably fast convergence to a model that cannot be distinguished from the Bayes optimal predictor, at least by the party tasked with finding high-error groups.
Multiaccurate Proxies for Downstream Fairness
• arXiv, 2021
Adopts a fairness-pipeline perspective and shows that obeying multiaccuracy constraints with respect to the downstream model class suffices for this purpose, providing sample- and oracle-efficient algorithms and generalization bounds for learning such proxies.
Achieving Downstream Fairness with Geometric Repair
• arXiv, 2022
Presents a technique that specifically addresses the setting where a protected attribute takes on multiple values, by post-processing a regressor's scores such that they yield fair classifications for any downstream choice of decision threshold.
An Algorithmic Framework for Bias Bounties
• 2022
Proposes an algorithmic framework for "bias bounties": events in which external participants are invited to propose improvements to a trained model, akin to bug bounty events in software and security. Participants may submit arbitrary subgroup improvements, which are algorithmically incorporated into an updated model.
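One simple way to read "algorithmically incorporated into an updated model" is a pointwise patch: route inputs in the discovered subgroup to the proposed improvement and keep the incumbent model everywhere else. A minimal sketch under that assumption (all names here are illustrative, not from the cited paper):

```python
def patch_model(model, group_indicator, group_model):
    """Pointwise update: use the proposed improvement on inputs
    flagged by the discovered subgroup, the old model elsewhere."""
    def patched(x):
        return group_model(x) if group_indicator(x) else model(x)
    return patched

# Toy example: the base model always predicts 0; a participant finds
# that predicting 1 on negative inputs reduces error on that subgroup.
base = lambda x: 0
patched = patch_model(base, lambda x: x < 0, lambda x: 1)
print(patched(-3), patched(5))  # 1 0
```

Because each patch only changes behavior on the flagged subgroup, accepted updates can only lower the error on that subgroup while leaving predictions elsewhere untouched.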

## References

Showing 1–10 of 38 references
Convergent Algorithms for (Relaxed) Minimax Fairness
• arXiv, 2020
Provides provably convergent *oracle-efficient* learning algorithms (or equivalently, reductions to non-fair learning) for minimax group fairness, with the goal of minimizing the maximum loss across all groups rather than equalizing group losses.
Average Individual Fairness: Algorithms, Generalization and Experiments
• NeurIPS, 2019
Designs an oracle-efficient algorithm for the fair empirical risk minimization task and shows that, given sufficiently many samples, the ERM solution generalizes in two directions: both to new individuals and to new classification tasks, drawn from their corresponding distributions.
An Empirical Study of Rich Subgroup Fairness for Machine Learning
• FAT, 2019
Finds that the Kearns et al. algorithm generally converges quickly, that large gains in fairness can be obtained at mild cost to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
• ICML, 2018
Proves that the computational problem of auditing subgroup fairness, for both equality of false positive rates and statistical parity, is equivalent to the problem of weak agnostic learning; auditing is therefore computationally hard in the worst case, even for simple structured subclasses.
Minimax Pareto Fairness: A Multi Objective Perspective
• ICML, 2020
Proposes a fairness criterion under which a classifier achieves minimax risk and is Pareto-efficient with respect to all groups, avoiding unnecessary harm; this can lead to the best zero-gap model if policy dictates so, and a simple optimization algorithm compatible with deep neural networks is provided to satisfy these constraints.
Upward Max Min Fairness
• IEEE INFOCOM, 2012
Introduces Upward Max-Min Fairness, a novel relaxation of Max-Min Fairness, together with a family of simple dynamics that converge to it; also presents an efficient combinatorial algorithm for finding an upward max-min fair allocation, a natural extension of the well-known Water Filling Algorithm to the multiple-path setting.
Fairness without Harm: Decoupled Classifiers with Preference Guarantees
• ICML, 2019
Argues that when there is this kind of treatment disparity, it should be in the best interest of each group, and introduces a recursive procedure that adaptively selects group attributes for decoupling to ensure preference guarantees in terms of generalization error.
A Reductions Approach to Fair Classification
• ICML, 2018
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
Two-Player Games for Efficient Non-Convex Constrained Optimization
• ALT, 2019
Proves that solutions to this proxy-Lagrangian formulation, instead of having unbounded size, can be taken to be distributions over no more than m+1 models (where m is the number of constraints), a significant improvement in practical terms.
Efficient Algorithms for Online Decision Problems
• COLT, 2003
Shows that a very simple idea, used in Hannan's seminal 1957 paper, gives efficient solutions to all of these problems, including a (1+ε)-competitive algorithm as well as a lazy one that rarely switches between decisions.
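Hannan's perturbation idea can be sketched in a few lines: add random noise to each decision's cumulative loss, then follow the (perturbed) leader. The sketch below is a toy illustration with a uniform perturbation on [0, 1/ε], not the paper's exact construction; all names are our own.

```python
import random

def fpl_choice(cumulative_losses, epsilon, rng=random):
    """Follow the Perturbed Leader: subtract a random perturbation,
    drawn uniformly from [0, 1/epsilon], from each decision's
    cumulative loss, then play the argmin (the perturbed leader)."""
    perturbed = [loss - rng.uniform(0.0, 1.0 / epsilon)
                 for loss in cumulative_losses]
    return min(range(len(perturbed)), key=perturbed.__getitem__)

# Online loop: three decisions, loss vectors revealed after each play.
random.seed(0)
totals = [0.0, 0.0, 0.0]
for round_losses in [(1, 0, 1), (1, 0, 1), (0, 1, 1)]:
    i = fpl_choice(totals, epsilon=0.5)                      # play
    totals = [t + l for t, l in zip(totals, round_losses)]   # observe
```

Smaller ε means larger perturbations, which stabilizes the choice across rounds at the cost of tracking the true leader less closely; the "lazy" variant in the paper reuses one perturbation to switch decisions rarely.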