Corpus ID: 220683906

Minimax Pareto Fairness: A Multi Objective Perspective

@article{Martnez2020MinimaxPF,
  title={Minimax Pareto Fairness: A Multi Objective Perspective},
  author={Natalia Mart{\'i}nez and Mart{\'i}n Bertr{\'a}n and Guillermo Sapiro},
  journal={Proceedings of Machine Learning Research},
  year={2020},
  volume={119},
  pages={6755--6764}
}
In this work we formulate and formally characterize group fairness as a multi-objective optimization problem, where each sensitive group risk is a separate objective. We propose a fairness criterion where a classifier achieves minimax risk and is Pareto-efficient w.r.t. all groups, avoiding unnecessary harm, and can lead to the best zero-gap model if policy dictates so. We provide a simple optimization algorithm compatible with deep neural networks to satisfy these constraints. Since our method… 
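In plain terms, the criterion selects, among Pareto-efficient classifiers, one that minimizes the largest group risk, i.e. h* in argmin_h max_a r_a(h), with r_a the risk of sensitive group a. Below is a minimal sketch of this style of training for a deep classifier, assuming PyTorch; the helper minimax_fair_step and its multiplicative-weights update are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of minimax group-risk training: estimate per-group risks,
# shift weight toward the worst-off group, then take a weighted
# gradient step. Illustrative only; not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def minimax_fair_step(model, optimizer, batches_by_group, weights, eta=0.1):
    """One update on a dict mapping group id -> (x, y) batch tensors.
    `weights` is a tensor on the probability simplex over groups."""
    # Per-group empirical risks on the current batches.
    risks = torch.stack([
        F.cross_entropy(model(x), y) for (x, y) in batches_by_group.values()
    ])

    # Multiplicative-weights step: higher-risk groups gain weight,
    # so the objective tracks the maximum group risk over time.
    with torch.no_grad():
        weights = weights * torch.exp(eta * risks.detach())
        weights = weights / weights.sum()

    # Descend on the weighted risk; minimizing a convex combination of
    # group risks with positive weights yields a Pareto-efficient
    # solution at the optimum, avoiding unnecessary harm to any group.
    loss = (weights * risks).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return weights, risks.detach()
```

Repeating this step lets the group weights chase the worst-off risk, while the weighted objective keeps each update a trade-off among group risks rather than a sacrifice of any single group.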

Citations

Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing
TLDR
This paper proposes Pareto Efficient Fairness (PEF) as a suitable fairness notion for supervised learning that can ensure the optimal trade-off between overall loss and other fairness criteria, and empirically demonstrates the effectiveness of the PEF solution and the extracted Pareto frontier on real-world datasets.
Multi-Fair Pareto Boosting
TLDR
A new fairness notion is introduced, Multi-Max Mistreatment (MMM), which measures unfairness while considering both (multi-attribute) protected group and class membership of instances, and a multi-objective problem formulation is proposed to learn an MMM-fair classifier.
Blind Pareto Fairness and Subgroup Robustness
TLDR
The proposed Blind Pareto Fairness (BPF) is a method that leverages no-regret dynamics to recover a fair minimax classifier that reduces worst-case risk of any potential subgroup of sufficient size, and guarantees that the remaining population receives the best possible level of service.
The Sharpe predictor for fairness in machine learning
TLDR
A new paradigm for fair machine learning (FML) based on Stochastic Multi-Objective Optimization (SMOO) is introduced, where accuracy and fairness metrics stand as conflicting objectives to be optimized simultaneously; this allows defining and computing new meaningful predictors for FML.
Minimax Group Fairness: Algorithms and Experiments
TLDR
This framework provides provably convergent oracle-efficient learning algorithms (or equivalently, reductions to non-fair learning) for minimax group fairness and shows empirical cases in which minimax fairness is strictly and strongly preferable to equal outcome notions.
Convergent Algorithms for (Relaxed) Minimax Fairness
TLDR
This framework provides provably convergent oracle-efficient learning algorithms (or equivalently, reductions to non-fair learning) for minimax group fairness, with the goal of minimizing the maximum loss across all groups, rather than equalizing group losses.
To the Fairness Frontier and Beyond: Identifying, Quantifying, and Optimizing the Fairness-Accuracy Pareto Frontier
TLDR
This paper identifies and outlines the empirical Pareto frontier of the fairness-accuracy trade-off, and develops a novel fair model stacking framework, FairStacks, which leads to major improvements in fauc, outperforming existing algorithmic fairness approaches.
Towards Fairness-Aware Multi-Objective Optimization
TLDR
This paper starts with a discussion of user preferences in multi-objective optimization, then explores their relationship to fairness in machine learning and multi-objective optimization, and further elaborates the importance of fairness in traditional multi-objective optimization, data-driven optimization, and federated optimization.
Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning
TLDR
This paper proposes a new set of metrics to better capture the multi-dimensional Pareto frontier of fairness-accuracy trade-offs unique to the multi-task learning setting, and proposes a Multi-Task-Aware Fairness (MTA-F) approach to improve fairness in multi-task learning.
Multi-fairness under class-imbalance
TLDR
A new fairness measure is introduced, Multi-Max Mistreatment (MMM), which considers both (multi-attribute) protected group and class membership of instances to measure discrimination, and Multi-Fair Boosting Post Pareto (MFBPP) is proposed, a boosting approach that incorporates MMM-costs in the distribution update and, post-training, selects the optimal trade-off among accurate, class-balanced, and fair solutions.
...

References

SHOWING 1-10 OF 38 REFERENCES
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
Rawlsian Fairness for Machine Learning
TLDR
This work studies a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity", and gives an algorithm that satisfies this fairness constraint, while still being able to learn at a rate comparable to (but necessarily worse than) that of the best algorithms absent a fairness constraint.
A Reductions Approach to Fair Classification
TLDR
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
Fairness without Harm: Decoupled Classifiers with Preference Guarantees
TLDR
It is argued that when this kind of treatment disparity exists, it should be in the best interest of each group, and a recursive procedure is introduced that adaptively selects group attributes for decoupling to ensure preference guarantees in terms of generalization error.
Taking Advantage of Multitask Learning for Fair Classification
TLDR
This paper proposes to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information between sensitive groups, and proposes a three-pronged approach to tackle fairness: increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing.
Fairness Without Demographics in Repeated Loss Minimization
TLDR
This paper develops an approach based on distributionally robust optimization (DRO), which minimizes the worst-case risk over all distributions close to the empirical distribution, and proves that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice (a rough sketch of the DRO objective follows the reference list).
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
TLDR
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
Robust Optimization for Non-Convex Objectives
TLDR
A reduction from robust improper optimization to Bayesian optimization is developed: given an oracle that returns α-approximate solutions for distributions over objectives, it is shown that de-randomizing this solution is NP-hard in general, but can be done for a broad class of statistical learning tasks.
Fairness-Aware Classifier with Prejudice Remover Regularizer
TLDR
A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
...
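As noted in the "Fairness Without Demographics in Repeated Loss Minimization" entry above, the worst-case risk in that DRO approach admits a tractable dual. The following is a minimal sketch, assuming the chi-squared-ball formulation used in that line of work; the helper chi2_dro_loss and the grid search over the dual variable are illustrative simplifications, not the paper's implementation.

```python
import torch

def chi2_dro_loss(losses: torch.Tensor, radius_c: float,
                  num_eta: int = 200) -> torch.Tensor:
    """Sketch of the dual form of chi-squared DRO:
        sup_{Q near P} E_Q[loss] = min_eta C * sqrt(E_P[(loss - eta)_+^2]) + eta,
    which automatically up-weights high-loss examples (e.g., a hidden
    minority group) without needing group labels. The inner minimization
    over eta is approximated here by a coarse grid search.
    """
    lo, hi = losses.min().item(), losses.max().item()
    candidates = torch.linspace(lo, hi, num_eta)
    vals = torch.stack([
        radius_c * torch.sqrt(torch.relu(losses - eta).pow(2).mean() + 1e-12) + eta
        for eta in candidates
    ])
    # Differentiable w.r.t. `losses` through the minimizing eta.
    return vals.min()
```

In a training loop, this surrogate would replace the mean batch loss, e.g. loss = chi2_dro_loss(per_example_losses, radius_c=2.0), so that gradient steps focus on the hardest examples regardless of group annotations.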