Promoting Fairness through Hyperparameter Optimization

@article{Cruz2021PromotingFT,
  title={Promoting Fairness through Hyperparameter Optimization},
  author={Andre Ferreira Cruz and Pedro Saleiro and Catarina Belém and Carlos Soares and P. Bizarro},
  journal={2021 IEEE International Conference on Data Mining (ICDM)},
  year={2021},
  pages={1036-1041}
}
Considerable research effort has been guided towards algorithmic fairness, but real-world adoption of bias reduction techniques is still scarce. Existing methods are either metric- or model-specific, require access to sensitive attributes at inference time, or carry high development or deployment costs. This work explores the unfairness that emerges when optimizing ML models solely for predictive performance, and how to mitigate it with a simple and easily deployed intervention: fairness-aware hyperparameter optimization.
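As a rough illustration of that intervention, the sketch below runs a plain random search over model hyperparameters but scores each candidate with a weighted blend of validation accuracy and a demographic-parity ratio instead of accuracy alone. The blend weight alpha, the fairness metric, the model family, and the toy data are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data with a binary sensitive attribute (illustrative only).
X = rng.normal(size=(2000, 5))
s = rng.integers(0, 2, size=2000)                  # sensitive group
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_val, y_tr, y_val, s_tr, s_val = train_test_split(
    X, y, s, test_size=0.3, random_state=0)

def demographic_parity_ratio(y_pred, group):
    """Ratio of positive-prediction rates between groups (1.0 = parity)."""
    r0 = y_pred[group == 0].mean()
    r1 = y_pred[group == 1].mean()
    lo, hi = min(r0, r1), max(r0, r1)
    return lo / hi if hi > 0 else 1.0

alpha = 0.5  # weight on predictive performance vs. fairness (assumption)
best_score, best_cfg = -np.inf, None

for _ in range(30):  # plain random search over hyperparameters
    cfg = {
        "n_estimators": int(rng.integers(50, 300)),
        "max_depth": int(rng.integers(2, 12)),
        "min_samples_leaf": int(rng.integers(1, 20)),
    }
    model = RandomForestClassifier(random_state=0, **cfg).fit(X_tr, y_tr)
    y_hat = model.predict(X_val)
    acc = (y_hat == y_val).mean()
    fair = demographic_parity_ratio(y_hat, s_val)
    score = alpha * acc + (1 - alpha) * fair   # fairness-aware selection criterion
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg, best_score)
```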


Fair AutoML
TLDR
An end-to-end automated machine learning system that finds models that are not only accurate but also fair, including a strategy to dynamically decide when, and on which models, to apply unfairness mitigation based on prediction accuracy, fairness, and resource consumption on the fly.

References

SHOWING 1-10 OF 57 REFERENCES
A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization
TLDR
It is shown that Fairband can efficiently navigate the fairness-accuracy trade-off through hyperparameter optimization, and consistently finds configurations attaining substantially improved fairness at a comparatively small decrease in predictive accuracy.
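Navigating the fairness-accuracy trade-off ultimately means comparing configurations on two objectives at once. A minimal sketch of extracting the Pareto-efficient configurations from a pool of evaluated candidates follows; the (name, accuracy, fairness) tuple format and the toy values are assumptions, and this is not Fairband's actual bandit-based budget allocation.

```python
def pareto_front(configs):
    """Return configurations not dominated on both accuracy and fairness.

    `configs` is a list of (name, accuracy, fairness) tuples where larger
    values are better for both metrics (an assumed, simplified format).
    """
    front = []
    for name, acc, fair in configs:
        dominated = any(a >= acc and f >= fair and (a > acc or f > fair)
                        for _, a, f in configs)
        if not dominated:
            front.append((name, acc, fair))
    return front

evaluated = [("cfg_a", 0.91, 0.62), ("cfg_b", 0.88, 0.81),
             ("cfg_c", 0.85, 0.80), ("cfg_d", 0.83, 0.93)]
print(pareto_front(evaluated))   # cfg_c is dominated by cfg_b
```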
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
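The (un)fairness measure this reference leverages is, roughly, the covariance between the sensitive attribute and the signed distance of each example to the decision boundary. Below is a hedged sketch of a logistic regression with that covariance added as a penalty term; the paper itself uses a constrained rather than penalized formulation, and the penalty weight and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
s = rng.integers(0, 2, size=1000)                       # sensitive attribute
y = (X[:, 0] + 0.8 * s > 0).astype(int)

def objective(theta, lam=5.0):
    """Logistic loss plus a penalty on |cov(s, theta^T x)| (penalized variant;
    the original work bounds the covariance as a hard constraint)."""
    z = X @ theta
    log_loss = np.mean(np.log1p(np.exp(-(2 * y - 1) * z)))
    cov = np.mean((s - s.mean()) * z)                   # boundary covariance
    return log_loss + lam * abs(cov)

theta0 = np.zeros(X.shape[1])
res = minimize(objective, theta0, method="Nelder-Mead")
print("fitted weights:", res.x)
```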
A Confidence-Based Approach for Balancing Fairness and Accuracy
TLDR
A new measure of fairness, called resilience to random bias (RRB), is proposed and shown to distinguish well between naive and sensible fairness algorithms; together with bias and accuracy, it provides a more complete picture of an algorithm's fairness.
Algorithmic Decision Making and the Cost of Fairness
TLDR
This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the resulting trade-offs apply not only to algorithms but also to human decision makers carrying out structured decision rules.
Dealing with Bias and Fairness in Data Science Systems: A Practical Hands-on Tutorial
TLDR
This hands-on tutorial tries to bridge the gap between research and practice, by deep diving into algorithmic fairness, from metrics and definitions to practical case studies, including bias audits using the Aequitas toolkit (http://github.com/dssg/aequitas).
Aequitas: A Bias and Fairness Audit Toolkit
TLDR
Aequitas is an open source bias and fairness audit toolkit that is an intuitive and easy-to-use addition to the machine learning workflow, enabling users to seamlessly test models for several bias and fairness metrics in relation to multiple population sub-groups.
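For a sense of the group-level metrics such an audit reports, here is a minimal pandas sketch computing a per-group false positive rate and its disparity relative to a reference group. This is not the Aequitas API itself; the column names and the choice of reference group are assumptions.

```python
import pandas as pd

# Toy predictions: one row per individual (column names are assumptions).
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "a"],
    "label": [0, 1, 0, 0, 1, 0, 0, 1],
    "score": [1, 1, 0, 1, 1, 1, 0, 0],   # binarized model decision
})

def group_fpr(g):
    """False positive rate within one group."""
    negatives = g[g["label"] == 0]
    return (negatives["score"] == 1).mean()

fpr = df.groupby("group").apply(group_fpr)
reference = "a"                           # reference group (assumption)
disparity = fpr / fpr[reference]          # FPR disparity vs. reference
print(pd.DataFrame({"fpr": fpr, "fpr_disparity": disparity}))
```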
A Reductions Approach to Fair Classification
TLDR
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
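The fairlearn library provides an ExponentiatedGradient reduction along these lines, which repeatedly re-fits an ordinary cost-sensitive base learner on re-weighted data until the fairness constraint is approximately met. A minimal usage sketch on toy data follows; the data and the choice of a demographic-parity constraint are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
s = rng.integers(0, 2, size=1000)                 # sensitive attribute
y = (X[:, 0] + 0.7 * s + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Wrap an ordinary learner; the reduction handles the fairness constraint.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=s)
y_hat = mitigator.predict(X)
print("positive rate by group:",
      y_hat[s == 0].mean(), y_hat[s == 1].mean())
```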
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
Censoring Representations with an Adversary
TLDR
This work formulates the adversarial model as a minimax problem, optimizes that objective with a stochastic-gradient alternating min-max optimizer, demonstrates the ability to provide discrimination-free representations on standard test problems, and compares with previous state-of-the-art methods for fairness.
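A compressed PyTorch sketch of that alternating min-max training: an encoder and task head are trained to predict the label while an adversary tries to recover the sensitive attribute from the learned representation, and the encoder is penalized when the adversary succeeds. Network sizes, the toy data, the number of steps, and the trade-off weight are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
s = torch.randint(0, 2, (512,)).float()            # sensitive attribute
y = ((X[:, 0] + 0.5 * s) > 0).float()              # task label (toy)

enc = nn.Sequential(nn.Linear(10, 16), nn.ReLU())  # representation
pred = nn.Linear(16, 1)                            # task head
adv = nn.Linear(16, 1)                             # adversary guesses s

bce = nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(pred.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-2)

for step in range(200):
    # (1) adversary step: improve its ability to recover s from the representation
    z = enc(X).detach()
    adv_loss = bce(adv(z).squeeze(1), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # (2) main step: predict y well while making s hard to recover
    z = enc(X)
    task_loss = bce(pred(z).squeeze(1), y)
    censor_loss = bce(adv(z).squeeze(1), s)
    main_loss = task_loss - 1.0 * censor_loss      # minimax objective (weight assumed)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```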
Non-stochastic Best Arm Identification and Hyperparameter Optimization
TLDR
This work casts hyperparameter optimization as an instance of non-stochastic best-arm identification, identifies a known algorithm that is well-suited for this setting, and empirically evaluates its behavior.
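Successive halving is the algorithm this best-arm view identifies as well-suited: start many configurations on a small budget, keep the better half, and double the budget for the survivors. The sketch below illustrates the loop; the stand-in evaluate function and the budget schedule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(config, budget):
    """Stand-in for partially training `config` with `budget` resources and
    returning a validation loss (lower is better). Illustrative only."""
    return config["lr"] ** 2 + 1.0 / budget + rng.normal(scale=0.01)

# Successive halving over 16 "arms" (hyperparameter configurations).
arms = [{"lr": float(lr)} for lr in np.linspace(0.01, 1.0, 16)]
budget = 1
while len(arms) > 1:
    losses = [evaluate(cfg, budget) for cfg in arms]
    order = np.argsort(losses)
    arms = [arms[i] for i in order[: len(arms) // 2]]  # keep the best half
    budget *= 2                                        # more resources for survivors

print("selected configuration:", arms[0])
```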