How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts

Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang
Increasing concerns have been raised about fairness in deep learning in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. In real-world applications, however, it is common to have a distribution shift between the training and test data. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed…


A Survey on Preserving Fairness Guarantees in Changing Environments

This survey proposes a taxonomy of existing approaches to fair classification under distribution shift, highlights benchmarking alternatives, points out relations with similar lines of research, and identifies future avenues of research.

Ensuring Fairness Beyond the Training Data

This work develops classifiers that are fair not only with respect to the training distribution, but also with respect to a class of distributions that are weighted perturbations of the training samples.

Robust Fairness under Covariate Shift

This work investigates fairness under covariate shift, a relaxation of the i.i.d. assumption in which the inputs (covariates) change while the conditional label distribution remains the same. It proposes an approach that obtains a predictor robust to the worst-case test performance while satisfying target fairness requirements and matching statistical properties of the source data.
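Covariate shift means the input density p(x) changes between training and test while p(y | x) is unchanged. The classic correction (a textbook baseline, not necessarily this paper's robust formulation) reweights training losses by the density ratio p_test(x)/p_train(x). A minimal sketch, with hypothetical 1-D Gaussian input distributions standing in for the training and test domains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D setup: training inputs ~ N(0,1), test inputs ~ N(1,1).
x_train = rng.normal(0.0, 1.0, size=1000)

def density(x, mu):
    """Standard-normal density shifted to mean mu (unit variance)."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Importance weights w(x) = p_test(x) / p_train(x) correct the training
# objective so its expectation matches the test distribution.
w = density(x_train, 1.0) / density(x_train, 0.0)

per_example_loss = (x_train - 1.0) ** 2          # stand-in squared loss
plain_risk = per_example_loss.mean()             # biased toward p_train
weighted_risk = np.average(per_example_loss, weights=w)  # estimates test risk

print(plain_risk, weighted_risk)
```

Under these Gaussians the true test risk is 1 while the unweighted training risk is 2, so the reweighted estimate lands near 1 and the plain one near 2.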

Fairness Without Demographics in Repeated Loss Minimization

This paper develops an approach based on distributionally robust optimization (DRO), which minimizes the worst-case risk over all distributions close to the empirical distribution. It proves that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice.
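The worst-case-over-nearby-distributions objective can be illustrated with a conditional value-at-risk (CVaR) surrogate, one standard DRO instance: averaging the loss over the worst α-fraction of examples upper-bounds the risk of any subgroup of probability mass at least α, with no demographic labels required. A minimal sketch, not the paper's exact formulation (`cvar_loss` and the loss values are hypothetical):

```python
import numpy as np

def cvar_loss(losses, alpha=0.2):
    """Average loss over the worst alpha-fraction of examples (CVaR).

    This upper-bounds the risk of any subgroup with probability mass
    >= alpha, so no group annotations are needed."""
    k = max(1, int(np.ceil(alpha * len(losses))))
    worst = np.sort(losses)[-k:]  # the k largest per-example losses
    return float(worst.mean())

per_example = np.array([0.1, 0.2, 0.3, 2.0, 2.5])
print(cvar_loss(per_example, alpha=0.4))  # averages the two largest losses
```

Minimizing this quantity instead of the plain mean forces the model to keep even its hardest α-fraction of examples under control, which is the mechanism behind the minority-risk guarantee described above.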

Fairness Violations and Mitigation under Covariate Shift

This work specifies an approach based on feature selection that exploits conditional independencies in the data to estimate accuracy and fairness metrics for the test set, and shows that for specific fairness definitions the resulting model satisfies a form of worst-case optimality.

Recycling Privileged Learning and Distribution Matching for Fairness

This paper sets an overarching goal: to develop a unified machine learning framework able to handle any definition of fairness, combinations of definitions, and new definitions that might be stipulated in the future. It recycles two well-established machine learning techniques, privileged learning and distribution matching, and harmonizes them to satisfy multi-faceted fairness definitions.

FARF: A Fair and Adaptive Random Forests Classifier

This paper proposes FARF (Fair and Adaptive Random Forests), a flexible ensemble algorithm for fair decision-making in the more challenging context of evolving online settings. FARF accounts for fairness and exposes a single hyperparameter that controls the fairness-accuracy balance.

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
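The "similar individuals are treated similarly" constraint is a Lipschitz condition: the distance between the classifier's output distributions on two individuals must not exceed the task-specific distance between those individuals. A toy check of that condition, where the metric, predictor, and test pairs are all hypothetical (constructing the task-specific metric is itself the hard part the framework assumes given):

```python
import numpy as np

def total_variation(p, q):
    """Distance between two output distributions (vectors of class probs)."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def is_individually_fair(predict, metric, pairs):
    """Lipschitz check: D(predict(x), predict(x2)) <= metric(x, x2)."""
    return all(
        total_variation(predict(x), predict(x2)) <= metric(x, x2) + 1e-9
        for x, x2 in pairs
    )

# Hypothetical ingredients: similarity = absolute distance on a 1-D
# feature, predictor = logistic model (which is 1/4-Lipschitz).
metric = lambda x, x2: abs(x - x2)

def predict(x):
    p = 1.0 / (1.0 + np.exp(-x))
    return [p, 1.0 - p]

pairs = [(0.0, 0.1), (1.0, 1.05), (-2.0, -1.9)]
print(is_individually_fair(predict, metric, pairs))
```

Because the logistic function is 1/4-Lipschitz, the check passes for this metric; a metric that declares two distant individuals identical (distance 0) would make the same predictor fail it.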

Label Bias, Label Shift: Fair Machine Learning with Unreliable Labels

This work builds on research in both fairness and distribution shift to examine the performance of fair machine learning models when label reliability is uncertain and dynamic, focusing on label bias as the bias model and label shift as the mechanism of distribution shift.

Fairness in Deep Learning: A Computational Perspective

It is shown that interpretability can serve as a useful ingredient in diagnosing the causes of algorithmic discrimination in deep learning, and fairness is discussed according to the three stages of the deep learning life-cycle.

Discovering Fair Representations in the Data Domain

This work proposes to cast the problem of interpretability and fairness in computer vision and machine learning applications as data-to-data translation, i.e., learning a mapping from an input domain to a fair target domain in which a fairness definition is enforced.