Transferring Fairness under Distribution Shifts via Fair Consistency Regularization

@article{An2022TransferringFU,
  title={Transferring Fairness under Distribution Shifts via Fair Consistency Regularization},
  author={Bang An and Zora Che and Mucong Ding and Furong Huang},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.12796}
}
The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes identical training and test distributions. In many real-world applications, however, this assumption is often violated: previously trained fair models are deployed in a different environment, and the fairness of such models has been observed to collapse. In…
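The abstract cuts off above, but the core idea named in the title, consistency regularization balanced across sensitive groups, can be sketched. Below is a minimal, hypothetical PyTorch illustration: `augment` is an assumed data-augmentation function, and the max-min gap penalty is an illustrative choice rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def fair_consistency_loss(model, x, group, augment):
    """Consistency of predictions under augmentation, balanced across groups."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)             # targets: clean predictions
    log_p_aug = F.log_softmax(model(augment(x)), dim=1)  # predictions on augmented views
    per_sample = F.kl_div(log_p_aug, p_clean, reduction="none").sum(dim=1)
    # Mean consistency loss within each sensitive group; penalize both the
    # overall level and the worst gap between groups.
    group_losses = torch.stack([per_sample[group == g].mean() for g in group.unique()])
    return group_losses.mean() + (group_losses.max() - group_losses.min())
```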

Improving Fair Training under Correlation Shifts

A novel pre-processing step that samples the input data to reduce correlation shifts, thus enabling in-processing approaches to overcome their limitations; an optimization problem is formulated for adjusting the data ratio among labels and sensitive groups to reflect the shifted correlation.
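To make the idea concrete, here is a hedged sketch of such a pre-processing step: it resamples each (label, group) cell so that the empirical joint distribution matches an assumed `target_joint` reflecting the shifted correlation. The paper formulates this as an optimization problem; the direct cell-wise resampling below is a simplification, and labels and attributes are assumed to be integer-coded.

```python
import numpy as np

def resample_to_target(y, a, target_joint, seed=0):
    """Resample indices so the (label, group) joint matches target_joint.

    y, a: integer-coded labels and sensitive attributes;
    target_joint[y_val, a_val]: desired probability mass of each cell.
    """
    rng = np.random.default_rng(seed)
    n, idx = len(y), []
    for (yv, av), p in np.ndenumerate(target_joint):
        cell = np.flatnonzero((y == yv) & (a == av))  # rows in this (label, group) cell
        k = int(round(p * n))                         # cell size under the target joint
        if k > 0 and len(cell) > 0:
            idx.append(rng.choice(cell, size=k, replace=True))
    return np.concatenate(idx)                        # indices of the resampled data
```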

Fairness Transferability Subject to Bounded Distribution Shift

A framework for bounding violations of statistical fairness subject to distribution shift is developed, formulating a generic upper bound on transferred fairness violations and showing that these bounds can be estimated in practice.

Weight Perturbation Can Help Fairness under Distribution Shift

Robust fairness regularization (RFR) is proposed by considering the worst case within a weight-perturbation ball for each sensitive-attribute group; the inner maximization simplifies to two forward and two backward propagations per update of the model parameters.
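The "two forward and two backward propagations" can be seen in a SAM-style sketch of the per-group worst case; the function name and the exact regularizer composition here are assumptions, not the paper's verbatim algorithm.

```python
import torch

def group_worst_case_loss(model, loss_fn, x_g, y_g, rho=0.05):
    """One group's loss at the worst case of a weight-perturbation ball of radius rho."""
    loss = loss_fn(model(x_g), y_g)                              # first forward pass
    grads = torch.autograd.grad(loss, list(model.parameters()))  # first backward pass
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    with torch.no_grad():                             # ascend to the worst case in the ball
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / (norm + 1e-12))
    worst = loss_fn(model(x_g), y_g)                  # second forward pass
    worst.backward()                                  # second backward pass (grads accumulate)
    with torch.no_grad():                             # restore the original weights
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / (norm + 1e-12))
    return worst.detach()
```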

Retiring $\Delta$DP: New Distribution-Level Metrics for Demographic Parity

Two new fairness metrics are proposed, Area Between Probability density function Curves (ABPC) and Area Between Cumulative density function Curves (ABCC), to precisely measure the violation of demographic parity at the distribution level.
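ABCC in particular has a direct empirical form: the area between the two groups' score CDFs over [0, 1]. A minimal NumPy sketch, assuming binary groups and scores in [0, 1] (ABPC is analogous, with density estimates in place of CDFs):

```python
import numpy as np

def abcc(scores_g0, scores_g1, grid_size=1000):
    """Area Between Cumulative density function Curves of two groups' scores."""
    t = np.linspace(0.0, 1.0, grid_size)
    # Empirical CDF of each group's predicted scores on a common grid.
    cdf0 = np.searchsorted(np.sort(scores_g0), t, side="right") / len(scores_g0)
    cdf1 = np.searchsorted(np.sort(scores_g1), t, side="right") / len(scores_g1)
    return np.trapz(np.abs(cdf0 - cdf1), t)  # zero iff the score distributions coincide
```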

A Survey on Preserving Fairness Guarantees in Changing Environments

A taxonomy of the existing approaches for fair classification under distribution shift is proposed, which highlights benchmarking alternatives, points out the relation to other similar research, and identifies future avenues of research.

References


Fairness Transferability Subject to Bounded Distribution Shift

A framework for bounding violations of statistical fairness subject to distribution shift is developed, formulating a generic upper bound on transferred fairness violations and showing that these bounds can be estimated in practice.

Joint Transfer of Model Knowledge and Fairness Over Domains Using Wasserstein Distance

Experimental results show that the investigated methodology, which develops a fair classification model for data with limited or no labels, does indeed promote fairness in the target domain while retaining reasonable classification accuracy, and that it often outperforms comparable models in terms of joint fairness.

Robust Fairness under Covariate Shift

This work investigates fairness under covariate shift, a relaxation of the i.i.d. assumption in which the inputs (covariates) change while the conditional label distribution remains the same, and proposes an approach that obtains a predictor robust to the worst-case test performance while satisfying target fairness requirements and matching statistical properties of the source data.

Fairness Violations and Mitigation under Covariate Shift

An approach based on feature selection is specified that exploits conditional independencies in the data to estimate accuracy and fairness metrics for the test set, and it is shown that, for specific fairness definitions, the resulting model satisfies a form of worst-case optimality.

Transfer of Machine Learning Fairness across Domains

This work offers new theoretical guarantees for improving fairness across domains, presents a modeling approach for transfer to data-sparse target domains, and gives empirical results validating the theory and showing that these modeling approaches can improve fairness metrics with less data.

Maintaining fairness across distribution shift: do we have viable solutions for real-world applications?

Fairness and robustness are often considered orthogonal dimensions when evaluating machine learning models. However, recent work has revealed interactions between fairness and robustness, showing…

Fair Mixup: Fairness via Interpolation

Fair mixup, a new data augmentation strategy for imposing fairness constraints, is proposed, and it is shown that fairness can be achieved by regularizing the model on paths of interpolated samples between groups.
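A hedged sketch of the interpolation idea, assuming a binary classifier and same-sized batches `x_g0`, `x_g1` drawn from the two groups; the finite-difference path penalty below is an illustrative stand-in for the paper's path regularizer.

```python
import torch

def fair_mixup_penalty(model, x_g0, x_g1, n_points=5):
    """Penalize how fast the mean prediction moves along the mixup path between groups."""
    means = []
    for lam in torch.linspace(0.0, 1.0, n_points):
        x_mix = lam * x_g0 + (1.0 - lam) * x_g1  # interpolated samples between groups
        means.append(torch.sigmoid(model(x_mix)).mean())
    means = torch.stack(means)
    # A smooth path implies similar group-conditional outputs at its endpoints,
    # approximating a demographic-parity regularizer.
    return (means[1:] - means[:-1]).abs().sum()
```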

Ensuring Fairness Beyond the Training Data

This work develops classifiers that are fair not only with respect to the training distribution, but also for a class of distributions that are weighted perturbations of the training samples.

Learning Controllable Fair Representations

Exploiting duality, this work introduces a method that optimizes the model parameters as well as the expressiveness-fairness trade-off and achieves higher expressiveness at a lower computational cost.

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the
...
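For concreteness, the group-fairness notion this truncated entry begins to define, demographic parity, reduces to a one-line check; this is the standard definition, not anything specific to the paper's algorithm:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-classification rates across groups."""
    rates = [np.mean(y_pred[group == g]) for g in np.unique(group)]
    return max(rates) - min(rates)  # zero under exact demographic parity
```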