Achieving Downstream Fairness with Geometric Repair

@article{KwegyirAggrey2022AchievingDF,
  title={Achieving Downstream Fairness with Geometric Repair},
  author={Kweku Kwegyir-Aggrey and Jessica Dai and John Dickerson and Keegan E. Hines},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.07490}
}
We study a fair machine learning (ML) setting where an ‘upstream’ model developer is tasked with producing a fair ML model that will be used by several similar but distinct ‘downstream’ users. This setting introduces new challenges that are unaddressed by many existing fairness interventions, echoing existing critiques that current methods are not broadly applicable across the diversifying needs of real-world fair ML use cases. To this end, we address the up/downstream setting by adopting a…
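The abstract is truncated here, but the title points to geometric repair of model score distributions. Below is a minimal sketch, assuming the standard λ-partial repair construction (quantile-matching each group's scores to a pooled target distribution and interpolating by λ); the function name `geometric_repair`, the choice of the pooled distribution as the repair target, and the empirical-CDF implementation are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def geometric_repair(scores, groups, lam=1.0):
    """Lambda-partial geometric repair of model scores (illustrative sketch).

    Each score s from group a is mapped to
        (1 - lam) * s + lam * Q_pooled(F_a(s)),
    where F_a is the group's empirical CDF and Q_pooled is the quantile
    function of the pooled score distribution.  lam = 1 equalizes all
    group score distributions; lam = 0 leaves scores unchanged.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    repaired = scores.copy()
    pooled_sorted = np.sort(scores)              # assumed repair target: pooled distribution
    n_pooled = len(pooled_sorted)
    for g in np.unique(groups):
        mask = groups == g
        g_scores = scores[mask]
        # empirical CDF value of each score within its own group, in (0, 1]
        ranks = np.searchsorted(np.sort(g_scores), g_scores, side="right") / len(g_scores)
        # quantile-match into the pooled distribution
        idx = np.clip((ranks * n_pooled).astype(int) - 1, 0, n_pooled - 1)
        matched = pooled_sorted[idx]
        # geometric interpolation between the original and quantile-matched score
        repaired[mask] = (1 - lam) * g_scores + lam * matched
    return repaired

# Example: scores from two groups with shifted distributions
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])
groups = np.array([0] * 500 + [1] * 500)
half_repaired = geometric_repair(scores, groups, lam=0.5)
```

Varying lam between 0 and 1 trades off how much of each group's original score distribution is preserved against how closely the group distributions are equalized; how this interacts with the thresholds chosen by distinct downstream users is the question the paper's setting raises.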

