Robust Fairness under Covariate Shift
@inproceedings{Rezaei2021RobustFU,
  title     = {Robust Fairness under Covariate Shift},
  author    = {Ashkan Rezaei and Anqi Liu and Omid Memarrast and Brian D. Ziebart},
  booktitle = {AAAI},
  year      = {2021}
}
Making predictions that are fair with regard to protected group membership (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data relying on the assumption that training and testing data are identically and independently drawn (iid) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals…
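For intuition: covariate shift means the input distribution p(x) changes between training and deployment while p(y|x) stays fixed. The toy sketch below is purely illustrative (it is not the paper's method, and every name and parameter in it is invented); it shows how a demographic-parity gap measured on training data need not hold once covariates shift.

```python
# Toy illustration (not the paper's method): a fixed classifier's
# demographic-parity gap, measured on training data, need not hold once
# the covariate distribution shifts at deployment time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, group_shift=0.0):
    # Protected attribute a in {0, 1}; p(y | x, a) is identical in both
    # environments, only p(x, a) moves (covariate shift).
    a = rng.integers(0, 2, n)
    x = rng.normal(loc=a * (1.0 + group_shift), scale=1.0)
    y = (x + 0.5 * a + rng.normal(0.0, 1.0, n) > 1.0).astype(int)
    return x.reshape(-1, 1), a, y

def dp_gap(model, x, a):
    # Demographic-parity gap: |P(yhat = 1 | a = 1) - P(yhat = 1 | a = 0)|.
    yhat = model.predict(x)
    return abs(yhat[a == 1].mean() - yhat[a == 0].mean())

x_tr, a_tr, y_tr = sample(20_000)
x_te, a_te, y_te = sample(20_000, group_shift=1.5)  # group 1's covariates move

clf = LogisticRegression().fit(x_tr, y_tr)
print("DP gap on training distribution:", dp_gap(clf, x_tr, a_tr))
print("DP gap after covariate shift:   ", dp_gap(clf, x_te, a_te))
```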
14 Citations
Fairness Guarantees under Demographic Shift
- Computer Science
- 2022
This paper considers the impact of demographic shift and presents a class of algorithms, called Shifty algorithms, that provide high-confidence behavioral guarantees that hold under demographic shift when data from the deployment environment is unavailable during training.
Fair Classification under Covariate Shift and Missing Protected Attribute - an Investigation using Related Features
- Business, ArXiv
- 2022
This study investigated the problem of fair classification under covariate shift and missing protected attribute using a simple approach based on importance weights to handle…
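The recipe alluded to here is standard: reweight training examples by w(x) = p_test(x) / p_train(x), which can be estimated with a probabilistic domain classifier when unlabeled test-time inputs are available. A minimal sketch (illustrative, not necessarily this paper's exact procedure):

```python
# Sketch of the standard importance-weighting recipe for covariate shift:
# estimate w(x) = p_test(x) / p_train(x) with a domain classifier, then
# train the downstream model with those sample weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(x_train, x_test_unlabeled):
    # Label the two pools (0 = train, 1 = test) and fit a domain classifier;
    # by Bayes' rule, P(test | x) / P(train | x) equals
    # (p_test(x) * n_test) / (p_train(x) * n_train), so the odds ratio
    # recovers w(x) up to the sample-size correction applied below.
    x = np.vstack([x_train, x_test_unlabeled])
    d = np.concatenate([np.zeros(len(x_train)), np.ones(len(x_test_unlabeled))])
    p_test = LogisticRegression().fit(x, d).predict_proba(x_train)[:, 1]
    odds = p_test / (1.0 - p_test)
    return odds * len(x_train) / len(x_test_unlabeled)

# Usage: clf = LogisticRegression().fit(
#     x_tr, y_tr, sample_weight=importance_weights(x_tr, x_te))
```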
Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings
- Computer Science
- 2022
This work formalizes the effect of such predictors as a type of concept shift (a particular variety of distribution shift), shows both theoretically and via simulated examples how this causes predictors that are fair when trained to become unfair when deployed, and shows how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
Segmenting across places: The need for fair transfer learning with satellite imagery
- Environmental Science, ArXiv
- 2022
The increasing availability of high-resolution satellite imagery has enabled the use of machine learning to support land-cover measurement and inform policy-making. However, labelling satellite…
Algorithm Fairness in AI for Medicine and Healthcare
- Computer Science, Medicine, ArXiv
- 2021
The intersection of fairness in machine learning and current issues in healthcare is summarized, and how algorithmic biases arise in current clinical workflows and the healthcare disparities that result are outlined.
Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training
- Computer Science, NeurIPS
- 2021
This paper conducts the first empirical study to quantify the impact of software implementation on the fairness of DL systems and its variance, and calls for better fairness evaluation and testing protocols to improve the fairness and fairness variance of DL systems as well as DL research validity and reproducibility at large.
Fairness Violations and Mitigation under Covariate Shift
- Computer Science, FAccT
- 2021
An approach based on feature selection that exploits conditional independencies in the data to estimate accuracy and fairness metrics for the test set is specified, and it is shown that for specific fairness definitions the resulting model satisfies a form of worst-case optimality.
Fairness for Robust Learning to Rank
- Computer Science, ArXiv
- 2021
This work derives a new ranking system based on the first principles of distributional robustness that provides better utility for highly fair rankings than existing baseline methods.
Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach
- Computer Science, ArXiv
- 2021
This work presents a stochastic training algorithm that interleaves differentially private client reweighting steps with federated averaging steps, supported by finite-time convergence guarantees that cover both convex and non-convex settings.
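For reference, the superquantile (also known as conditional value-at-risk) of the per-client losses is the tail quantity such heterogeneity-robust methods optimize. A minimal sketch, with tail_fraction as an illustrative parameter:

```python
# Superquantile (CVaR) of per-client losses: the average of the worst
# tail_fraction of clients, rather than the plain mean FedAvg optimizes.
import numpy as np

def superquantile(client_losses, tail_fraction=0.5):
    # Average of the worst `tail_fraction` of client losses.
    k = max(1, int(np.ceil(tail_fraction * len(client_losses))))
    return float(np.sort(client_losses)[-k:].mean())
```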
References
Showing 1-10 of 73 references
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
- Computer Science, J. Mach. Learn. Res.
- 2019
Algorithms that can solve constrained optimization problems with possibly non-differentiable and non-convex constraints, with theoretical guarantees, are provided.
Equality of Opportunity in Supervised Learning
- Computer Science, NIPS
- 2016
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
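The post-processing idea can be sketched as choosing group-specific score thresholds so that true positive rates match across groups (equal opportunity, one component of equalized odds). The quantile-based sketch below is illustrative only; Hardt et al.'s construction also optimizes the operating point and may randomize decisions.

```python
# Illustrative post-processing sketch: per-group thresholds equalizing TPR.
import numpy as np

def equal_opportunity_thresholds(scores, y, a, target_tpr=0.8):
    # Within each group g, the (1 - target_tpr)-quantile of positive-class
    # scores is the threshold whose group TPR is approximately target_tpr.
    return {g: float(np.quantile(scores[(a == g) & (y == 1)], 1.0 - target_tpr))
            for g in np.unique(a)}

# Apply per example: yhat_i = scores_i >= thresholds[a_i]
```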
Empirical Risk Minimization under Fairness Constraints
- Computer Science, NeurIPS
- 2018
This work presents an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem, and derives both risk and fairness bounds that support the statistical consistency of the approach.
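As a concrete (relaxed) instance of the idea: add a penalty on the squared gap between group-conditional mean scores to the empirical logistic risk. Donini et al. actually impose a hard constraint on group-conditional risks; the penalty form and the lam, lr, and steps values below are assumptions for the sketch.

```python
# Sketch of fairness-penalized ERM (a soft relaxation of the constrained
# formulation): logistic risk + lam * (group mean-score gap)^2.
import numpy as np

def fair_erm(x, y, a, lam=1.0, lr=0.1, steps=2000):
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        s = x @ w
        p = 1.0 / (1.0 + np.exp(-s))                # sigmoid scores
        grad_risk = x.T @ (p - y) / len(y)          # mean logistic-loss gradient
        gap = s[a == 1].mean() - s[a == 0].mean()   # group mean-score gap
        grad_gap = x[a == 1].mean(axis=0) - x[a == 0].mean(axis=0)
        w -= lr * (grad_risk + 2.0 * lam * gap * grad_gap)
    return w
```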
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
- Computer Science, WWW
- 2017
A new notion of unfairness, disparate mistreatment, defined in terms of misclassification rates, is introduced for decision boundary-based classifiers and can be easily incorporated into their formulation as convex-concave constraints.
Fairness Constraints: Mechanisms for Fair Classification
- Computer Science, AISTATS
- 2017
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
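The boundary (un)fairness measure in question is the empirical covariance between the protected attribute and the signed distance to the decision boundary; bounding its magnitude gives the fine-grained fairness knob. A one-line sketch for a linear model (names are illustrative):

```python
# Empirical covariance between the protected attribute a and the signed
# distance theta^T x; constraining |cov| <= c trades fairness for accuracy.
import numpy as np

def boundary_covariance(theta, x, a):
    return float(np.mean((a - a.mean()) * (x @ theta)))
```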
Fairness for Robust Log Loss Classification
- Computer Science, AAAI
- 2020
This work re-derives a new classifier from the first principles of distributional robustness that incorporates fairness criteria into a worst-case logarithmic loss minimization, producing a parametric exponential-family conditional distribution that resembles truncated logistic regression.
Robust Classification Under Sample Selection Bias
- Computer Science, NIPS
- 2014
This work develops a framework for learning a robust bias-aware (RBA) probabilistic classifier that adapts to different sample selection biases using a minimax estimation formulation and demonstrates the behavior and effectiveness of the approach on binary classification tasks.
A Survey on Bias and Fairness in Machine Learning
- Computer Science, ACM Comput. Surv.
- 2021
This survey investigates different real-world applications that have shown biases in various ways, and creates a taxonomy of fairness definitions that machine learning researchers have proposed to avoid existing biases in AI systems.
Equalized odds postprocessing under imperfect group information
- Computer Science, AISTATS
- 2020
This paper investigates to what extent fairness interventions can be effective even when only imperfect information about the protected attribute is available, and identifies conditions on the perturbation that guarantee that the bias of a classifier is reduced even by running equalized odds with the perturbed attribute.
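One concrete way to probe this question empirically is to rerun an equalized-odds style post-processing (e.g., the thresholding sketch above) with a randomly perturbed copy of the attribute and measure the residual gap. A minimal perturbation helper, with flip_prob an illustrative parameter:

```python
# Simulate imperfect group information: flip each binary group label
# independently with probability flip_prob, then rerun the intervention.
import numpy as np

def flip_attribute(a, flip_prob=0.1, seed=0):
    rng = np.random.default_rng(seed)
    return np.where(rng.random(len(a)) < flip_prob, 1 - a, a)
```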
Towards fairer datasets: filtering and balancing the distribution of the people subtree in the ImageNet hierarchy
- Computer Science, FAT*
- 2020
This paper examines ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods, and considers three key factors within the person subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology.