Corpus ID: 222290556

Robust Fairness under Covariate Shift

@inproceedings{Rezaei2021RobustFU,
  title={Robust Fairness under Covariate Shift},
  author={Ashkan Rezaei and Anqi Liu and Omid Memarrast and Brian D. Ziebart},
  booktitle={AAAI},
  year={2021}
}
Making predictions that are fair with regard to protected group membership (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data relying on the assumption that training and testing data are identically and independently drawn (iid) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals…
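For concreteness, one widely used group-fairness criterion in this literature is demographic parity; below is a minimal sketch (illustrative only, not the paper's specific fairness measure) of the gap such methods try to control:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between two
    protected groups (the demographic parity difference)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
```

A model that keeps this gap small on iid training data can exhibit a much larger gap once the test distribution shifts, which is the failure mode the paper addresses.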

Citations

Fairness Guarantees under Demographic Shift
TLDR
This paper considers the impact of demographic shift and presents a class of algorithms, called Shifty algorithms, that provide high-confidence behavioral guarantees that hold under demographic shift when data from the deployment environment is unavailable during training.
Fair Classification under Covariate Shift and Missing Protected Attribute - an Investigation using Related Features
This study investigated the problem of fair classification under covariate shift with a missing protected attribute, using a simple approach based on importance weights to handle the shift.
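The importance-weighting idea mentioned in this entry is a standard recipe for covariate shift; a minimal sketch using the domain-classifier trick for density-ratio estimation (a generic approach, not necessarily the cited study's exact pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_train, X_test):
    """Estimate importance weights w(x) ~ p_test(x) / p_train(x)
    via a probabilistic train-vs-test domain classifier."""
    X = np.vstack([X_train, X_test])
    d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_test = clf.predict_proba(X_train)[:, 1]        # P(test | x)
    # Density ratio via Bayes' rule, corrected for sample sizes.
    return (p_test / (1.0 - p_test)) * (len(X_train) / len(X_test))
```

The resulting weights can then be passed to any learner that accepts per-sample weights, e.g. `model.fit(X_train, y_train, sample_weight=w)`.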
Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings
TLDR
This work formalizes the effect of such predictors as a type of concept shift—a particular variety of distribution shift—showing both theoretically and via simulated examples how predictors that are fair when trained become unfair when deployed, and how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes.
Segmenting across places: The need for fair transfer learning with satellite imagery
The increasing availability of high-resolution satellite imagery has enabled the use of machine learning to support land-cover measurement and inform policy-making. However, labelling satellite…
Algorithm Fairness in AI for Medicine and Healthcare
TLDR
This work summarizes fairness in machine learning in the context of current issues in healthcare, and outlines how algorithmic biases arise in current clinical workflows and lead to healthcare disparities.
Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training
TLDR
This paper conducts the first empirical study to quantify the impact of software implementation on the fairness of DL systems and its variance, and calls for better fairness evaluation and testing protocols to improve the fairness and fairness variance of DL systems, as well as the validity and reproducibility of DL research at large.
Fairness Violations and Mitigation under Covariate Shift
TLDR
An approach based on feature selection is specified that exploits conditional independencies in the data to estimate accuracy and fairness metrics for the test set, and it is shown that, for specific fairness definitions, the resulting model satisfies a form of worst-case optimality.
Fairness for Robust Learning to Rank
TLDR
This work derives a new ranking system based on the first principles of distributional robustness that provides better utility for highly fair rankings than existing baseline methods.
Federated Learning with Heterogeneous Data: A Superquantile Optimization Approach
TLDR
This work presents a stochastic training algorithm that interleaves differentially private client reweighting steps with federated averaging steps, supported by finite-time convergence guarantees covering both convex and non-convex settings.
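The superquantile in the title is the conditional value-at-risk (CVaR) of the per-client losses; a minimal sketch of the statistic being optimized (simplified: the exact discrete-distribution definition interpolates at the tail boundary):

```python
import numpy as np

def superquantile(losses, theta=0.5):
    """Superquantile (CVaR) at level theta: the mean of the worst
    (1 - theta) fraction of per-client losses."""
    losses = np.sort(np.asarray(losses))
    k = int(np.ceil((1 - theta) * len(losses)))
    return losses[-k:].mean()

# superquantile([0.2, 0.3, 1.5, 2.0], theta=0.5) -> 1.75, so training
# pressure concentrates on the clients with the highest losses.
```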

References

SHOWING 1-10 OF 73 REFERENCES
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
TLDR
Algorithms are provided that solve constrained optimization problems with possibly non-differentiable and non-convex constraints, with theoretical guarantees.
Equality of Opportunity in Supervised Learning
TLDR
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
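The optimal adjustment in this entry can be approximated with per-group decision thresholds on a fixed score; a simplified, deterministic sketch (the paper's derived predictor may also randomize to match rates exactly):

```python
import numpy as np

def equal_opportunity_thresholds(scores, y, group, target_tpr=0.8):
    """One threshold per protected group, chosen so each group's
    true-positive rate is approximately the same target."""
    scores, y, group = map(np.asarray, (scores, y, group))
    thresholds = {}
    for g in np.unique(group):
        pos = scores[(group == g) & (y == 1)]   # scores of actual positives
        # Accepting scores above this quantile passes ~target_tpr
        # of the group's positives.
        thresholds[g] = np.quantile(pos, 1 - target_tpr)
    return thresholds
```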
Empirical Risk Minimization under Fairness Constraints
TLDR
This work presents an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem, and derives both risk and fairness bounds that support the statistical consistency of the approach.
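In practice such fairness constraints are often implemented in a relaxed, penalized form; a minimal sketch with a generic squared penalty on the score gap (an assumption for illustration, not the cited paper's exact constraint or bounds):

```python
import numpy as np
from scipy.optimize import minimize

def penalized_fair_loss(theta, X, y, group, lam=1.0):
    """Logistic loss (labels y in {-1, +1}) plus a squared penalty
    on the gap in mean scores between two protected groups."""
    z = X @ theta
    log_loss = np.mean(np.log1p(np.exp(-y * z)))
    gap = z[group == 0].mean() - z[group == 1].mean()
    return log_loss + lam * gap ** 2

# theta_hat = minimize(penalized_fair_loss, np.zeros(X.shape[1]),
#                      args=(X, y, group)).x
```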
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
TLDR
A new notion of unfairness, disparate mistreatment, defined in terms of misclassification rates, is introduced for decision boundary-based classifiers and can be easily incorporated into their formulation as convex-concave constraints.
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
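The decision-boundary (un)fairness measure in this line of work is the empirical covariance between the protected attribute and the signed distance to the boundary; a sketch for a linear model (assuming that covariance form):

```python
import numpy as np

def boundary_covariance(theta, X, z):
    """Empirical covariance between protected attribute z and the
    signed distance to a linear decision boundary."""
    d = X @ theta                     # signed distances, up to scale
    z = np.asarray(z, dtype=float)
    return np.mean((z - z.mean()) * d)

# Fairness is imposed by training subject to |boundary_covariance| <= c;
# the constraint is linear in theta, so it keeps convex training convex.
```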
Fairness for Robust Log Loss Classification
TLDR
This work re-derives a classifier from the first principles of distributional robustness, incorporating fairness criteria into a worst-case logarithmic loss minimization; the result is a parametric exponential-family conditional distribution resembling truncated logistic regression.
Robust Classification Under Sample Selection Bias
TLDR
This work develops a framework for learning a robust bias-aware (RBA) probabilistic classifier that adapts to different sample selection biases using a minimax estimation formulation and demonstrates the behavior and effectiveness of the approach on binary classification tasks.
A Survey on Bias and Fairness in Machine Learning
TLDR
This survey investigates real-world applications that have exhibited biases in various ways and creates a taxonomy of the fairness definitions that machine learning researchers have proposed to mitigate existing bias in AI systems.
Equalized odds postprocessing under imperfect group information
TLDR
This paper investigates to what extent fairness interventions can be effective even when only imperfect information about the protected attribute is available, and identifies conditions on the perturbation that guarantee that the bias of a classifier is reduced even by running equalized odds with the perturbed attribute.
Towards fairer datasets: filtering and balancing the distribution of the people subtree in the ImageNet hierarchy
TLDR
This paper examines ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods, and considers three key factors within the person subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology.