
Model-agnostic bias mitigation methods with regressor distribution control for Wasserstein-based fairness metrics

@article{Miroshnikov2021ModelagnosticBM,
  title={Model-agnostic bias mitigation methods with regressor distribution control for Wasserstein-based fairness metrics},
  author={Alexey Miroshnikov and Konstandinos Kotsiopoulos and Ryan Franks and Arjun Ravi Kannan},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.11259}
}
This article is a companion paper to our earlier work Miroshnikov et al. (2021) on fairness interpretability, which introduces bias explanations. In the current work, we propose a bias mitigation methodology based upon the construction of post-processed models with fairer regressor distributions for Wasserstein-based fairness metrics. By identifying the list of predictors contributing the most to the bias, we reduce the dimensionality of the problem by mitigating the bias originating from those… 
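
As a rough illustration of the underlying metric (a minimal sketch, not the authors' implementation), the snippet below computes a Wasserstein-based bias measure: the Wasserstein-1 distance between a model's score distributions on two subpopulations, via scipy.stats.wasserstein_distance. The variable names (scores, protected) and the synthetic data are assumptions made for the example.

import numpy as np
from scipy.stats import wasserstein_distance

def w1_bias(scores, protected):
    # Wasserstein-1 distance between the score distributions of the
    # protected (protected == 1) and non-protected (protected == 0) groups.
    return wasserstein_distance(scores[protected == 1], scores[protected == 0])

# Synthetic example: model scores whose distribution depends on group membership.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)
scores = rng.beta(2 + protected, 5)  # hypothetical scores in [0, 1]
print(f"W1 bias between groups: {w1_bias(scores, protected):.4f}")

The proposed methodology then constructs post-processed models whose score distributions shrink this distance, focusing on the predictors identified as contributing most to the bias.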


References

Wasserstein-based fairness interpretability framework for machine learning models
TLDR
A fairness interpretability framework for measuring and explaining bias in classification and regression models at the level of a distribution is introduced and bias predictor attributions called bias explanations are introduced.
Identifying and Correcting Label Bias in Machine Learning
TLDR
This paper provides a mathematical formulation of how this bias can arise by assuming the existence of underlying, unknown, and unbiased labels which are overwritten by an agent who intends to provide accurate labels but may have biases against certain groups.
Fairness without Demographics through Adversarially Reweighted Learning
TLDR
The proposed Adversarially Reweighted Learning (ARL) hypothesizes that non-protected features and task labels are valuable for identifying fairness issues, and can be used to co-train an adversarial reweighting approach for improving fairness.
Fairness Without Demographics in Repeated Loss Minimization
TLDR
This paper develops an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution and proves that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice.
Obtaining Fairness using Optimal Transport Theory
TLDR
The goals of this paper are to detect when a binary classification rule lacks fairness and to try to fight against the potential discrimination attributable to it by modifying either the classifiers or the data itself.
True to the Model or True to the Data?
TLDR
It is argued that the choice comes down to whether it is desirable to be true to the model or true to the data, and how possible attributions are impacted by modeling choices.
Mitigating Unwanted Biases with Adversarial Learning
TLDR
This work presents a framework for mitigating biases concerning demographic groups by including a variable for the group of interest and simultaneously learning a predictor and an adversary, which results in accurate predictions that exhibit less evidence of stereotyping the protected attribute Z.
Equality of Opportunity in Supervised Learning
TLDR
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
Certifying and Removing Disparate Impact
TLDR
This work links disparate impact to a measure of classification accuracy that while known, has received relatively little attention and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
...