Corpus ID: 246015874

Fair Interpretable Learning via Correction Vectors

@article{Cerrato2022FairIL,
  title={Fair Interpretable Learning via Correction Vectors},
  author={Mattia Cerrato and Marius K{\"o}ppel and Alexander Segner and Stefan Kramer},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.06343}
}
Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information. Various “representation debiasing” techniques have been proposed in the literature. However, as neural networks are inherently opaque, these methods are hard to comprehend, which limits their usefulness. We propose a new framework for fair representation learning which is… 
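The abstract is truncated above. As a rough illustration of the correction-vector idea (a small network predicts an additive, per-feature correction in the original input space, so the applied "debiasing" remains inspectable), the following Python sketch is a hypothetical reading of the framework; the module names, layer sizes, and the omission of any explicit fairness penalty are assumptions, not the authors' exact design.

# Hypothetical sketch of a correction-vector model: a small network predicts an
# additive, per-feature correction in the original input space, and a downstream
# classifier consumes the corrected features. Module names, layer sizes, and the
# missing fairness penalty are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn

class CorrectionVectorNet(nn.Module):
    def __init__(self, n_features: int, n_hidden: int = 32):
        super().__init__()
        # Predicts one correction value per input feature.
        self.correction = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_features),
        )
        self.classifier = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x: torch.Tensor):
        w = self.correction(x)           # correction vector, same shape as x
        x_fair = x + w                   # corrected ("debiased") features
        logits = self.classifier(x_fair)
        return logits, w, x_fair

model = CorrectionVectorNet(n_features=10)
x = torch.randn(4, 10)
logits, w, x_fair = model(x)
# w can be inspected feature by feature, which is what makes the correction interpretable.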


References

Showing 1-10 of 19 references
The Variational Fair Autoencoder
TLDR
This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation, complemented by an additional penalty term based on the “Maximum Mean Discrepancy” (MMD) measure.
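The MMD penalty mentioned above compares the latent codes produced for the two sensitive groups. A minimal numpy sketch of a biased MMD² estimate with an RBF kernel follows; the fixed bandwidth and the synthetic latent codes are illustrative assumptions.

# Minimal sketch of a biased MMD^2 estimate with an RBF kernel, used here as a
# penalty between the latent codes of two sensitive groups. The fixed bandwidth
# and the synthetic data are illustrative assumptions.
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared distances between rows of a and rows of b.
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(z0, z1, sigma=1.0):
    # Biased estimator: E[k(z0, z0')] + E[k(z1, z1')] - 2 E[k(z0, z1)].
    return (rbf_kernel(z0, z0, sigma).mean()
            + rbf_kernel(z1, z1, sigma).mean()
            - 2 * rbf_kernel(z0, z1, sigma).mean())

rng = np.random.default_rng(0)
z_group0 = rng.normal(0.0, 1.0, size=(100, 8))   # latent codes, sensitive group 0
z_group1 = rng.normal(0.5, 1.0, size=(100, 8))   # latent codes, sensitive group 1
print(mmd2(z_group0, z_group1))  # larger values indicate more group-dependent latents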
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).
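Group fairness in the sense used above (statistical parity) compares the positive-classification rate of the protected group with that of the population as a whole; a minimal sketch, with illustrative variable names:

# Minimal sketch of the group-fairness (statistical parity) quantity described
# above: the gap between the positive-classification rate of a protected group
# and that of the overall population. Variable names are illustrative.
import numpy as np

def statistical_parity_gap(y_pred, protected):
    # y_pred: 0/1 predictions; protected: boolean mask of the protected group.
    rate_protected = y_pred[protected].mean()
    rate_overall = y_pred.mean()
    return rate_protected - rate_overall

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
protected = np.array([True, True, False, False, True, False, True, False])
print(statistical_parity_gap(y_pred, protected))  # 0 means parity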
Invariant Representations without Adversarial Training
TLDR
It is shown that adversarial training is unnecessary and sometimes counter-productive; this work casts invariant representation learning as a single information-theoretic objective that can be directly optimized.
Fair pairwise learning to rank
TLDR
A family of fair pairwise learning to rank approaches based on neural networks is presented, which are able to produce balanced outcomes for underprivileged groups and, at the same time, build fair representations of data, i.e. new vectors having no correlation with a sensitive attribute.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
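The residual idea is to let the layers learn a residual function F(x) and have the block output F(x) + x through an identity skip connection. A minimal PyTorch sketch of a basic residual block follows; the channel count and the absence of downsampling are simplifying assumptions.

# Minimal sketch of a basic residual block: the layers learn a residual F(x)
# and the block outputs F(x) + x through an identity skip connection. The channel
# count and the absence of downsampling are simplifying assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity skip connection

block = ResidualBlock(16)
print(block(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 16, 8, 8])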
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
TLDR
A new notion of unfairness, disparate mistreatment, is introduced and defined in terms of misclassification rates; for decision boundary-based classifiers, it can be easily incorporated into their formulation as convex-concave constraints.
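Disparate mistreatment compares misclassification rates (false-positive and false-negative rates) across groups; a minimal sketch with illustrative data:

# Minimal sketch of the disparate-mistreatment quantities described above:
# differences in false-positive and false-negative rates across two groups.
# Variable names and data are illustrative.
import numpy as np

def error_rates(y_true, y_pred):
    fp = np.mean(y_pred[y_true == 0] == 1)   # false-positive rate
    fn = np.mean(y_pred[y_true == 1] == 0)   # false-negative rate
    return fp, fn

def disparate_mistreatment(y_true, y_pred, group):
    fp0, fn0 = error_rates(y_true[group == 0], y_pred[group == 0])
    fp1, fn1 = error_rates(y_true[group == 1], y_pred[group == 1])
    return abs(fp0 - fp1), abs(fn0 - fn1)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_mistreatment(y_true, y_pred, group))  # (FPR gap, FNR gap)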
A Kernel Two-Sample Test
TLDR
This work proposes a framework for analyzing and comparing distributions, which is used to construct statistical tests to determine if two samples are drawn from different distributions, and presents two distribution-free tests based on large deviation bounds for the maximum mean discrepancy (MMD).
Measuring Fairness in Ranked Outputs
TLDR
A data generation procedure is developed that allows systematic control of the degree of unfairness in the output; the proposed fairness measures for ranked outputs are applied to several real datasets, and results show potential for improving fairness of ranked outputs while maintaining accuracy.
Pairwise Fairness for Ranking and Regression
We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity.
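One concrete reading of such pairwise metrics is a group-conditioned pairwise accuracy: among pairs whose more relevant item belongs to a given group, how often the model scores that item higher. The sketch below is an illustrative interpretation, not necessarily the paper's exact definition; all names and data are assumptions.

# Illustrative sketch of a group-conditioned pairwise accuracy: among pairs whose
# higher-relevance item belongs to a given group, how often the model scores that
# item above the other. One possible reading of "pairwise fairness", not
# necessarily the paper's exact metric.
import numpy as np
from itertools import combinations

def group_pairwise_accuracy(scores, relevance, group, g):
    correct, total = 0, 0
    for i, j in combinations(range(len(scores)), 2):
        if relevance[i] == relevance[j]:
            continue
        hi, lo = (i, j) if relevance[i] > relevance[j] else (j, i)
        if group[hi] != g:
            continue
        total += 1
        correct += int(scores[hi] > scores[lo])
    return correct / total if total else float("nan")

scores    = np.array([0.9, 0.2, 0.7, 0.4])
relevance = np.array([1,   0,   1,   0  ])
group     = np.array([0,   0,   1,   1  ])
# Compare pairwise accuracy conditioned on the group of the more relevant item.
print(group_pairwise_accuracy(scores, relevance, group, 0),
      group_pairwise_accuracy(scores, relevance, group, 1))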
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
TLDR
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and that disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.