Corpus ID: 49558198

Gradient Reversal Against Discrimination

@article{Raff2018GradientRA,
  title={Gradient Reversal Against Discrimination},
  author={Edward Raff and Jared Sylvester},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.00392}
}
No methods currently exist for making arbitrary neural networks fair. In this work we introduce GRAD, a new and simplified method for producing fair neural networks that can be used for auto-encoding fair representations or directly with predictive networks. It is easy to implement and add to existing architectures, has only one (insensitive) hyper-parameter, and provides improved individual and group fairness. We use the flexibility of GRAD to demonstrate multi-attribute protection.
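The abstract does not spell out the mechanism, but the name points to a gradient reversal layer: a head that predicts the protected attribute is attached to the shared representation, and its gradient is negated on the backward pass, so the encoder is pushed toward features from which that attribute is hard to recover. Below is a minimal PyTorch-style sketch of the idea; the class names, layer sizes, two-head layout, and the lam weight are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; multiplies the incoming gradient by -lam on the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class FairNet(nn.Module):
    # Shared encoder feeding a task head, plus a protected-attribute head behind gradient reversal.
    def __init__(self, in_dim, hidden=64, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, 2)   # predicts the target label
        self.attr_head = nn.Linear(hidden, 2)   # predicts a (hypothetical) binary protected attribute

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.task_head(z)
        a_logits = self.attr_head(GradReverse.apply(z, self.lam))
        return y_logits, a_logits

Training then minimizes the sum of the two cross-entropy losses: the attribute head keeps learning to predict the protected attribute, while the reversed gradient drives the encoder to make that prediction harder. Multi-attribute protection amounts to attaching one such reversed head per protected attribute.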


Can Active Learning Preemptively Mitigate Fairness Issues?

TLDR
This paper studies whether models trained with uncertainty-based AL heuristics such as BALD are fairer in their decisions with respect to a protected class than those trained with independent and identically distributed (i.i.d.) sampling.

Adversarial Removal of Demographic Attributes Revisited

TLDR
It is shown that a diagnostic classifier trained on the biased baseline neural network also does not generalize to new samples, indicating that it relies on correlations specific to its particular data sample.

The Accuracy Paradox of Algorithmic Classification

TLDR
It is argued that any classification produces a marginalized group, namely those who are misclassified, and that in tandem the ability of those affected to challenge the classification is diminished, paradoxically contradicting the promissory narrative of ‘fixing’ algorithms by optimizing fairness and accuracy.

Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

TLDR
It is demonstrated, for a number of facial attribute classification tasks, that the algorithm can be used to remove racial biases from the network feature representation.

SensitiveNets: Learning Agnostic Representations with Application to Face Images

TLDR
A novel privacy-preserving neural network feature representation is proposed to suppress the sensitive information of a learned space while maintaining the utility of the data, based on an adversarial regularizer that introduces a sensitive-information removal function into the learning objective.

Detecting and Preventing Shortcut Learning for Fair Medical AI using Shortcut Testing (ShorT)

TLDR
This paper proposes a practical method for practitioners to assess and mitigate shortcut learning as part of the routine fairness assessment of clinical ML systems, and demonstrates its application to clinical tasks in radiology and dermatology.

SensitiveNets: Unlearning Undesired Information for Generating Agnostic Representations with Application to Face Recognition

TLDR
This work proposes a new neural network feature representation that helps to leave out sensitive information from the decision-making process of pattern recognition and machine learning algorithms, based on a generalization of triplet loss learning.

Translating from Unfair to Fair Embeddings

Machine learning researchers often think of images as continuous data and language as discrete data with tokens representing lookup indices into an embedding table. However, in the social media …

Discovering and Controlling for Latent Confounds in Text Classification Using Adversarial Domain Adaptation

TLDR
The approach first uses neural network-based topic modeling to discover potential confounds that differ between training and testing data, then uses adversarial training to fit a classification model that is invariant to these discovered confounds.

SensitiveNets: Learning Agnostic Representations with Application to Face Recognition

TLDR
This work proposes a new neural network feature representation that helps to leave out sensitive information from the decision-making process of pattern recognition and machine learning algorithms, based on a generalization of triplet loss learning.

References

Showing 1-10 of 11 references

Learning Fair Classifiers: A Regularization-Inspired Approach

TLDR
A regularization-inspired approach for reducing bias in learned classifiers is presented and its ability to achieve both fairness and accuracy is evaluated, using the COMPAS scores data for prediction of recidivism.

Decoupled classifiers for fair and efficient machine learning

TLDR
This work provides a simple and efficient decoupling technique that can be added on top of any black-box machine learning algorithm to learn different classifiers for different groups, and shows that this method can apply to a range of fairness criteria.

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).

Fairness Constraints: Mechanisms for Fair Classification

TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.

The Variational Fair Autoencoder

TLDR
This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation; it is more effective than previous work at removing unwanted sources of variation while maintaining informative latent representations.

Censoring Representations with an Adversary

TLDR
This work formulates the adversarial model as a minimax problem, optimizes that objective using a stochastic-gradient alternating min-max procedure, demonstrates the ability to provide discrimination-free representations on standard test problems, and compares with previous state-of-the-art methods for fairness.
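In contrast to the gradient-reversal sketch above, the minimax formulation described here alternates explicit updates: the adversary is trained to recover the protected attribute from the representation, then the encoder and classifier are trained for the task while increasing the adversary's loss. A rough sketch of one alternating step, with hypothetical module and optimizer names (enc, clf, adv, opt_model, opt_adv):

import torch.nn.functional as F

def adversarial_step(x, y, a, enc, clf, adv, opt_model, opt_adv, lam=1.0):
    # 1) Adversary update: predict the protected attribute a from a detached representation.
    z = enc(x).detach()
    adv_loss = F.cross_entropy(adv(z), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Model update: predict the task label y while making the adversary's job harder.
    #    opt_model holds only encoder and classifier parameters, so adv is left unchanged here.
    z = enc(x)
    task_loss = F.cross_entropy(clf(z), y)
    fool_loss = -F.cross_entropy(adv(z), a)   # negated, so the encoder raises the adversary's loss
    opt_model.zero_grad()
    (task_loss + lam * fool_loss).backward()
    opt_model.step()
    return task_loss.item(), adv_loss.item()

Gradient reversal folds both of these updates into a single backward pass, which removes the need for a separate adversary optimizer and training schedule.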

Discrimination-aware data mining

TLDR
This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.

Robust Text Classification in the Presence of Confounding Bias

TLDR
This paper considers the case where there is a confounding variable Z that influences both the text features X and the class variable Y and finds that covariate adjustment results in higher accuracy than competing baselines over a range of confounding relationships.

Chainer : a Next-Generation Open Source Framework for Deep Learning

TLDR
Chainer provides a flexible, intuitive, and high-performance means of implementing a full range of deep learning models, including state-of-the-art models such as recurrent neural networks and variational autoencoders.
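Chainer is presumably cited here as the framework used for the GRAD implementation. Its define-by-run style builds the computation graph as the forward pass executes, which makes it easy to graft an extra head (such as a reversed protected-attribute branch) onto an existing model. A minimal Chainer model sketch, with illustrative layer sizes:

import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    # Small define-by-run network: the graph is constructed as forward() runs.
    def __init__(self, n_hidden=64, n_out=2):
        super().__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_hidden)   # input size inferred on the first call
            self.l2 = L.Linear(n_hidden, n_out)

    def forward(self, x):
        h = F.relu(self.l1(x))
        return self.l2(h)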

A multidisciplinary survey on discrimination analysis

TLDR
This survey aims to provide guidance and a common reference for researchers and anti-discrimination data analysts on concepts, problems, application areas, datasets, methods, and approaches from a multidisciplinary perspective.