Corpus ID: 49558198

Gradient Reversal Against Discrimination

@article{Raff2018GradientRA,
  title={Gradient Reversal Against Discrimination},
  author={Edward Raff and Jared Sylvester},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.00392}
}
No methods currently exist for making arbitrary neural networks fair. In this work we introduce GRAD, a new and simplified method for producing fair neural networks that can be used for auto-encoding fair representations or directly with predictive networks. It is easy to implement and add to existing architectures, has only one (insensitive) hyper-parameter, and provides improved individual and group fairness. We use the flexibility of GRAD to demonstrate multi-attribute protection.
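
Although the page carries no code, the mechanism behind GRAD-style training is a gradient reversal layer attached to a head that predicts the protected attribute. Below is a minimal PyTorch sketch, assuming a single protected attribute and a toy two-head architecture; the class names (GradientReversal, FairClassifier), layer sizes, and the lam scaling factor are illustrative rather than the authors' implementation (the reference list suggests the original was built in Chainer).

```python
import torch
from torch import nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # forward pass is the identity

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient so the shared encoder is pushed to *remove*
        # information that helps predict the protected attribute.
        return -ctx.lam * grad_output, None


class FairClassifier(nn.Module):
    """Illustrative two-head network: a task head plus an adversarial protected-attribute head."""

    def __init__(self, in_dim, hidden, n_classes, n_protected, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)         # main prediction
        self.protected_head = nn.Linear(hidden, n_protected)  # adversarial head

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.task_head(z)
        # The protected-attribute head trains normally, but its gradient is
        # reversed before it reaches the shared encoder.
        a_logits = self.protected_head(GradientReversal.apply(z, self.lam))
        return y_logits, a_logits
```

In use, both heads are trained with ordinary cross-entropy losses; because the gradient from the protected-attribute head is negated on its way into the shared encoder, the encoder is driven toward representations from which the protected attribute is hard to recover.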

Citations

Can Active Learning Preemptively Mitigate Fairness Issues?

This paper studies whether models trained with uncertainty-based active learning (AL) heuristics such as BALD are fairer in their decisions with respect to a protected class than those trained with independent and identically distributed (i.i.d.) sampling.

Adversarial Removal of Demographic Attributes Revisited

It is shown that a diagnostic classifier trained on the biased baseline neural network also does not generalize to new samples, indicating that it relies on correlations specific to its particular data sample.

Algorithmic fairness datasets: the story so far

This work surveys over two hundred datasets employed in algorithmic fairness research, produces standardized and searchable documentation for each of them, and rigorously identifies the three most popular fairness datasets, namely Adult, COMPAS, and German Credit, for which this unified documentation effort supports multiple contributions.

The Accuracy Paradox of Algorithmic Classification

It is argued that any classification produces a marginalized group, namely those who are misclassified, and that the ability of the affected to challenge the classification is diminished in tandem, paradoxically contradicting the promissory narrative of 'fixing' algorithms through optimizing fairness and accuracy.

Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

It is demonstrated, for a number of facial attribute classification tasks, that the algorithm can be used to remove racial biases from the network feature representation.

SensitiveNets: Learning Agnostic Representations with Application to Face Images

A novel privacy-preserving neural network feature representation is proposed to suppress sensitive information in a learned space while maintaining the utility of the data, based on an adversarial regularizer that introduces a sensitive-information removal function into the learning objective.

Detecting and Preventing Shortcut Learning for Fair Medical AI using Shortcut Testing (ShorT)

This paper proposes a practical method for practitioners to assess and mitigate shortcut learning as a part of the routine fairness assessment of clinical ML systems, and demonstrates its application to clinical tasks in radiology and dermatology.

Translating from Unfair to Fair Embeddings

Machine learning researchers often think of images as continuous data and language as discrete data with tokens representing lookup indices into an embedding table. However, in the social media

Discovering and Controlling for Latent Confounds in Text Classification Using Adversarial Domain Adaptation

The approach first uses neural network-based topic modeling to discover potential confounds that differ between training and testing data, then uses adversarial training to fit a classification model that is invariant to these discovered confounds.

SensitiveNets: Learning Agnostic Representations with Application to Face Recognition

This work proposes a new neural network feature representation, based on a generalization of triplet-loss learning, that helps leave sensitive information out of the decision-making process of pattern recognition and machine learning algorithms.

References


Learning Fair Classifiers: A Regularization-Inspired Approach

A regularization-inspired approach for reducing bias in learned classifiers is presented, and its ability to achieve both fairness and accuracy is evaluated using the COMPAS recidivism-score data.

Decoupled classifiers for fair and efficient machine learning

This work provides a simple and efficient decoupling technique, which can be added on top of any black-box machine learning algorithm to learn different classifiers for different groups, and shows that the method applies to a range of fairness criteria.

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).

Fairness Constraints: Mechanisms for Fair Classification

This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision-boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in accuracy.

The Variational Fair Autoencoder

This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation, and is more effective than previous work at removing unwanted sources of variation while maintaining informative latent representations.

Censoring Representations with an Adversary

This work formulates the adversarial model as a minimax problem, optimizes that minimax objective with an alternating stochastic gradient min-max optimizer, demonstrates the ability to provide discrimination-free representations on standard test problems, and compares with previous state-of-the-art methods for fairness.
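
For context, adversarial censoring of this kind is typically written as a minimax objective of roughly the following generic form (a sketch in standard notation, not the paper's exact formulation), where z_θ is the censored representation produced by the encoder, f_θ the task predictor on top of it, g_φ the adversary trying to recover the sensitive attribute s, and λ trades accuracy against censoring:

$$\min_{\theta}\;\max_{\phi}\;\mathbb{E}_{(x,y,s)}\Big[\mathcal{L}_{\text{task}}\big(f_\theta(x),\,y\big)\;-\;\lambda\,\mathcal{L}_{\text{adv}}\big(g_\phi(z_\theta(x)),\,s\big)\Big]$$

Maximizing over φ trains the adversary to predict s, while minimizing over θ trains the encoder and predictor to keep task accuracy high while making the adversary's job as hard as possible; the alternating stochastic gradient updates described above approximate this saddle-point problem.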

Discrimination-aware data mining

This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.

Robust Text Classification in the Presence of Confounding Bias

This paper considers the case where a confounding variable Z influences both the text features X and the class variable Y, and finds that covariate adjustment results in higher accuracy than competing baselines across a range of confounding relationships.

Chainer: A Next-Generation Open Source Framework for Deep Learning

Chainer provides a flexible, intuitive, and high performance means of implementing a full range of deep learning models, including state-of-the-art models such as recurrent neural networks and variational autoencoders.

A multidisciplinary survey on discrimination analysis

This survey provides guidance and a unifying reference for researchers and anti-discrimination data analysts on concepts, problems, application areas, datasets, methods, and approaches from a multidisciplinary perspective.