Fairness GAN
@article{Sattigeri2018FairnessG, title={Fairness GAN}, author={Prasanna Sattigeri and Samuel C. Hoffman and Vijil Chenthamarakshan and Kush R. Varshney}, journal={ArXiv}, year={2018}, volume={abs/1805.09910} }
In this paper, we introduce the Fairness GAN, an approach for generating a dataset that is plausibly similar to a given multimedia dataset, but is more fair with respect to protected attributes in allocative decision making. We propose a novel auxiliary classifier GAN that strives for demographic parity or equality of opportunity and show empirical results on several datasets, including the CelebFaces Attributes (CelebA) dataset, the Quick, Draw! dataset, and a dataset of soccer player images…
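For readers unfamiliar with the two criteria named in the abstract, the sketch below computes empirical demographic-parity and equal-opportunity gaps for a binary protected attribute; the function names and toy arrays are illustrative and are not taken from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Absolute difference in true-positive rates between the two groups."""
    y_pred, y_true, group = map(np.asarray, (y_pred, y_true, group))
    pos = y_true == 1
    return abs(y_pred[pos & (group == 0)].mean() - y_pred[pos & (group == 1)].mean())

# Toy example: predictions, labels, and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))          # 0.5
print(equal_opportunity_gap(y_pred, y_true, group))   # 0.5
```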
31 Citations
FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret
- Computer Science, ECCV
- 2020
This paper provides a detailed technical analysis and presents experiments demonstrating that various fairness measures can be reliably imposed on a number of training tasks in vision in a manner that is interpretable.
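The title references an augmented Lagrangian method; as background only (not necessarily the paper's exact formulation), the generic scheme for a single fairness equality constraint c(θ) = 0, such as a zero demographic-parity gap, alternates a primal minimization with a dual update:

```latex
% Generic augmented Lagrangian for one fairness constraint c(\theta) = 0
% (background sketch only; FairALM's actual constraints and updates may differ).
\begin{aligned}
\mathcal{L}_\rho(\theta, \lambda) &= \ell(\theta) + \lambda\, c(\theta) + \tfrac{\rho}{2}\, c(\theta)^2 \\
\theta_{t+1} &\approx \arg\min_\theta \mathcal{L}_\rho(\theta, \lambda_t), \qquad
\lambda_{t+1} = \lambda_t + \rho\, c(\theta_{t+1})
\end{aligned}
```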
A Maximal Correlation Framework for Fair Machine Learning
- Computer Science, Entropy
- 2022
The maximal correlation framework is introduced for expressing fairness constraints; it can be used to derive regularizers that enforce independence- and separation-based fairness criteria, and these regularizers admit optimization algorithms, for both discrete and continuous variables, that are more computationally efficient than existing approaches.
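For two discrete variables, the maximal (HGR) correlation at the heart of this framework has a classical closed form: it equals the second-largest singular value of Q(x, y) = P(x, y) / sqrt(P(x) P(y)). The numpy sketch below illustrates that characterization; it is not the paper's estimator for continuous features.

```python
import numpy as np

def maximal_correlation(joint):
    """HGR maximal correlation of two discrete variables from their joint pmf."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    q = joint / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(q, compute_uv=False)
    return s[1]  # s[0] == 1 corresponds to the constant functions

# Independent variables -> ~0; identical variables -> ~1.
print(maximal_correlation(np.outer([0.3, 0.7], [0.5, 0.5])))
print(maximal_correlation(np.diag([0.4, 0.6])))
```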
A Maximal Correlation Approach to Imposing Fairness in Machine Learning
- Computer Science, ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2022
The maximal correlation framework is introduced for expressing fairness constraints; it yields regularizers that enforce independence- and separation-based fairness criteria and admit optimization algorithms that are more computationally efficient than existing approaches.
Local Data Debiasing for Fairness Based on Generative Adversarial Training
- Computer Science, Algorithms
- 2021
A novel adversarial training approach called GANSan learns a sanitizer whose objective is to prevent any discrimination based on a sensitive attribute by removing the attribute itself as well as its correlations with the remaining attributes.
Adversarial training approach for local data debiasing
- Computer Science
- 2019
This work proposes a novel approach called GANsan whose objective is to prevent the possibility of any discrimination based on a sensitive attribute by removing the attribute itself as well as the existing correlations with the remaining attributes.
Contrastive Examples for Addressing the Tyranny of the Majority
- Computer Science, ArXiv
- 2020
This work proposes to create a balanced training dataset consisting of the original dataset plus new data points in which group memberships are intervened on, so that minorities become majorities and vice versa, and shows that current generative adversarial networks are a powerful tool for generating such data points, called contrastive examples.
Neural Styling for Interpretable Fair Representations
- Computer Science, ArXiv
- 2018
This paper provides the first approach to learning a highly unconstrained mapping from source to target by maximizing the (conditional) dependence between residuals (the difference between the data and its translated version) and protected characteristics.
Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure
- Computer Science, AIES
- 2019
A novel, tunable algorithm for mitigating the hidden, and potentially unknown, biases within training data is developed; it generalizes across data modalities and learning tasks and is used to address racial and gender bias in facial detection systems.
Information Removal at the bottleneck in Deep Neural Networks
- Computer Science, BMVC
- 2022
This work proposes IRENE, a method to achieve information removal at the bottleneck of deep neural networks, which explicitly minimizes the estimated mutual information between the features to be kept "private" and the target.
Meta Balanced Network for Fair Face Recognition
- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
A novel meta-learning algorithm, called Meta Balanced Network (MBN), learns adaptive margins in a large-margin loss so that the model optimized with this loss performs fairly across people with different skin tones in face recognition.
References
SHOWING 1-10 OF 44 REFERENCES
Learning Fair Representations
- Computer Science, ICML
- 2013
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…
A Reductions Approach to Fair Classification
- Computer Science, ICML
- 2018
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
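Schematically, and in paraphrased notation, the reduction casts the constrained problem as a Lagrangian saddle point; for fixed multipliers λ the inner minimization over classifiers h is exactly a cost-sensitive classification problem:

```latex
% Saddle-point sketch of the reductions approach (paraphrased notation):
% \widehat{\mathrm{err}} is empirical error, and M\,\widehat{\mu}(h) \le c encodes the fairness constraints.
\min_{h}\; \max_{\lambda \ge 0}\; \widehat{\mathrm{err}}(h) \;+\; \lambda^{\top}\!\bigl(M\,\widehat{\mu}(h) - c\bigr)
```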
Learning Adversarially Fair and Transferable Representations
- Computer Science, ICML
- 2018
This paper presents the first in-depth experimental demonstration of fair transfer learning, showing empirically that the learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
Improved Training of Wasserstein GANs
- Computer Science, NIPS
- 2017
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
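A common way to implement this penalty is sketched below in PyTorch (an assumed framework choice); `critic` is any callable scoring flattened inputs, and the usual multiplier on the penalty (often 10) is left to the caller.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP style term: penalize (||grad_x critic(x)|| - 1)^2 at random
    interpolates between real and fake samples (flattened inputs assumed)."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(scores.sum(), interp, create_graph=True)[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

# Toy usage with a linear critic on 8-dimensional vectors (illustrative only).
critic = torch.nn.Linear(8, 1)
real, fake = torch.randn(4, 8), torch.randn(4, 8)
print(gradient_penalty(critic, real, fake).item())
```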
Equality of Opportunity in Supervised Learning
- Computer Science, NIPS
- 2016
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
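A minimal sketch of the post-processing idea, assuming real-valued scores and a binary protected attribute: choose a per-group threshold whose true-positive rate is closest to a common target. The paper's optimal adjustment can also randomize between thresholds to equalize rates exactly; that refinement is omitted here, and all names and values are illustrative.

```python
import numpy as np

def equal_opportunity_thresholds(scores, y_true, group, target_tpr=0.8):
    """Per-group score thresholds whose true-positive rates are closest to a
    common target (simplified, deterministic version of the post-processing idea)."""
    thresholds = {}
    for g in np.unique(group):
        pos = np.sort(scores[(group == g) & (y_true == 1)])
        tprs = 1.0 - np.arange(len(pos)) / len(pos)  # TPR if threshold = pos[i]
        thresholds[g] = pos[np.argmin(np.abs(tprs - target_tpr))]
    return thresholds

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=400)
y_true = rng.integers(0, 2, size=400)
scores = 0.6 * y_true + rng.normal(0.0, 0.3, size=400) - 0.2 * group  # biased scores
print(equal_opportunity_thresholds(scores, y_true, group))
```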
Learning to Pivot with Adversarial Networks
- Computer Science, NIPS
- 2017
This work introduces, and derives theoretical results for, a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model, and it includes a hyperparameter to control the trade-off between accuracy and robustness.
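The paper's adversary models the distribution of a possibly continuous nuisance parameter; the simplified PyTorch sketch below instead uses a binary protected attribute and shows only the alternating updates and the trade-off hyperparameter `lam` mentioned in the summary.

```python
import torch
from torch import nn

# Simplified adversarial "pivot"-style training (illustrative only).
predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # accuracy/robustness trade-off hyperparameter

x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()   # task label
z = torch.randint(0, 2, (64, 1)).float()   # protected attribute

for step in range(200):
    # Adversary step: predict z from the (frozen) predictor's output.
    with torch.no_grad():
        f = predictor(x)
    loss_a = bce(adversary(f), z)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Predictor step: fit the task while making the adversary's job harder.
    f = predictor(x)
    loss_p = bce(f, y) - lam * bce(adversary(f), z)
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```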
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
- Computer Science, ArXiv
- 2017
An adversarial training procedure is used to remove information about the sensitive attribute from the latent representation learned by a neural network, and the data distribution empirically drives the adversary's notion of fairness.
From Parity to Preference-based Notions of Fairness in Classification
- Computer Science, NIPS
- 2017
This paper draws inspiration from the fair-division and envy-freeness literature in economics and game theory and proposes preference-based notions of fairness: any group of users would collectively prefer its own treatment or outcomes, regardless of the (dis)parity relative to other groups.
A Confidence-Based Approach for Balancing Fairness and Accuracy
- Computer Science, SDM
- 2016
A new measure of fairness, called resilience to random bias (RRB), is proposed; it is demonstrated that RRB distinguishes well between the authors' naive and sensible fairness algorithms and, together with bias and accuracy, provides a more complete picture of the fairness of an algorithm.
Censoring Representations with an Adversary
- Computer Science, ICLR
- 2016
This work formulates the adversarial model as a minimax problem, optimizes that minimax objective using a stochastic gradient alternating min-max optimizer, demonstrates the ability to provide discrimination-free representations on standard test problems, and compares with previous state-of-the-art methods for fairness.