Null-sampling for Interpretable and Fair Representations

@inproceedings{Kehrenberg2020NullsamplingFI,
  title={Null-sampling for Interpretable and Fair Representations},
  author={Thomas Kehrenberg and Myles Bartlett and Oliver Thomas and Novi Quadrianto},
  booktitle={ECCV},
  year={2020}
}
We propose to learn invariant representations, in the data domain, to achieve interpretability in algorithmic fairness. Invariance implies a selectivity for high-level, relevant correlations w.r.t. class label annotations, and a robustness to irrelevant correlations with protected characteristics such as race or gender. We introduce a non-trivial setup in which the training set exhibits a strong bias such that class label annotations are irrelevant and spurious correlations cannot be…
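As a rough sketch of the idea (not the authors' exact implementation: the invertible network called flow here, its forward/inverse methods, and the position of the s-subspace in the latent code are placeholder assumptions), null-sampling can be read as encoding an input, zeroing the latent subspace tied to the protected characteristic, and decoding back into the data domain:

import torch

def null_sample(flow, x, s_dim):
    # Encode with an invertible network, e.g. a normalizing flow.
    z = flow.forward(x)
    # Split the latent code into the s-subspace and the rest.
    z_s, z_rest = z.split([s_dim, z.size(1) - s_dim], dim=1)
    # "Null-sample": replace the s-subspace with zeros.
    z_null = torch.cat([torch.zeros_like(z_s), z_rest], dim=1)
    # Invert back to the data domain; the result should carry no
    # information about the protected characteristic.
    return flow.inverse(z_null)

Because the invariant representation lives in the data domain, it can be inspected directly, which is where the interpretability claim comes from.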
Citations

Fair Normalizing Flows
TLDR
This work presents Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations; experiments demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning.
Personalizing Pre-trained Models
TLDR
This work develops a technique, called Multi-label Weight Imprinting (MWI), for multi-label, continual, and few-shot learning; CLIPPER (CLIP PERsonalized) uses image representations from CLIP, a large-scale image representation learning model trained using weak natural language supervision.
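For context, weight imprinting in its generic form (a hedged sketch of the general technique, not necessarily the MWI variant; shapes and names are illustrative) sets the classifier weight for a new class to the normalized mean embedding of its few support examples:

import torch
import torch.nn.functional as F

def imprint_class_weight(embeddings):
    # embeddings: (n_shots, dim) image features, e.g. from CLIP.
    # Normalize each support embedding, average, then re-normalize;
    # the result serves as the weight vector of the new class.
    w = F.normalize(embeddings, dim=1).mean(dim=0)
    return F.normalize(w, dim=0)

Class scores are then cosine similarities between a query embedding and the imprinted weights; scoring each class with its own sigmoid rather than a shared softmax extends this to the multi-label setting.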
Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks
TLDR
It is proved that, although a fair representation might not guarantee fairness for all prediction tasks, it does guarantee fairness for an important subset of tasks: the tasks for which the representation is discriminative.

References

Showing 1–10 of 48 references
Discovering Fair Representations in the Data Domain
TLDR
This work proposes to cast the problem of interpretability and fairness in computer vision and machine learning applications as data-to-data translation, i.e. learning a mapping from an input domain to a fair target domain, where a fairness definition is being enforced.
Flexibly Fair Representation Learning by Disentanglement
TLDR
This work proposes an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also flexibly fair, meaning they can easily be modified at test time to achieve subgroup demographic parity.
Learning Fair Representations via an Adversarial Framework
TLDR
A minimax adversarial framework, with a generator to capture the data distribution and generate latent representations and a critic to ensure that the distributions across different protected groups are similar, provides a theoretical guarantee with respect to statistical parity and individual fairness.
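In generic notation (ours, not the paper's), such a framework takes the form of a minimax objective in which the generator/encoder g and task head f minimize the prediction loss while the critic c is rewarded for telling the protected groups apart from the representation:

\min_{g,f}\ \max_{c}\ \mathbb{E}_{x,y}\big[\ell\big(f(g(x)), y\big)\big] + \lambda \Big( \mathbb{E}_{x \mid s=0}\big[\log c(g(x))\big] + \mathbb{E}_{x \mid s=1}\big[\log\big(1 - c(g(x))\big)\big] \Big)

At the saddle point the critic cannot distinguish the groups, i.e. the latent distributions conditioned on s coincide.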
Unsupervised Adversarial Invariance
TLDR
This work presents a novel unsupervised invariance induction framework for neural networks that learns a split representation of data through competitive training between the prediction task and a reconstruction task coupled with disentanglement, without needing any labeled information about nuisance factors or domain knowledge.
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
TLDR
An adversarial training procedure is used to remove information about the sensitive attribute from the latent representation learned by a neural network, and the data distribution empirically drives the adversary's notion of fairness.
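A common realization of this kind of adversarial removal (a sketch of one standard construction, not necessarily this paper's exact procedure; module names are placeholders) uses a gradient-reversal layer so that encoder and adversary train in a single backward pass:

import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on the
    # backward pass, so the encoder is updated to hurt the adversary.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad):
        return -grad

def joint_loss(encoder, task_head, adv_head, x, y, s):
    z = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(z), y)
    # The adversary tries to predict the sensitive attribute s from z;
    # the reversed gradient pushes the encoder to strip s-information.
    adv_loss = nn.functional.cross_entropy(adv_head(GradReverse.apply(z)), s)
    return task_loss + adv_loss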
Discovering Interpretable Representations for Both Deep Generative and Discriminative Models
TLDR
This work provides an interpretable lens for an existing model, and proposes two interpretability frameworks that rely on joint optimization for a representation which is both maximally informative about the side information and maximally compressive about the non-interpretable data factors.
The Variational Fair Autoencoder
TLDR
This model is based on a variational autoencoding architecture, with priors that encourage independence between sensitive and latent factors of variation; it is more effective than previous work at removing unwanted sources of variation while maintaining informative latent representations.
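One standard ingredient in this family of models is a Maximum Mean Discrepancy (MMD) penalty that pulls the latent distributions of the sensitive groups together; a minimal RBF-kernel version (the bandwidth and the biased estimator are illustrative choices):

import torch

def rbf_mmd2(z0, z1, bandwidth=1.0):
    # Biased MMD^2 estimate between latents of two sensitive groups,
    # z0 and z1 of shape (n, dim), under a Gaussian kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return k(z0, z0).mean() + k(z1, z1).mean() - 2 * k(z0, z1).mean()

Adding such a term to the variational objective penalizes latent codes whose distribution differs across the sensitive attribute.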
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
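For reference, the beta-VAE objective is the standard evidence lower bound with the KL term up-weighted by a factor beta > 1, which pressures the approximate posterior toward the factorised prior and thereby encourages disentanglement:

\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \beta\, D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)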
Censoring Representations with an Adversary
TLDR
This work formulates the adversarial model as a minimax problem and optimizes that objective with a stochastic-gradient alternating min-max optimizer; it demonstrates the ability to provide discrimination-free representations for standard test problems and compares with previous state-of-the-art methods for fairness.
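A sketch of such alternating stochastic min-max updates (all modules, optimizers, and the data loader are placeholders supplied by the caller; lam trades task accuracy against censoring):

import torch

def train_epoch(encoder, task_head, adversary, loader, model_opt, adv_opt, lam=1.0):
    xent = torch.nn.functional.cross_entropy
    for x, y, s in loader:
        z = encoder(x)

        # Max step: fit the adversary to predict s from a frozen z.
        adv_loss = xent(adversary(z.detach()), s)
        adv_opt.zero_grad(); adv_loss.backward(); adv_opt.step()

        # Min step: solve the task while making s unpredictable from z.
        model_loss = xent(task_head(z), y) - lam * xent(adversary(z), s)
        model_opt.zero_grad(); model_loss.backward(); model_opt.step()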
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…
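The group-fairness notion invoked here is statistical (demographic) parity: with prediction \hat{y} and protected attribute s, the positive rate must match across groups,

\Pr(\hat{y} = 1 \mid s = 0) = \Pr(\hat{y} = 1 \mid s = 1).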