Corpus ID: 215754835

Contrastive Examples for Addressing the Tyranny of the Majority

@article{Sharmanska2020ContrastiveEF,
  title={Contrastive Examples for Addressing the Tyranny of the Majority},
  author={V. Sharmanska and Lisa Anne Hendricks and Trevor Darrell and Novi Quadrianto},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.06524}
}
Computer vision algorithms, e.g. for face recognition, favour groups of individuals that are better represented in the training data. This happens because of the generalization that classifiers have to make: it is simpler to fit the majority groups, as that fit contributes more to the overall error. We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which group membership is intervened on, so that minorities become majorities and vice versa. We…
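The balancing idea from the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: `intervene` stands in for whatever generative model (in the paper, an image-to-image translation network) synthesizes a contrastive counterpart of a sample in a different group, and the function name and data layout are assumptions for this sketch.

```python
from collections import Counter
import random

def balance_with_contrastive_examples(dataset, intervene):
    """Augment `dataset` so every group reaches the size of the
    current majority group.

    `dataset` is a list of (x, group, label) triples; `intervene` is a
    hypothetical generator mapping a sample x into a contrastive
    counterpart belonging to the target group.
    """
    counts = Counter(group for _, group, _ in dataset)
    target = max(counts.values())
    augmented = list(dataset)
    for group, n in counts.items():
        # Draw source samples from the other (larger) groups and
        # intervene on their group membership to synthesize new
        # members of the under-represented group.
        sources = [s for s in dataset if s[1] != group]
        for _ in range(target - n):
            x, _, y = random.choice(sources)
            augmented.append((intervene(x, group), group, y))
    return augmented
```

After augmentation each group contributes equally to the training loss, which is the mechanism by which the majority's dominance over the overall error is removed.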
Citations (2)

  • Fair Attribute Classification through Latent Space De-biasing
