Women also Snowboard: Overcoming Bias in Captioning Models

@inproceedings{Burns2018WomenAS,
  title={Women also Snowboard: Overcoming Bias in Captioning Models},
  author={Kaylee Burns and Lisa Anne Hendricks and Trevor Darrell and Anna Rohrbach},
  booktitle={ECCV},
  year={2018}
}
Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned…
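The amplification effect the abstract describes can be made concrete with a small sketch: compare how often a word appears in the training captions against how often the model predicts it. This is only an illustration of the measurement idea; the function name and toy captions below are hypothetical, not from the paper.

```python
def occurrence_rate(captions, word):
    """Fraction of captions whose tokens include the given word."""
    return sum(word in caption.split() for caption in captions) / len(captions)

# Hypothetical toy data: "snowboarding" appears in 60% of training
# captions, but the model emits it in 80% of generated captions.
train_captions = [
    "a man snowboarding down a hill",
    "a woman snowboarding on a slope",
    "a person snowboarding in the snow",
    "a dog running on the beach",
    "a child playing in a park",
]
generated_captions = [
    "a man snowboarding down a hill",
    "a man snowboarding on a slope",
    "a man snowboarding in the snow",
    "a man snowboarding near trees",
    "a child playing in a park",
]

train_rate = occurrence_rate(train_captions, "snowboarding")
pred_rate = occurrence_rate(generated_captions, "snowboarding")
# Positive difference => the model exaggerates the training-set bias.
amplification = pred_rate - train_rate
print(f"train={train_rate:.2f} predicted={pred_rate:.2f} amplification={amplification:+.2f}")
```

A positive `amplification` corresponds to the abstract's 60%-to-70% example: the model over-predicts the word relative to its training frequency.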
148 Citations

- Mitigating Gender Bias in Captioning Systems
- Detecting Gender Stereotypes: Lexicon vs. Supervised Learning Methods
- Assessing Social and Intersectional Biases in Contextualized Word Representations
- Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles
- Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases
- To "See" is to Stereotype
- Mitigating Gender Bias in Natural Language Processing: Literature Review
