Corpus ID: 208267284

How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions

@inproceedings{Otterbacher2019HowDW,
  title={How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions},
  author={Jahna Otterbacher and Pinar Barlas and Styliani Kleanthous and Kyriakos Kyriakou},
  booktitle={AAAI 2019},
  year={2019}
}
Crowdsourcing plays a key role in developing algorithms for image recognition or captioning. Major datasets, such as MS COCO or Flickr30K, have been built by eliciting natural language descriptions of images from workers. Yet such elicitation tasks are susceptible to human biases, including stereotyping people depicted in images. Given the growing concerns surrounding discrimination in algorithms, as well as in the data used to train them, it is necessary to take a critical look at this…