Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels

@article{Misra2016SeeingTT,
  title={Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels},
  author={Ishan Misra and C. L. Zitnick and Margaret Mitchell and Ross B. Girshick},
  journal={2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016},
  pages={2930-2939}
}
When human annotators are given a choice about what to label in an image, they apply their own subjective judgments on what to ignore and what to mention. We refer to these noisy "human-centric" annotations as exhibiting human reporting bias. Examples of such annotations include image tags and keywords found on photo sharing sites, or in datasets containing image captions. In this paper, we use these noisy annotations for learning visually correct image classifiers. Such annotations do not use…
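The idea sketched in the abstract, separating what is visually present from what a human chooses to report, can be written as a small latent-variable model: the probability that a human mentions label h given image I factors as p(h=1|I) = p(h=1|v=1,I) p(v=1|I) + p(h=1|v=0,I) p(v=0|I), where v is the latent visual presence of the concept. Below is a minimal PyTorch sketch of that factorization, not the authors' implementation; the class and variable names are illustrative, and the image feature extractor is assumed to exist upstream.

import torch
import torch.nn as nn

class ReportingBiasHeads(nn.Module):
    """Sketch: three per-label heads over shared image features.
    visual             -> p(v=1 | I)       latent "is it in the image?"
    mention_if_present -> p(h=1 | v=1, I)  "would a human bother to mention it?"
    mention_if_absent  -> p(h=1 | v=0, I)  mentions of things not actually present
    """
    def __init__(self, feat_dim: int, num_labels: int):
        super().__init__()
        self.visual = nn.Linear(feat_dim, num_labels)
        self.mention_if_present = nn.Linear(feat_dim, num_labels)
        self.mention_if_absent = nn.Linear(feat_dim, num_labels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        p_v = torch.sigmoid(self.visual(feats))
        p_h_given_v1 = torch.sigmoid(self.mention_if_present(feats))
        p_h_given_v0 = torch.sigmoid(self.mention_if_absent(feats))
        # Marginalize over the latent visual label v.
        return p_h_given_v1 * p_v + p_h_given_v0 * (1.0 - p_v)

# Training sees only the noisy human labels; the loss is taken against
# p(h|I), so the visual head is never supervised directly.
model = ReportingBiasHeads(feat_dim=2048, num_labels=1000)
feats = torch.randn(8, 2048)                       # e.g., pooled CNN features
human_labels = torch.randint(0, 2, (8, 1000)).float()
loss = nn.functional.binary_cross_entropy(model(feats), human_labels)
loss.backward()

At test time, only the visual head p(v=1|I) would be kept: that is the visually correct classifier the abstract refers to, with the reporting bias absorbed by the two mention heads during training.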
118 Citations
  • Approximating Human Judgment of Generated Image Quality
  • What's in a Question: Using Visual Questions as a Form of Supervision
  • Binary Image Selection (BISON): Interpretable Evaluation of Visual Grounding
  • Cap2Det: Learning to Amplify Weak Caption Supervision for Object Detection
  • RUBi: Reducing Unimodal Biases in Visual Question Answering
