Baby Talk: Understanding and Generating Image Descriptions

Abstract

We posit that visually descriptive language offers computer vision researchers both information about the world and information about how people describe the world. The potential benefit of this source is amplified by the enormous amount of language data readily available today. We present a system that automatically generates natural language descriptions from images, exploiting both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images, and it generates descriptions that are notably more true to the specific image content than previous work.


Cite this paper

@inproceedings{Kulkarni2011BabyTU,
  title={Baby Talk: Understanding and Generating Image Descriptions},
  author={Girish Kulkarni and Visruth Premraj and Sagnik Dhar and Siming Li and Yejin Choi and Alexander C. Berg and Tamara L. Berg},
  year={2011}
}