Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

Abstract

Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
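
To make the deterministic ("soft") attention variant concrete, the sketch below shows the core idea: the decoder forms a context vector as a softmax-weighted average of the convolutional annotation vectors, which keeps the whole model differentiable and trainable with standard backpropagation. This is a minimal illustration, not the paper's code; the parameter names (W_a, W_h, v) and shapes are assumptions chosen for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention_context(annotations, h_prev, W_a, W_h, v):
    """Deterministic (soft) attention step.

    annotations : (L, D) annotation vectors a_i from a convolutional layer
    h_prev      : (H,)   previous decoder hidden state h_{t-1}
    W_a, W_h, v : attention MLP parameters (shapes (D, K), (H, K), (K,); illustrative)
    Returns the expected context vector z_t and the attention weights alpha_t.
    """
    # Alignment scores e_{t,i} = v^T tanh(W_a a_i + W_h h_{t-1})
    scores = np.tanh(annotations @ W_a + h_prev @ W_h) @ v   # (L,)
    alphas = softmax(scores)                                  # weights over image locations, sum to 1
    context = alphas @ annotations                            # weighted average of annotation vectors
    return context, alphas
```

In the stochastic ("hard") variant, a single location would instead be sampled from the alphas and the model trained by maximizing a variational lower bound, since the sampling step is not differentiable.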



Citations per Year: Semantic Scholar estimates that this publication has 1,071 citations based on the available data.


Cite this paper

@inproceedings{Xu2015ShowAA,
  title     = {Show, Attend and Tell: Neural Image Caption Generation with Visual Attention},
  author    = {Kelvin Xu and Jimmy Ba and Ryan Kiros and Kyunghyun Cho and Aaron C. Courville and Ruslan Salakhutdinov and Richard S. Zemel and Yoshua Bengio},
  booktitle = {ICML},
  year      = {2015}
}