Image Captioning with Object Detection and Localization


Automatically generating a natural language description of an image is a task close to the heart of image understanding. In this paper, we present a multi-model neural network method, closely related to the human visual system, that automatically learns to describe the content of images. Our model consists of two sub-models: an object detection and localization model, which extracts information about objects and their spatial relationships in images, and a deep recurrent neural network (RNN) based on long short-term memory (LSTM) units with an attention mechanism for sentence generation. As each word of the description is generated, it is automatically aligned to different objects in the input image, mirroring the attention mechanism of the human visual system. Experimental results on the COCO dataset demonstrate the merit of the proposed method, which outperforms previous benchmark models.
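The word-to-object alignment described above can be sketched as a soft-attention step: at each decoding step, the LSTM hidden state scores the features of the detected objects, and the resulting weights both form the visual context for the next word and give the alignment. This is a minimal NumPy sketch using an additive-attention parametrisation; the function name, weight shapes, and scoring form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(object_features, hidden, W_v, W_h, w):
    """One soft-attention step over detected-object features.

    Hypothetical additive-attention form (assumption, not the
    authors' exact parametrisation):

    object_features : (k, d) one feature row per detected object
    hidden          : (n,)   decoder LSTM hidden state
    W_v (m, d), W_h (m, n), w (m,) : projection parameters

    Returns (context, weights): context (d,) is the attended visual
    input for the next word; weights (k,) sum to 1 and give the
    word-to-object alignment.
    """
    # Score each object against the current decoder state.
    scores = np.tanh(object_features @ W_v.T + hidden @ W_h.T) @ w  # (k,)
    weights = softmax(scores)             # alignment of next word to objects
    context = weights @ object_features   # (d,) weighted sum of object features
    return context, weights
```

At generation time, `context` would be fed into the LSTM together with the previous word embedding, and `weights` can be inspected to see which object each emitted word attended to.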

4 Figures and Tables

Cite this paper

@article{Yang2017ImageCW,
  title={Image Captioning with Object Detection and Localization},
  author={Zhongliang Yang and Yu-Jin Zhang and Sadaqat ur Rehman and Yongfeng Huang},
  journal={CoRR},
  year={2017},
  volume={abs/1706.02430}
}