Attend, Infer, Repeat: Fast Scene Understanding with Generative Models

Abstract

We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use this scheme to learn to perform inference in partially specified 2D models (variable-sized variational auto-encoders) and fully specified 3D models (probabilistic renderers). We show that such models learn to identify multiple objects – counting, locating and classifying the elements of a scene – without any supervision, e.g., decomposing 3D images with various numbers of objects in a single forward pass of a neural network at unprecedented speed. We further show that the networks produce accurate inferences when compared to supervised counterparts, and that their structure leads to improved generalization.
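The abstract describes a recurrent inference network that attends to one scene element per step and itself decides when to stop, emitting per-object latent variables. The loop structure can be sketched as follows; this is a toy illustration, not the authors' architecture — the random linear maps stand in for trained networks, and the variable names (`z_pres` for presence, `z_where` for pose, `z_what` for appearance) follow the paper's terminology while the dimensions and 0.5 stopping threshold are illustrative assumptions.

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols, scale=0.1):
    """Random weight matrix standing in for a trained network."""
    return [[random.gauss(0.0, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def air_inference(image, max_steps=3, hidden_dim=8):
    """One forward pass of an AIR-style loop: attend to one scene element
    per step; stop when the inferred presence variable falls below 0.5."""
    in_dim = len(image) + hidden_dim
    W_h = rand_matrix(hidden_dim, in_dim)     # recurrent update
    W_pres = rand_matrix(1, hidden_dim)       # continue/stop head
    W_where = rand_matrix(3, hidden_dim)      # pose: scale, x, y (assumed)
    W_what = rand_matrix(4, hidden_dim)       # appearance code (assumed size)

    h = [0.0] * hidden_dim
    objects = []
    for _ in range(max_steps):
        # Recurrent state sees the image and the previous state.
        h = [math.tanh(v) for v in matvec(W_h, image + h)]
        p_pres = sigmoid(matvec(W_pres, h)[0])
        if p_pres < 0.5:
            break  # the model itself chooses the number of inference steps
        objects.append({"z_where": matvec(W_where, h),
                        "z_what": matvec(W_what, h)})
    return objects

scene = [random.gauss(0.0, 1.0) for _ in range(64)]  # flattened 8x8 "image"
latents = air_inference(scene)
print("inferred", len(latents), "object(s)")
```

Because the stopping decision is part of the forward pass, the number of objects inferred varies per scene without any outer search — this is what makes a single-forward-pass decomposition of variable-object scenes possible.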



Cite this paper

@inproceedings{Eslami2016AttendIR,
  title     = {Attend, Infer, Repeat: Fast Scene Understanding with Generative Models},
  author    = {S. M. Ali Eslami and Nicolas Heess and Theophane Weber and Yuval Tassa and David Szepesvari and Koray Kavukcuoglu and Geoffrey E. Hinton},
  booktitle = {NIPS},
  year      = {2016}
}