Closing the Recognition Loop: Recognizing and Searching for Objects in the Real World

Abstract

In 1966, psychologist J. J. Gibson wrote, "we see because we move; we move because we see." While the latter statement is often taken for granted, the former is important as well. We have developed a robotic system that can find and identify objects whether or not they are initially visible, and that updates its knowledge of the locations of such objects. To that end, our system combines a number of key components: visual-inertial Structure from Motion with topological map building to localize the robot and map the environment, real-time occlusion detection to guide an efficient search of the environment, object recognition/categorization, and path planning/navigation to guide low-level control. This abstract briefly describes the overall system and these components, with references to longer works for details. A real-time demo will be on-site at the workshop.
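To make the closed loop concrete, the sketch below shows one way the components described above could interact: the robot repeatedly moves to a new viewpoint, "recognizes" whatever is visible from there, and uses not-yet-seen (occluded) regions to choose where to look next. This is a minimal grid-world illustration under assumed abstractions, not the authors' implementation; all names (`visible_from`, `search_for_object`) are hypothetical.

```python
from collections import deque

def visible_from(world, pos):
    """Cells observable from a viewpoint: here, the cell itself plus its
    4-neighborhood. A real system would use the camera's field of view
    and detected occluders instead."""
    x, y = pos
    candidates = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in candidates if c in world]

def search_for_object(world, start, target):
    """Closed recognition loop over a toy topological map.

    `world` maps grid cells to an object label (or None).
    The robot visits viewpoints breadth-first, recognizing everything
    visible from each one; unseen neighbors (occluded regions) become
    the next viewpoints. Returns the cell where `target` was found,
    or None if the whole reachable environment is searched without it.
    """
    visited = set()           # viewpoints the robot has physically reached
    to_visit = deque([start]) # frontier of viewpoints, seeded with the start
    while to_visit:
        pos = to_visit.popleft()
        if pos in visited:
            continue
        visited.add(pos)
        # "Recognition": inspect everything visible from this viewpoint.
        for cell in visible_from(world, pos):
            if world[cell] == target:
                return cell
            # "Occlusion-guided search": unvisited cells are queued as
            # future viewpoints so the robot can look behind them.
            if cell not in visited:
                to_visit.append(cell)
    return None
```

A real system replaces the grid with a topological map built by visual-inertial SfM, the visibility model with real-time occlusion detection, and the label check with an object recognizer, but the control flow (map, detect occlusions, plan, recognize, repeat) follows the same loop.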

Cite this paper

@inproceedings{Meltzer2010ClosingTR,
  title  = {Closing the Recognition Loop: Recognizing and Searching for Objects in the Real World},
  author = {Jason Meltzer and Alberto Pretto and Brian Taylor and Stefano Soatto},
  year   = {2010}
}