Visual information abstraction for interactive robot learning

Abstract

Semantic visual perception for knowledge acquisition plays an important role in human cognition, as well as in the learning process of any cognitive robot. In this paper, we present a visual information abstraction mechanism designed for continuously learning robotic systems. We generate spatial information about the scene by treating plane estimation and stereo line detection coherently within a unified probabilistic framework, and show how spaces of interest (SOIs) are generated and segmented using this spatial information. We also demonstrate how the existence of SOIs is validated during long-term learning. The proposed mechanism enables robust visual information abstraction, which is a prerequisite for continuous interactive learning. Experiments demonstrate that, with the refined spatial information, our approach provides an accurate and plausible representation of visual objects.
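As an illustration of the kind of plane estimation the abstract refers to, the following is a minimal sketch of a RANSAC plane fit on a 3-D point cloud. This is not the paper's unified probabilistic framework (which couples plane estimation with stereo line detection); it is only a generic, assumed baseline for recovering a dominant scene plane, with the function name and parameters chosen here for illustration.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane n.p + d = 0 to an (N, 3) point cloud via RANSAC.

    Returns ((normal, d), inlier_mask) for the candidate plane with
    the most points closer than `threshold`.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane from them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (near-collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within `threshold` of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

In a tabletop setting, the dominant plane found this way would correspond to the supporting surface, and points far from it become candidates for spaces of interest.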


Cite this paper

@article{Zhou2011VisualIA,
  title   = {Visual information abstraction for interactive robot learning},
  author  = {Kai Zhou and Andreas Richtsfeld and Michael Zillich and Markus Vincze and Alen Vrecko and Danijel Skocaj},
  journal = {2011 15th International Conference on Advanced Robotics (ICAR)},
  year    = {2011},
  pages   = {328-334}
}