Outdoor SLAM using visual appearance and laser ranging


This paper describes a 3D SLAM system using information from an actuated laser scanner and camera installed on a mobile robot. The laser samples the local geometry of the environment and is used to incrementally build a 3D point-cloud map of the workspace. Sequences of images from the camera are used to detect loop closure events (without reference to the internal estimates of vehicle location) using a novel appearance-based retrieval system. The loop closure detection is robust to repetitive visual structure and provides a probabilistic measure of confidence. The images suggesting loop closure are then further processed with their corresponding local laser scans to yield putative Euclidean image-image transformations. We show how naive application of this transformation to effect the loop closure can lead to catastrophic linearization errors and go on to describe a way in which gross, pre-loop closing errors can be successfully annulled. We demonstrate our system working in a challenging, outdoor setting containing substantial loops and beguiling, gently curving traversals. The results are overlaid on an aerial image to provide a ground truth comparison with the estimated map. The paper concludes with an extension into the multi-robot domain in which 3D maps resulting from distinct SLAM sessions (no common reference frame) are combined without recourse to mutual observation.
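To illustrate the appearance-based loop-closure idea the abstract describes, the sketch below flags candidate loop closures by comparing image descriptors with cosine similarity. The visual-word-histogram representation, the fixed threshold, and the `min_gap` parameter are illustrative assumptions for this sketch, not the authors' actual retrieval system (which is robust to repetitive structure and returns a probabilistic confidence rather than a hard threshold).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two visual-word histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

def detect_loop_closures(histograms, threshold=0.8, min_gap=10):
    """Flag image pairs whose appearance similarity exceeds a threshold.

    `min_gap` skips temporally adjacent frames, which are trivially
    similar and would not represent a genuine revisit.  Returns a list
    of (i, j, similarity) candidate loop-closure pairs with i < j.
    """
    candidates = []
    for j in range(len(histograms)):
        for i in range(j - min_gap):
            s = cosine_similarity(histograms[i], histograms[j])
            if s >= threshold:
                candidates.append((i, j, s))
    return candidates
```

In the full system, each accepted candidate pair would then be passed on with its local laser scans to estimate the relative Euclidean transformation used to close the loop.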

DOI: 10.1109/ROBOT.2006.1641869

10 Figures and Tables

346 Citations (Semantic Scholar estimate)

Cite this paper

@inproceedings{Newman2006OutdoorSU,
  title={Outdoor SLAM using visual appearance and laser ranging},
  author={Paul Newman and David M. Cole and Kin Leong Ho},
  booktitle={Proceedings 2006 IEEE International Conference on Robotics and Automation (ICRA 2006)},
  year={2006},
  pages={1180-1187}
}