Person Tracking with a Mobile Robot based on Multi-Modal Anchoring

Abstract

The ability to robustly track a person is an important prerequisite for human-robot interaction. This paper presents a hybrid approach that integrates vision and laser range data to track a human. The legs of a person can be extracted from laser range data, while skin-colored faces are detectable in camera images showing the upper body of a person. As these algorithms provide different percepts originating from the same person, the perceptual results have to be combined. We link the percepts to their symbolic counterparts, legs and face, by anchoring processes as defined by Coradeschi and Saffiotti. To anchor the composite symbol person, we extend the anchoring framework with a fusion module that integrates the individual anchors. This allows the system to deal with perceptual algorithms that have different spatio-temporal properties and provides a structured way of integrating anchors from multiple modalities. An example of a mobile robot tracking a person demonstrates the performance of our approach.
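The idea of fusing the individual anchors (legs from laser, face from vision) into a composite person anchor can be illustrated with a minimal sketch. All names, the data layout, and the recency-weighted averaging scheme below are illustrative assumptions, not the paper's actual implementation; the point is only that component anchors with different timestamps and positions are combined into one composite estimate:

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    symbol: str                  # e.g. "legs" or "face"
    position: tuple[float, float]  # (x, y) estimate in robot coordinates
    timestamp: float             # time of the last matching percept

def fuse_anchors(anchors, now, max_age=1.0):
    """Fuse component anchors into a composite 'person' anchor.

    Anchors whose last percept is older than max_age are ignored
    (their modality is currently not perceiving the person); the
    remaining positions are averaged, weighted by recency, so that
    a fresher percept dominates. Returns None if no anchor is fresh.
    """
    fresh = [a for a in anchors if now - a.timestamp <= max_age]
    if not fresh:
        return None
    weights = [max_age - (now - a.timestamp) for a in fresh]
    total = sum(weights) or 1.0
    x = sum(w * a.position[0] for w, a in zip(weights, fresh)) / total
    y = sum(w * a.position[1] for w, a in zip(weights, fresh)) / total
    return Anchor("person", (x, y), max(a.timestamp for a in fresh))

# Hypothetical readings: legs seen recently by the laser,
# the face seen slightly earlier by the camera.
legs = Anchor("legs", (1.0, 0.0), timestamp=0.9)
face = Anchor("face", (1.2, 0.1), timestamp=0.5)
person = fuse_anchors([legs, face], now=1.0)
```

Because the leg percept is fresher, the fused position lies closer to the laser estimate than to the camera estimate, which mirrors the paper's motivation for handling modalities with different spatio-temporal properties.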

8 Figures and Tables

Statistics

Citations per Year (chart not reproduced; years 2003-2017)

96 Citations

Semantic Scholar estimates that this publication has 96 citations based on the available data.


Cite this paper

@inproceedings{Kleinehagenbrock2002PersonTW,
  title  = {Person Tracking with a Mobile Robot based on Multi-Modal Anchoring},
  author = {Marcus Kleinehagenbrock},
  year   = {2002}
}