Visual Homing From Scale With an Uncalibrated Omnidirectional Camera

Abstract

Visual homing enables a mobile robot to move to a reference position using only visual information. The approaches presented in this paper take as input matched image key points (e.g., scale-invariant feature transform features) extracted from an omnidirectional camera. First, we propose three visual homing methods based on feature scale, feature bearing, and the combination of both, under an image-based visual servoing framework. Second, considering computational cost, we propose a simplified homing method that takes advantage of the scale information of key-point features to compute control commands. The observability and controllability of the algorithm are proven. An outlier rejection algorithm is also introduced and evaluated. All of these methods are compared in both simulations and experiments. We report the performance of all related methods on a series of commonly cited indoor datasets, showing the advantages of the proposed method. Furthermore, they are tested on a compact dataset of omnidirectional panoramic images captured under dynamic conditions, with ground truth, which is provided for future research and comparison.
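To illustrate the scale-based homing idea in the abstract, the following is a minimal sketch, not the paper's actual image-based visual servoing control law: a matched feature that appears at a larger scale in the current image than in the reference (home) image is presumably closer than it was at home, so its bearing contributes a repulsive component to the homing direction, and vice versa. All function and variable names are hypothetical and the log-ratio weighting is an illustrative assumption.

import numpy as np

def homing_direction(bearings_cur, scales_cur, scales_ref):
    """Estimate a 2-D homing direction from matched omnidirectional key points.

    bearings_cur : feature bearings (rad) in the current image, robot frame
    scales_cur   : feature scales in the current image
    scales_ref   : scales of the same features in the reference (home) image
    """
    bearings = np.asarray(bearings_cur, dtype=float)
    # log scale ratio: > 0 means the feature looks bigger now, i.e. the robot
    # is closer to it than it was at the home position
    weights = np.log(np.asarray(scales_cur, float) / np.asarray(scales_ref, float))
    # unit vectors pointing toward each feature in the robot frame
    directions = np.stack([np.cos(bearings), np.sin(bearings)], axis=1)
    # move away from features that look too large, toward those that look too small
    v = -(weights[:, None] * directions).sum(axis=0)
    norm = np.linalg.norm(v)
    return v / norm if norm > 1e-9 else np.zeros(2)

In a full system this direction would be fed to a low-level controller and recomputed at each frame, with outlier matches rejected beforehand, as the paper's pipeline suggests.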

DOI: 10.1109/TRO.2013.2272251



Cite this paper

@article{Liu2013VisualHF,
  title   = {Visual Homing From Scale With an Uncalibrated Omnidirectional Camera},
  author  = {Ming Liu and C{\'e}dric Pradalier and Roland Siegwart},
  journal = {IEEE Transactions on Robotics},
  year    = {2013},
  volume  = {29},
  pages   = {1353-1365}
}