This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory …
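As a toy illustration of the random-sampling category of robust estimators mentioned above, the sketch below applies the RANSAC idea to simple line fitting rather than fundamental-matrix estimation (the function and parameter names are illustrative, not from the paper): fit a minimal sample, count the points consistent with it, and keep the hypothesis with the largest consensus set.

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=0.5, seed=0):
    """Toy RANSAC: fit y = a*x + b to points contaminated by outliers.

    Repeatedly fits a minimal sample (2 points), counts inliers within
    inlier_tol, and keeps the hypothesis with the largest consensus set.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical minimal sample: skip, cannot express as y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

The same sample-score-keep loop carries over to the fundamental matrix, with the minimal sample becoming seven or eight point correspondences and the residual becoming a distance to the epipolar line.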
We present a general method for real-time, vision-only single-camera simultaneous localisation and mapping (SLAM) — an algorithm which is applicable to the localisation of any camera moving through a scene — and study its application to the localisation of a wearable robot with active vision. Starting from very sparse initial scene knowledge, a map of …
MLESAC is an established algorithm for maximum-likelihood estimation by random sampling consensus, devised for computing multiview entities like the fundamental matrix from correspondences between image features. A shortcoming of the method is that it assumes that little is known about the prior probabilities of the validities of the correspondences. This …
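The key difference between MLESAC and plain consensus counting is the score: each hypothesis is ranked by the likelihood of its residuals under a mixture of a Gaussian (inliers) and a uniform density (outliers). A minimal sketch of that scoring, with illustrative parameter names and a fixed inlier prior `gamma` standing in for the prior probabilities the abstract says are assumed unknown:

```python
import math

def mlesac_score(residuals, sigma=1.0, nu=20.0, gamma=0.5):
    """Negative log-likelihood of residuals under a MLESAC-style mixture:
    inliers ~ Gaussian(0, sigma**2), outliers ~ Uniform over a window nu.
    gamma is the (here fixed) prior probability that a datum is an inlier.
    Lower scores indicate better hypotheses.
    """
    nll = 0.0
    for r in residuals:
        p_in = gamma * math.exp(-r * r / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
        p_out = (1.0 - gamma) / nu
        nll -= math.log(p_in + p_out)
    return nll
```

A random-sampling loop would then minimise this score over hypotheses instead of maximising a raw inlier count, so near-threshold residuals are weighted smoothly rather than counted all-or-nothing.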
This paper presents a system which combines single-camera SLAM (Simultaneous Localization and Mapping) with established methods for feature recognition. Besides using standard salient image features to build an on-line map of the camera's environment, this system is capable of identifying and localizing known planar objects in the scene, and incorporating …
An active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically-detected features. In this paper, we present the …