This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory …
MLESAC is an established algorithm for maximum-likelihood estimation by random sampling consensus, devised for computing multiview entities like the fundamental matrix from correspondences between image features. A shortcoming of the method is that it assumes that little is known about the prior probabilities of the validities of the correspondences. This …
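The distinguishing idea in the abstract above is that MLESAC scores each randomly sampled hypothesis by the likelihood of the residuals under an inlier/outlier mixture rather than by a plain inlier count. The following is a minimal sketch of that scoring scheme on a toy line-fitting problem (not the fundamental-matrix estimation of the paper); the function name `mlesac_line` and all parameter values are illustrative assumptions.

```python
import math
import random

def mlesac_line(points, iters=500, sigma=1.0, v=20.0, gamma=0.5, seed=0):
    """Fit y = a*x + b by MLESAC-style random sampling.

    Each two-point hypothesis is scored by the negative log-likelihood of
    all residuals under a mixture of a Gaussian inlier model (std sigma,
    mixing weight gamma) and a uniform outlier model over a window of
    width v -- instead of RANSAC's simple inlier count.
    """
    rng = random.Random(seed)
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot define a line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        cost = 0.0
        for x, y in points:
            r = y - (a * x + b)
            p_in = gamma * norm * math.exp(-r * r / (2.0 * sigma ** 2))
            p_out = (1.0 - gamma) / v  # uniform density for outliers
            cost += -math.log(p_in + p_out)
        if cost < best_cost:
            best, best_cost = (a, b), cost
    return best
```

In a real MLESAC implementation the hypotheses would be fundamental matrices estimated from minimal sets of correspondences and the residuals would be reprojection or Sampson errors, but the mixture-likelihood cost has the same shape.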
We show how a system for video-rate parallel camera tracking and 3D map-building can be readily extended to allow one or more cameras to work in several maps, separately or simultaneously. The ability to handle several thousand features per map at video-rate, and for the cameras to switch automatically between maps, allows spatially localized AR workcells …
We present a general method for real-time, vision-only single-camera simultaneous localisation and mapping (SLAM) — an algorithm which is applicable to the localisation of any camera moving through a scene — and study its application to the localisation of a wearable robot with active vision. Starting from very sparse initial scene knowledge, a map of …
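Single-camera SLAM of the kind described above is typically built around an extended Kalman filter that jointly estimates the camera pose and a sparse map of features, starting from a highly uncertain map. A heavily simplified 1D sketch of that predict/update cycle follows — one robot coordinate and one landmark, a linear measurement model (so the EKF reduces to a plain Kalman filter), and all noise values chosen arbitrarily for illustration; the real system operates on 6-DoF camera pose and many 3D features.

```python
import numpy as np

def ekf_slam_1d(controls, measurements, q=0.01, r=0.04):
    """Minimal 1D SLAM filter: state = [robot position, landmark position].

    Motion moves only the robot (adding process noise q); each range
    measurement z = landmark - robot corrects both states and builds the
    robot-landmark correlation that lets the map converge.
    """
    mu = np.array([0.0, 0.0])       # robot starts at origin; landmark unknown
    P = np.diag([0.0, 100.0])       # landmark initially very uncertain
    H = np.array([[-1.0, 1.0]])     # measurement Jacobian of z = m - x
    for u, z in zip(controls, measurements):
        # predict: robot moves by u; only its variance grows
        mu[0] += u
        P[0, 0] += q
        # update: fuse the range measurement
        y = z - (mu[1] - mu[0])     # innovation
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T) / S           # Kalman gain, shape (2, 1)
        mu = mu + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
    return mu, P
```

Repeated measurements shrink the landmark variance from its large initial value, which mirrors how the sparse initial map in the paper is refined as the camera moves.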