This paper has two goals. The first is to develop a variety of robust methods for the computation of the Fundamental Matrix, the calibration-free representation of camera motion. The methods are drawn from the principal categories of robust estimators, viz. case deletion diagnostics, M-estimators and random sampling, and the paper develops the theory…
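The fundamental matrix F relates corresponding image points x and x' in two views through the epipolar constraint x'^T F x = 0, and the random-sampling estimators surveyed in the paper fit F to minimal point subsets and keep the hypothesis with the largest consensus set. Below is a minimal sketch of that idea using OpenCV's built-in RANSAC estimator on synthetic correspondences; the camera intrinsics, baseline and noise level are assumed values, and the code is a generic illustration rather than the paper's own method.

```python
import numpy as np
import cv2

# Synthetic two-view geometry (assumed data): 3D points in front of the
# cameras, viewed by an identity camera and a second camera translated in x.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 3)) + np.array([0.0, 0.0, 5.0])
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, R, t):
    x = (K @ (R @ points.T + t)).T
    return (x[:, :2] / x[:, 2:3]).astype(np.float32)

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, np.eye(3), np.array([[0.2], [0.0], [0.0]]))
pts2 += rng.normal(0.0, 0.5, size=pts2.shape).astype(np.float32)  # measurement noise

# RANSAC draws minimal point samples, fits a candidate F to each, and keeps
# the hypothesis with the largest consensus set within the pixel threshold.
F, inlier_mask = cv2.findFundamentalMat(
    pts1, pts2, cv2.FM_RANSAC, ransacReprojThreshold=1.0, confidence=0.99)
print("inliers:", int(inlier_mask.sum()), "of", len(pts1))
```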
MLESAC is an established algorithm for maximum-likelihood estimation by random sampling consensus, devised for computing multiview entities like the fundamental matrix from correspondences between image features. A shortcoming of the method is that it assumes that little is known about the prior probabilities of the validities of the correspondences. This…
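MLESAC scores each randomly sampled hypothesis by the likelihood of its residuals under a mixture of a Gaussian inlier distribution and a uniform outlier distribution, with the mixing proportion estimated by expectation-maximisation; giving every correspondence the same prior is precisely the assumption the abstract flags. A sketch of that per-hypothesis score follows; sigma, nu and the iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def mlesac_score(residuals, sigma=1.0, nu=100.0, em_iters=5):
    """Negative log-likelihood of one hypothesis under the MLESAC mixture model."""
    r = np.asarray(residuals, dtype=float)
    # Inlier errors modelled as zero-mean Gaussian, outliers as uniform on [0, nu).
    gauss = np.exp(-0.5 * (r / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    uniform = 1.0 / nu
    gamma = 0.5                                    # initial guess at the inlier fraction
    for _ in range(em_iters):
        p_in = gamma * gauss
        p_out = (1.0 - gamma) * uniform
        w = p_in / (p_in + p_out)                  # posterior that each match is an inlier
        gamma = float(np.clip(w.mean(), 1e-3, 1.0 - 1e-3))
    return -np.sum(np.log(gamma * gauss + (1.0 - gamma) * uniform))

# Small residuals are explained by the Gaussian, gross errors by the uniform tail;
# the hypothesis with the lowest score wins.
print(mlesac_score([0.2, -0.4, 0.1, 35.0, -50.0]))
```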
This paper presents a system which combines single-camera SLAM (Simultaneous Localization and Mapping) with established methods for feature recognition. Besides using standard salient image features to build an on-line map of the camera's environment, this system is capable of identifying and localizing known planar objects in the scene, and incorporating…
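A common recipe for recognising and localising a known planar object is to match image features against a stored template and fit a homography robustly. The sketch below follows that standard recipe with OpenCV's ORB features, as an assumed illustration rather than the paper's actual recognition pipeline; template_gray and frame_gray are assumed grayscale input images.

```python
import numpy as np
import cv2

def locate_planar_object(template_gray, frame_gray, min_matches=10):
    """Find a known planar object in a frame by feature matching plus a robust homography fit."""
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_f)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # A RANSAC homography gives the object's location in the image; a SLAM
    # system could then back-project the known plane into its 3D map.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```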
An active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically-detected features. In this paper, we present the…
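Systems of this kind maintain the camera pose and the map features in a single EKF state. The sketch below shows the generic EKF measurement update such a filter relies on; the function, the toy two-state system and the numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: x, P are the state mean and covariance
    (e.g. camera pose plus feature positions), z an observed measurement,
    h the measurement function, H its Jacobian at x, R the measurement noise."""
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy usage with an assumed two-state system in which only the first state is observed.
x, P = np.zeros(2), np.eye(2)
H = np.array([[1.0, 0.0]])
x, P = ekf_update(x, P, z=np.array([0.5]), h=lambda s: H @ s, H=H, R=np.array([[0.1]]))
print(x)
```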
We show how a system for video-rate parallel camera tracking and 3D map-building can be readily extended to allow one or more cameras to work in several maps, separately or simultaneously. The ability to handle several thousand features per map at video-rate, and for the cameras to switch automatically between maps, allows spatially localized AR workcells…
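One way to picture the multi-map behaviour is a tracker that holds several independent maps and, when relocalising, switches to whichever map best explains the current frame. The toy sketch below uses hypothetical class names and a deliberately simplified matching score in place of real feature matching and pose verification.

```python
from dataclasses import dataclass, field

@dataclass
class LocalMap:
    # One spatially localized map; real features would be 3D points with
    # appearance descriptors, here they are just labels for illustration.
    name: str
    features: set = field(default_factory=set)

class MultiMapTracker:
    """Keeps several maps and switches to the one the current frame relocalises against."""

    def __init__(self, maps):
        self.maps = list(maps)
        self.active = None

    def relocalise(self, frame_features):
        # Score each map by how many of the frame's features it explains; a real
        # tracker would match descriptors and verify the recovered camera pose.
        self.active = max(self.maps, key=lambda m: len(m.features & frame_features))
        return self.active

# Two workcell maps; the tracker switches to whichever the frame matches best.
maps = [LocalMap("desk", {"mug", "keyboard", "lamp"}),
        LocalMap("printer", {"tray", "panel", "lamp"})]
tracker = MultiMapTracker(maps)
print(tracker.relocalise({"mug", "keyboard"}).name)   # -> desk
```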