Computing the motions of several moving objects in image sequences involves simultaneous motion analysis and segmentation. This task can become complicated when image motion changes significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed. The problem…
Image mosaicing is commonly used to increase the visual field of view by pasting together many images or video frames. Existing mosaicing methods are based on projecting all images onto a predetermined single manifold: a plane is commonly used for a camera translating sideways, a cylinder is used for a panning camera, and a sphere is used for a camera…
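As a rough illustration of the single-manifold idea above, the sketch below warps a frame onto a cylindrical surface, the manifold typically chosen for a panning camera, so that consecutive frames can then be aligned by a simple horizontal shift. The focal length `f`, the centered principal point, and the use of OpenCV's `remap` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import cv2

def cylindrical_warp(img, f):
    """Project an image onto a cylinder of radius f (in pixels) centered on the optical axis.

    After this warp, frames from a purely panning camera differ mainly by a
    horizontal translation, which makes mosaicing them straightforward.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0

    # For every destination pixel on the cylinder, find the source pixel it came from.
    ys, xs = np.indices((h, w), dtype=np.float32)
    theta = (xs - cx) / f          # angle around the cylinder axis
    h_cyl = (ys - cy) / f          # height along the cylinder axis

    # Back-project cylinder coordinates to the original image plane.
    x_src = f * np.tan(theta) + cx
    y_src = f * h_cyl / np.cos(theta) + cy

    return cv2.remap(img, x_src, y_src, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Usage (hypothetical file and focal length):
# warped = cylindrical_warp(cv2.imread("frame.png"), f=700.0)
```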
A method for computing the 3D camera motion (the ego-motion) in a static scene is described, where initially a detected 2D motion between two frames is used to align corresponding image regions. We prove that such a 2D registration removes all effects of camera rotation, even for those image regions that remain misaligned. The resulting residual parallax…
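The abstract describes first aligning a detected 2D motion between two frames and then analyzing what remains misaligned. Below is a minimal sketch of that first alignment step using generic ORB features and a robust homography fit; the motion model, feature choice, and OpenCV calls are assumptions for illustration, not the paper's implementation.

```python
import cv2
import numpy as np

def register_and_residual(frame1, frame2):
    """Align frame2 to frame1 with a dominant 2D parametric motion (homography),
    then return the warped frame and the residual misalignment image.

    In a plane-plus-parallax style analysis, the residual that survives this
    2D registration is what carries the translation/structure information.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame1, None)
    k2, d2 = orb.detectAndCompute(frame2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Robustly estimate the dominant 2D motion mapping frame2 into frame1.
    H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 3.0)

    h, w = frame1.shape[:2]
    aligned = cv2.warpPerspective(frame2, H, (w, h))

    # Regions off the dominant surface stay misaligned after the warp.
    residual = cv2.absdiff(frame1, aligned)
    return aligned, residual
```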
Computing camera rotation from image sequences can serve many computer vision applications. One direct application is image stabilization, and when the camera rotation is known, the computation of camera translation and 3D scene structure is much simplified. A new approach for recovering camera rotation is presented in this paper, which proves to be…
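For context on why a known rotation simplifies matters: under (approximately) pure rotation with known intrinsics K, two views are related by a homography H ~ K R K⁻¹, so R can be read off a fitted homography. The sketch below shows that standard relation, not the new approach the abstract refers to.

```python
import numpy as np

def rotation_from_homography(H, K):
    """Recover the camera rotation R from a homography H between two views,
    assuming the inter-frame motion is (approximately) a pure rotation,
    so that H ~ K @ R @ inv(K) up to scale.
    """
    R_approx = np.linalg.inv(K) @ H @ K

    # H is defined only up to scale and is noisy, so project R_approx onto SO(3)
    # via SVD (nearest rotation matrix in the Frobenius sense).
    U, _, Vt = np.linalg.svd(R_approx)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # enforce a proper rotation (det = +1)
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
```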
This paper presents a self-calibration and pose estimation method that uses two cameras which differ only by focal length. The estimation of the rotation and focal lengths is independent of the translation recovery. Unlike most methods, we do not initialize our recovery with the projective camera. Instead, we estimate the ego-motion and calibration from 3…
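One standard way to see why rotation and focal lengths can be decoupled from translation is the rotation constraint used in self-calibration of rotating cameras: for an image pair related by rotation only (or by the infinite homography), with intrinsics K_i = diag(f_i, f_i, 1), the matrix K2⁻¹ H K1 must be a scaled rotation. The sketch below searches focal-length pairs that best satisfy this constraint; it is a generic illustration under those assumptions, not the estimation procedure summarized in the abstract.

```python
import numpy as np

def calibrate_focal_pair(H, f_candidates):
    """Given a homography H between two views related by (approximately) pure
    rotation, with intrinsics K_i = diag(f_i, f_i, 1), search focal-length pairs
    (f1, f2) for which inv(K2) @ H @ K1 is closest to a scaled rotation.

    Returns the best (f1, f2) and the corresponding rotation estimate.
    """
    best = None
    for f1 in f_candidates:
        K1 = np.diag([f1, f1, 1.0])
        for f2 in f_candidates:
            K2_inv = np.diag([1.0 / f2, 1.0 / f2, 1.0])
            M = K2_inv @ H @ K1
            M = M / np.cbrt(np.linalg.det(M))      # remove the unknown scale of H
            # A rotation satisfies M @ M.T = I; use the deviation as the score.
            err = np.linalg.norm(M @ M.T - np.eye(3))
            if best is None or err < best[0]:
                best = (err, f1, f2, M)
    _, f1, f2, M = best
    # Project the best M onto SO(3) for the final rotation estimate.
    U, _, Vt = np.linalg.svd(M)
    return f1, f2, U @ Vt
```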