Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles

@article{Scaramuzza2008AppearanceGuidedMO,
  title={Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles},
  author={Davide Scaramuzza and Roland Y. Siegwart},
  journal={IEEE Transactions on Robotics},
  year={2008},
  volume={24},
  pages={1015-1026}
}
In this paper, we describe a real-time algorithm for computing the ego-motion of a vehicle relative to the road. The algorithm uses as input only those images provided by a single omnidirectional camera mounted on the roof of the vehicle. The front ends of the system are two different trackers. The first one is a homography-based tracker that detects and matches robust scale-invariant features that most likely belong to the ground plane. The second one uses an appearance-based approach and… 
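The homography-based front end described in the abstract keeps only the feature matches consistent with the road plane. Below is a minimal sketch of that idea using ORB features and a RANSAC-fitted homography in OpenCV; it is not the authors' omnidirectional, scale-invariant-feature pipeline, and it assumes the ground dominates the field of view so the dominant homography is the one induced by the road.

```python
import cv2
import numpy as np

def ground_plane_matches(img1, img2, ransac_thresh=2.0):
    """Match features between two frames and keep only those consistent
    with a single homography, assumed here to be induced by the road plane.

    Sketch only: the paper uses scale-invariant features on omnidirectional
    images; ORB on ordinary images is used for brevity.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)

    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # RANSAC homography: inliers are the matches most likely lying on the
    # dominant plane (the ground, when it fills most of the view).
    H, mask = cv2.findHomography(p1, p2, cv2.RANSAC, ransac_thresh)
    inliers = mask.ravel().astype(bool)
    return H, p1[inliers], p2[inliers]
```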
Appearance-based monocular visual odometry for ground vehicles
  • Yang Yu, C. Pradalier, G. Zong
  • Computer Science
    2011 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)
  • 2011
TLDR
This paper presents a method for computing the visual odometry of ground vehicles by mounting a downward-looking camera on the vehicle, and gives a comparison of the results between wheel odometry and visual odometry.
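For a downward-looking camera over locally flat ground, the inter-frame motion reduces to a 2D rotation plus translation. A hedged sketch of recovering it from tracked points via the 2D Kabsch/Procrustes solution (the metric scaling from camera height is assumed to have been applied already):

```python
import numpy as np

def rigid_2d(p, q):
    """Least-squares 2D rigid transform q ~ R @ p + t (Kabsch/Procrustes).

    p, q: (N, 2) arrays of tracked ground points in consecutive frames,
    already converted to metres using the known camera height.
    """
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)                       # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    yaw = np.arctan2(R[1, 0], R[0, 0])              # heading change per frame
    return R, t, yaw
```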
Robust monocular visual odometry for road vehicles using uncertain perspective projection
TLDR
Evaluations on both the public KITTI benchmark and the authors' own dataset show that this is a viable approach to visual odometry which outperforms basic 3D pose estimation by exploiting the largely planar structure of road environments.
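Exploiting the planar road structure typically means intersecting back-projected viewing rays with the ground plane; with a calibrated camera at a known height this yields metric coordinates from a single view, which is where monocular VO gets its scale. A minimal sketch, assuming camera axes x right, y down, z forward and a flat ground plane below the optical centre:

```python
import numpy as np

def pixel_to_ground(u, v, K, camera_height):
    """Intersect the viewing ray of pixel (u, v) with the ground plane.

    Assumes x right, y down, z forward and a flat ground plane at
    y = camera_height below the optical centre. Returns the 3D point
    in metres, or None if the ray points above the horizon.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if ray[1] <= 0:
        return None
    scale = camera_height / ray[1]     # stretch the ray until it hits the plane
    return scale * ray
```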
Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC
TLDR
It is shown that by exploiting the nonholonomic constraints of wheeled vehicles it is possible to use a restrictive motion model that parameterizes the motion with a single feature correspondence, which yields one of the most efficient schemes for removing outliers.
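Under the nonholonomic (Ackermann) constraint the inter-frame motion is approximately circular: the vehicle rotates by an angle theta and translates along the chord at angle theta/2, so the essential matrix has a single degree of freedom and every correspondence yields one theta hypothesis. The sketch below derives that hypothesis under assumed camera axes (x right, y down along the rotation axis, z forward); sign conventions may differ from the published formula, so treat it as a sketch rather than the paper's exact parameterization.

```python
import numpy as np

def theta_from_one_match(p1, p2):
    """One 1-point hypothesis for the yaw angle theta.

    p1, p2: unit bearing vectors of the same feature in consecutive frames
    (x right, y down = rotation axis, z forward). With circular planar
    motion, t ~ [sin(theta/2), 0, cos(theta/2)], the epipolar constraint
    collapses to a single equation in theta.
    """
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    return 2.0 * np.arctan2(x2 * y1 - x1 * y2, y1 * z2 + y2 * z1)

def one_point_ransac(P1, P2, thresh=1e-3, iters=100,
                     rng=np.random.default_rng()):
    """Pick the theta hypothesis with the most epipolar inliers.
    P1, P2: (N, 3) arrays of matched unit bearing vectors."""
    best_theta, best_inliers = 0.0, -1
    for _ in range(iters):
        i = rng.integers(len(P1))
        theta = theta_from_one_match(P1[i], P2[i])
        c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
        E = np.array([[0, -c, 0], [c, 0, s], [0, s, 0]])  # 1-DOF essential matrix
        residuals = np.abs(np.einsum('ni,ij,nj->n', P2, E, P1))
        n = int((residuals < thresh).sum())
        if n > best_inliers:
            best_theta, best_inliers = theta, n
    return best_theta
```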
Scale-invariant and adaptive-search template matching for monocular visual odometry in low-textured environment
TLDR
A monocular VO system that uses a single downward-facing camera to estimate the relative position of a ground car-like vehicle in low-textured environments is presented; the developed techniques and algorithms have high potential for commercial mobile-robot applications that use VO for improved accuracy, efficiency, and cost-effectiveness.
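Template matching for a downward camera becomes real-time tractable when the search is restricted to a window predicted from the previous motion estimate. A hedged sketch; the fixed-margin window policy here is illustrative, not the paper's adaptive-search scheme:

```python
import cv2

def track_template(prev_frame, frame, box, predicted_shift, margin=24):
    """Re-locate the template `box` = (x, y, w, h) from prev_frame in frame,
    searching only a window centred on the motion-predicted position.
    `margin` could be adapted online, e.g. grown when the match score drops.
    """
    x, y, w, h = box
    template = prev_frame[y:y + h, x:x + w]

    px, py = x + predicted_shift[0], y + predicted_shift[1]
    x0 = max(0, int(px - margin)); y0 = max(0, int(py - margin))
    x1 = min(frame.shape[1], int(px + w + margin))
    y1 = min(frame.shape[0], int(py + h + margin))
    window = frame[y0:y1, x0:x1]

    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(scores)      # best normalized correlation
    return (x0 + loc[0], y0 + loc[1]), score
```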
Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images
TLDR
This work presents a motion estimation method based on a single omnidirectional camera, exploiting the maximized horizontal field of view to encode large scene information into a single image and to compute the motion transformation between two poses incrementally.
Estimation of image scale variations in monocular visual odometry systems
TLDR
Indoor and outdoor experiments have proven the efficiency of the suggested technique in resolving image-scale uncertainty and ensuring image-scale-invariant correlation-based matching, with less than 5% additional computational time.
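One simple way to make correlation-based matching tolerant to inter-frame scale change (e.g. from camera-height variation) is to correlate the template at a few candidate scales and keep the best; the scale bracket below is an assumption and could be centred on the previous frame's estimate:

```python
import cv2

def match_with_scale(template, frame, scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    """Estimate the image-scale change alongside the match position by
    resizing the template over a small scale range.
    Returns (score, position, scale) of the best match."""
    best = (-1.0, None, 1.0)
    for s in scales:
        t = cv2.resize(template, None, fx=s, fy=s,
                       interpolation=cv2.INTER_LINEAR)
        if t.shape[0] > frame.shape[0] or t.shape[1] > frame.shape[1]:
            continue                      # skip scales larger than the frame
        scores = cv2.matchTemplate(frame, t, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(scores)
        if score > best[0]:
            best = (score, loc, s)
    return best
```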
A Novel Filtering Approach in Visual Odometry for Autonomous Ground Vehicles Application
TLDR
It is shown that interacting multiple model (IMM) filtering can address the needs of this specific application, since the motion of a ground vehicle differs across driving scenarios.
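An IMM filter runs one Kalman filter per motion regime (e.g. steady driving versus manoeuvring) and blends them with Markov-switching model probabilities. A compact scalar-state sketch of the standard IMM cycle; the two-model setup and all noise values are illustrative only:

```python
import numpy as np

def imm_step(x, P, mu, z, PI=np.array([[0.95, 0.05], [0.05, 0.95]]),
             Q=(0.01, 1.0), R=0.5):
    """One IMM cycle for a scalar random-walk state with two process-noise
    regimes. x, P, mu: per-model states, variances, probabilities (length 2);
    z: measurement; PI: model transition matrix. Values are illustrative."""
    # 1. Mixing: each model restarts from a probability-weighted blend.
    c = PI.T @ mu                                   # predicted model probs
    w = PI * mu[:, None] / c[None, :]               # w[i, j] = P(model i | j)
    x0 = w.T @ x
    P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j]) ** 2)) for j in range(2)])

    # 2. Mode-matched Kalman predict/update (F = H = 1 for a random walk).
    like = np.empty(2)
    for j in range(2):
        Pp = P0[j] + Q[j]                           # predicted variance
        S = Pp + R                                  # innovation variance
        K = Pp / S                                  # Kalman gain
        nu = z - x0[j]                              # innovation
        x[j] = x0[j] + K * nu
        P[j] = (1 - K) * Pp
        like[j] = np.exp(-0.5 * nu**2 / S) / np.sqrt(2 * np.pi * S)

    # 3. Model-probability update and combined estimate.
    mu = like * c
    mu /= mu.sum()
    return x, P, mu, mu @ x
```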
Visual ego motion estimation in urban environments based on U-V disparity
TLDR
A new method for 2D visual ego-motion estimation in urban environments is presented, based on a stereo-vision system in which road feature points are tracked frame to frame to estimate the vehicle's motion while rejecting outliers from dynamic obstacles.
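The v-disparity representation histograms a dense stereo disparity map along image rows; the road plane appears as a dominant slanted line, and pixels off that line can be rejected as obstacles. A sketch of building the v-disparity image:

```python
import numpy as np

def v_disparity(disp, d_max=128):
    """Row-wise disparity histogram of a dense disparity map (H x W,
    invalid values <= 0). The ground plane maps to a slanted line in the
    result; fitting it (e.g. with Hough or RANSAC) separates road pixels
    from obstacle pixels."""
    H = disp.shape[0]
    vdisp = np.zeros((H, d_max), dtype=np.int32)
    for v in range(H):
        row = disp[v]
        valid = row[(row > 0) & (row < d_max)].astype(int)
        np.add.at(vdisp[v], valid, 1)    # accumulate counts per disparity bin
    return vdisp
```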
Planar motion estimation using omnidirectional camera and laser rangefinder
TLDR
This paper proposes a method to estimate vehicle motion by fusing an omnidirectional camera with a laser rangefinder, overcoming the drawbacks of relying on either sensor alone.

References

Showing 1-10 of 36 references
Visual odometry for ground vehicle applications
TLDR
A system that estimates the motion of a stereo head or a single moving camera from video input in real time with low delay; the motion estimates are used for navigation.
Transforming camera geometry to a virtual downward-looking camera: robust ego-motion estimation and ground-layer detection
  • Qifa Ke, T. Kanade
  • Computer Science
    2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings.
  • 2003
TLDR
A robust method to solve two coupled problems that appear in visual navigation, ground-layer detection and vehicle ego-motion estimation, by virtually rotating the camera to a downward-looking pose, which eliminates the ambiguity between rotational and translational ego-motion parameters and improves the conditioning of the Hessian matrix in the direct motion estimation process.
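Virtually rotating a calibrated camera is a pure image warp: for rotation-only changes the pixel mapping is the conjugated homography H = K R K^-1, so the ground can be rendered as if seen from straight above. A sketch, with the pitch angle and intrinsics as placeholders:

```python
import cv2
import numpy as np

def virtual_downward_view(img, K, pitch_rad):
    """Warp `img` as if the camera were rotated downward by `pitch_rad`
    about its x-axis. Valid for pure rotation: H = K @ R @ inv(K).
    After the warp the ground plane appears fronto-parallel, which helps
    decouple rotational from translational image motion."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R = np.array([[1, 0,  0],
                  [0, c, -s],
                  [0, s,  c]])            # rotation about the camera x-axis
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
```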
A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion
TLDR
Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible.
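The sensor-independence of this calibration technique comes from its imaging model: an image point at radius rho from the centre maps to the 3D ray (u, v, f(rho)), where f is a polynomial fitted from a few checkerboard views. A sketch of back-projection under that model; the coefficients are placeholders from an offline calibration:

```python
import numpy as np

def omni_pixel_to_ray(u, v, centre, poly):
    """Back-project an omnidirectional image point to a unit viewing ray
    using the polynomial model f(rho) = a0 + a1*rho + a2*rho**2 + ...

    `centre` is the image centre and `poly` = (a0, a1, ...) comes from an
    offline calibration; both are placeholders in this sketch."""
    x, y = u - centre[0], v - centre[1]
    rho = np.hypot(x, y)
    z = sum(a * rho**i for i, a in enumerate(poly))
    ray = np.array([x, y, z])
    return ray / np.linalg.norm(ray)
```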
Visual odometry based on locally planar ground assumption
TLDR
Simultaneous translation and rotation are accurately measured by detecting and tracking features in image sequences with a method based on a locally planar ground assumption that runs in real time.
Omnidirectional visual odometry for a planetary rover
TLDR
Two methods of online visual odometry suited for planetary rovers are presented and compared: one based on robust estimation of optical flow and subsequent integration of the flow, and the other a full structure-from-motion solution.
Mapping Large Loops with a Single Hand-Held Camera
This paper presents a method for Simultaneous Localization and Mapping (SLAM), relying on a monocular camera as the only sensor, which is able to build outdoor, closed-loop maps much larger than …
Visual map-less navigation based on homographies
TLDR
The proposed method for autonomous robot navigation, based on homographies computed between the current image and images taken in a previous teaching phase with a monocular vision system, has turned out to be especially useful for correcting heading and lateral displacement, which are critical in systems based on odometry.
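Extracting a heading correction from such a homography can be sketched with OpenCV's homography decomposition. The solution-selection rule below (the decomposition returns up to four (R, t, n) candidates) and the yaw extraction are simplifications, not necessarily this paper's procedure:

```python
import cv2
import numpy as np

def heading_error_from_homography(pts_ref, pts_cur, K):
    """Estimate the yaw offset between the current view and a reference
    (teach-phase) image from matched points (N x 2 float32 arrays).
    We keep the decomposition candidate whose plane normal points most
    nearly toward the camera; real systems need a more careful test."""
    H, _ = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC, 3.0)
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    best = max(range(len(Rs)), key=lambda i: float(normals[i][2]))
    R = Rs[best]
    return np.arctan2(R[0, 2], R[2, 2])   # yaw about the vertical axis
```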
Real-time simultaneous localisation and mapping with a single camera
  • A. Davison
  • Computer Science
    Proceedings Ninth IEEE International Conference on Computer Vision
  • 2003
TLDR
This work presents a top-down Bayesian framework for single-camera localisation via mapping of a sparse set of natural features using motion modelling and an information-guided active measurement strategy, in particular addressing the difficult issue of real-time feature initialisation via a factored sampling approach.
Visual navigation using planar homographies
  • B. Liang, Nick E. Pears
  • Computer Science
    Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292)
  • 2002
TLDR
This work illustrates how, for pure translation, a homography can be computed from just two pairs of corresponding corner features, and shows how, in the case of general planar motion, homographies can be used to determine the rotation of the camera and robot.
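Under pure translation the plane-induced homography has the rank-1 form H = I + (t/d) n^T. If the plane normal n is known (e.g. the ground under the robot), the three unknowns t/d are linear in each correspondence, so two point pairs over-determine them; the linear solve below is one way to instantiate that claim, not necessarily the authors' exact algorithm:

```python
import numpy as np

def skew(p):
    """3x3 cross-product matrix of a 3-vector."""
    return np.array([[0, -p[2], p[1]],
                     [p[2], 0, -p[0]],
                     [-p[1], p[0], 0]])

def translation_homography(pairs, n):
    """Recover v = t/d (translation over plane depth) from >= 2 point
    correspondences under pure translation, H = I + v n^T with known
    plane normal n. Each pair (p1, p2) of homogeneous image points gives
    the linear system (n . p1) [p2]x v = p1 x p2, stacked and solved in
    least squares."""
    A, b = [], []
    for p1, p2 in pairs:
        A.append(float(n @ p1) * skew(p2))
        b.append(np.cross(p1, p2))
    v, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return np.eye(3) + np.outer(v, n)     # the recovered homography
```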
Autocalibration from Planar Scenes
TLDR
Presents the theory and a practical algorithm for the autocalibration of a moving projective camera from m ≥ 5 views of a planar scene, generalizing Hartley's method for the internal calibration of a rotating camera to allow camera translation and to provide 3D as well as calibration information.