Real-time visual odometry from dense RGB-D images
Frank Steinbrücker, Jürgen Sturm, Daniel Cremers
2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)
We present an energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera. We propose a linearization of the energy function which leads to a 6×6 normal equation for the twist coordinates representing the rigid-body motion. To allow for larger motions, we solve this equation in a coarse-to-fine scheme. Extensive quantitative analysis on recently proposed benchmark datasets shows that the proposed solution is faster than a state-of-the-art implementation of the…
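The core step described in the abstract, solving a 6×6 normal equation for the twist increment, is a standard Gauss-Newton update over se(3). A minimal sketch, assuming a stacked residual vector and its Jacobian (the function name `twist_update` and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def twist_update(J, r):
    """One Gauss-Newton step: solve the 6x6 normal equations
    (J^T J) xi = -J^T r for the twist increment xi (se(3) coordinates).
    J: (N, 6) Jacobian of per-pixel residuals, r: (N,) residuals."""
    A = J.T @ J      # 6x6 normal matrix
    b = -J.T @ r     # right-hand side
    return np.linalg.solve(A, b)

# Synthetic check: residuals generated by a known twist are recovered
# exactly, since the model here is linear by construction.
rng = np.random.default_rng(0)
J = rng.standard_normal((1000, 6))
xi_true = np.array([0.01, -0.02, 0.005, 0.001, -0.003, 0.002])
r = -J @ xi_true                 # residuals satisfy r + J @ xi_true = 0
xi = twist_update(J, r)
assert np.allclose(xi, xi_true, atol=1e-8)
```

In the coarse-to-fine scheme the same update would simply be repeated per pyramid level, warping the image and recomputing `J` and `r` each time so that larger motions are captured at the coarser levels.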


Towards dense RGB-D visual odometry
Quantitative analysis shows that the solution is more robust to large camera motions than commonly adopted RGB-D approaches, allowing visual odometry to be performed with fewer keyframes.
Visual odometry for RGB-D cameras for dynamic scenes
This paper uses image segmentation to better exclude moving objects from the stationary background, and performs dense pixel matching between the current and reference color images to construct the 3D point cloud for dense motion estimation.
Robust odometry estimation for RGB-D cameras
This work registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error using non-linear minimization in combination with a coarse-to-fine scheme, and proposes to use a robust error function that reduces the influence of large residuals.
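The robust error function mentioned here reduces the influence of large residuals inside the same normal-equation framework, typically via iteratively reweighted least squares. A minimal sketch using the Huber influence function as one common choice (the exact robust function and the names `huber_weights`/`robust_normal_equations` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber influence weights: w = 1 for |r| <= k, w = k/|r| otherwise.
    Large photometric residuals (occlusions, moving objects, noise)
    are down-weighted instead of dominating the least-squares fit."""
    a = np.abs(r)
    w = np.ones_like(a)
    big = a > k
    w[big] = k / a[big]
    return w

def robust_normal_equations(J, r, k=1.345):
    """One IRLS step: solve (J^T W J) xi = -J^T W r with diagonal
    Huber weights W recomputed from the current residuals."""
    w = huber_weights(r, k)
    A = J.T @ (w[:, None] * J)
    b = -J.T @ (w * r)
    return np.linalg.solve(A, b)
```

A residual of 0.5 keeps full weight 1.0, while a residual of 10.0 is scaled down to 0.1345, which is how outlier pixels lose their leverage on the motion estimate.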
Plane-based Odometry using an RGB-D Camera
A novel approach for estimating the relative motion between successive RGB-D frames that uses plane primitives instead of point features; it is as accurate as state-of-the-art point-based approaches when the camera displacement is small, and significantly outperforms them in wide-baseline and/or dynamic-foreground cases.
Fast visual odometry and mapping from RGB-D data
A novel uncertainty measure for sparse RGB-D features, based on a Gaussian mixture model, is used in the filtering stage; the registration algorithm is capable of closing small-scale loops in indoor environments online without any additional SLAM back-end techniques.
Dynamic RGB-D visual odometry
The experimental results show that the method achieves higher accuracy in dynamic environments and comparable accuracy in static environments; its accuracy is more than 7 times higher than that of other RGB-D visual odometry algorithms.
Visual Odometry for RGB-D Cameras
A fast and accurate approach to visual odometry for a moving RGB-D camera in a static environment, surpassing the ICP and SfM (Structure from Motion) algorithms in tests on a publicly available dataset.
Robust Real-time RGB-D Visual Odometry in Dynamic Environments via Rigid Motion Model
  • Sangil Lee, C. Son, H. Kim
  • Computer Science
    2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2019
A robust real-time visual odometry method for dynamic environments is proposed, using a rigid-motion model updated by scene flow through spatial motion segmentation and temporal motion tracking.
Dense RGB-D visual odometry using inverse depth
Robust tracking and mapping with a handheld RGB-D camera
This paper introduces a robust quaternion-based orientation estimation for the initial sparse estimate, and proposes a weighted ICP (Iterative Closest Point) method for a better convergence rate in optimization and a more accurate resulting trajectory.


Towards a benchmark for RGB-D SLAM evaluation
A large dataset containing RGB-D image sequences and the ground-truth camera trajectories is provided and an evaluation criterion for measuring the quality of the estimated camera trajectory of visual SLAM systems is proposed.
Visual odometry
  • D. Nistér, O. Naroditsky, J. Bergen
  • Computer Science, Mathematics
    Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.
  • 2004
A system that estimates the motion of a stereo head or a single moving camera from video input in real time with low delay; the motion estimates are used for navigational purposes.
DTAM: Dense tracking and mapping in real-time
It is demonstrated that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art feature-based method, and the additional usefulness of the dense model for real-time scene interaction is shown in a physics-enhanced augmented reality application.
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
This paper presents RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment to achieve globally consistent maps.
Scale Drift-Aware Large Scale Monocular SLAM
This paper describes a new near-real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems but accounts for the additional challenges presented by monocular input, and presents a new pose-graph optimisation technique that allows for the efficient correction of rotation, translation and scale drift at loop closures.
Parallel Tracking and Mapping for Small AR Workspaces
A system specifically designed to track a hand-held camera in a small AR workspace, processed in parallel threads on a dual-core computer, that produces detailed maps with thousands of landmarks which can be tracked at frame-rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.
Efficient variants of the ICP algorithm
  • S. Rusinkiewicz, M. Levoy
  • Computer Science
    Proceedings Third International Conference on 3-D Digital Imaging and Modeling
  • 2001
An implementation is demonstrated that is able to align two range images in a few tens of milliseconds, assuming a good initial guess, and has potential application to real-time 3D model acquisition and model-based tracking.
A Method for Registration of 3-D Shapes
A general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces based on the iterative closest point (ICP) algorithm.
In this paper we combine the Iterative Closest Point ("ICP") and "point-to-plane ICP" algorithms into a single probabilistic framework. We then use this framework to model locally planar surface…
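The ICP references above all build on the same inner step: given tentative correspondences, the optimal rigid transform has a closed-form SVD solution, and ICP alternates this with nearest-neighbour matching. A minimal point-to-point sketch (function names `best_rigid_transform` and `icp`, and the brute-force matching, are illustrative simplifications, not any one paper's implementation):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) aligning P onto Q given
    known correspondences, via SVD (the Kabsch/Umeyama solution used
    inside each ICP iteration)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def icp(P, Q, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching
    (brute force here; real systems use k-d trees) with the
    closed-form alignment above."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        P2 = P @ R.T + t
        # nearest neighbour in Q for every transformed point of P
        d2 = ((P2[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        R, t = best_rigid_transform(P, Q[idx])
    return R, t
```

The "efficient variants" surveyed by Rusinkiewicz and Levoy differ mainly in how they select, match, weight, and reject correspondences around this same loop, and in swapping the point-to-point error for point-to-plane.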
Outdoor Mapping and Navigation Using Stereo Vision
This work considers the problem of autonomous navigation in an unstructured outdoor environment, using stereo vision as the main sensor; more distant objects serve as landmarks for navigation, and color and texture models of the environment are learned and used.