Dense visual SLAM for RGB-D cameras

@inproceedings{kerl2013dense,
  title={Dense visual SLAM for RGB-D cameras},
  author={Christian Kerl and J{\"u}rgen Sturm and Daniel Cremers},
  booktitle={2013 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year={2013}
}
In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data, which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o…
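The core idea, jointly minimizing a photometric and a depth (geometric) error, can be sketched for a single pixel. This is an illustrative sketch with an assumed pinhole intrinsic matrix `K`, not the authors' implementation, which warps all pixels and minimizes the residuals in a coarse-to-fine non-linear optimization:

```python
import numpy as np

def rgbd_residuals(p, i1, d1, i2, d2, R, t, K):
    """Photometric and depth residuals for one pixel p = (u, v),
    given intensity/depth images of two frames, a candidate
    rotation R, translation t, and pinhole intrinsics K.
    (Illustrative sketch only, not the paper's implementation.)"""
    u, v = p
    z = d1[v, u]
    # Back-project the pixel into 3D using the first frame's depth.
    x = np.linalg.inv(K) @ np.array([u * z, v * z, z])
    # Transform into the second camera's frame and project.
    xp = R @ x + t
    up = K @ (xp / xp[2])
    u2, v2 = int(round(up[0])), int(round(up[1]))
    # Photometric residual: intensity difference after warping.
    r_photo = float(i2[v2, u2]) - float(i1[v, u])
    # Depth residual: observed vs. predicted depth in frame 2.
    r_depth = float(d2[v2, u2]) - xp[2]
    return r_photo, r_depth
```

With the identity transform and two identical frames, both residuals vanish; the optimization drives them toward zero over all pixels by adjusting R and t.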


Dense RGB-D visual odometry using inverse depth
RGB-D dense SLAM with keyframe-based method
This paper proposes a new RGB-D dense SLAM system that produces more accurate camera trajectories than state-of-the-art systems.
Dense Frame-to-Model SLAM with an RGB-D Camera
In this paper, a dense frame-to-model Simultaneous Localization And Mapping (SLAM) system with an RGB-D camera is proposed, which achieves a more accurate trajectory than traditional methods.
Edge Enhanced Direct Visual Odometry
Evaluations on real-world benchmark datasets show that the proposed RGB-D visual odometry method achieves competitive results in indoor scenes, especially in texture-less scenes, where it outperforms state-of-the-art algorithms.
Dense Visual SLAM with Probabilistic Surfel Map
The main idea is to maintain a globally consistent map with both photometric and geometric uncertainties encoded, in order to address the inconsistency issue and to enable generation of a high-quality dense point cloud with accuracy comparable to the state-of-the-art approach.
Online 3D Reconstruction and 6-DoF Pose Estimation for RGB-D Sensors
This work proposes a novel keyframe selection scheme based on the Fisher information, and a new loop-closing method that utilizes feature-to-landmark correspondences inspired by image-based localization, which effectively mitigates the drift frequently observed in visual odometry systems.
A Micro SLAM System Based on ORB for RGB-D Cameras
A micro SLAM system based on ORB features for RGB-D cameras is proposed that can be applied to localization and mapping in small environments, and has proven to work well in such settings.
Approximate surface reconstruction and registration for RGB-D SLAM
This paper presents an efficient yet reliable approach to align pairs and sequences of RGB-D images that makes use of local surface information and is competitive with state-of-the-art approaches.
3D Reconstruction of Indoor Scenes Based on Feature and Graph Optimization
  • Wei-wei Yu, Hui Zhang
  • Computer Science
    2016 International Conference on Virtual Reality and Visualization (ICVRV)
  • 2016
An approach to visual SLAM (simultaneous localization and mapping) in which the 3D scene models are built and the camera poses are estimated at the same time, using a robust global optimizer based on line processes.
Maximum clique based RGB-D visual odometry
This paper proposes a new feature-point based RGB-D visual odometry approach for estimating the relative camera motion from two consecutive frames, and designs a threshold technique to control the size of maximum clique.

References

An evaluation of the RGB-D SLAM system
We present an approach to simultaneous localization and mapping (SLAM) for RGB-D cameras like the Microsoft Kinect. Our system concurrently estimates the trajectory of a hand-held Kinect and generates a dense 3D model of the environment.
Robust odometry estimation for RGB-D cameras
This work registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error using non-linear minimization in combination with a coarse-to-fine scheme, and proposes to use a robust error function that reduces the influence of large residuals.
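The robust error function mentioned above down-weights large residuals during the non-linear minimization. The paper itself uses a t-distribution-based weighting; a Huber-style weight, shown here as a generic stand-in, illustrates the idea:

```python
import numpy as np

def huber_weights(residuals, delta=1.345):
    """Per-residual IRLS weights for a Huber robust cost:
    weight 1 inside the |r| <= delta band, delta/|r| outside,
    so large (outlier) residuals lose influence on the estimate.
    (Generic sketch; the paper uses a t-distribution instead.)"""
    r = np.abs(np.asarray(residuals, dtype=float))
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w
```

In an iteratively reweighted least-squares loop, these weights multiply each residual's contribution, so a residual of 10 counts far less than one of 0.5.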
Integrating depth and color cues for dense multi-resolution scene mapping using RGB-D cameras
  • J. Stückler, Sven Behnke
  • Computer Science
    2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
  • 2012
This work proposes a novel method for acquiring 3D maps of indoor scenes from a freely moving RGB-D camera that integrates color and depth cues seamlessly in a multi-resolution map representation and proposes an efficient randomized loop-closure technique that is designed for on-line operation.
Real-time visual odometry from dense RGB-D images
An energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera is presented which is faster than a state-of-the-art implementation of the iterative closest point (ICP) algorithm by two orders of magnitude.
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
This paper presents RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment to achieve globally consistent maps.
Robust real-time visual odometry for dense RGB-D mapping
Extensions to the Kintinuous algorithm for spatially extended KinectFusion are described, incorporating the integration of multiple 6DOF camera odometry estimation methods for robust tracking and a novel GPU-based implementation of an existing dense RGB-D visual odometry algorithm.
Direct Iterative Closest Point for real-time visual odometry
It is shown how incorporating the depth measurement robustifies the cost function in cases of insufficient texture information and non-Lambertian surfaces. The approach is demonstrated in the Planetary Robotics Vision Ground Processing (PRoVisG) competition, where visual odometry and 3D reconstruction are solved for a stereo image sequence captured by a Mars rover.
A benchmark for the evaluation of RGB-D SLAM systems
A large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system is recorded for the evaluation of RGB-D SLAM systems.
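Benchmarks of this kind are commonly evaluated with the root-mean-square absolute trajectory error (ATE). A minimal sketch, assuming the ground-truth and estimated trajectories are already time-associated and aligned:

```python
import numpy as np

def ate_rmse(gt, est):
    """Root-mean-square absolute trajectory error between
    ground-truth and estimated camera positions (N x 3 arrays).
    Assumes the trajectories are time-associated and aligned."""
    diff = np.asarray(gt, dtype=float) - np.asarray(est, dtype=float)
    # Per-pose Euclidean error, then RMS over the trajectory.
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```

In practice the estimated trajectory is first aligned to the ground truth (e.g. with a rigid-body fit) before computing this error.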
Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera
A system for visual odometry and mapping using an RGB-D camera and its application to autonomous flight, enabling 3D flight in cluttered environments using only onboard sensor data.
Scale Drift-Aware Large Scale Monocular SLAM
This paper describes a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. It presents a new pose-graph optimisation technique that allows efficient correction of rotation, translation, and scale drift at loop closures.