A benchmark for the evaluation of RGB-D SLAM systems

@inproceedings{Sturm2012ABF,
  title={A benchmark for the evaluation of {RGB-D} SLAM systems},
  author={J{\"u}rgen Sturm and Nikolas Engelhard and Felix Endres and Wolfram Burgard and Daniel Cremers},
  booktitle={2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2012},
  pages={573--580}
}
In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The… 
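The benchmark's headline metric is the absolute trajectory error (ATE): the estimated trajectory is rigidly aligned to the ground truth with a least-squares fit (Horn's method) and the RMSE over the aligned position differences is reported. A minimal NumPy sketch of that idea — the function names are mine, and it assumes the estimated and ground-truth positions have already been matched one-to-one:

```python
import numpy as np

def align_horn(gt, est):
    """Least-squares rigid alignment of est onto gt (Horn's method via SVD).

    gt, est: (N, 3) arrays of matched 3-D positions.
    Returns rotation R (3x3) and translation t (3,) such that
    R @ est_i + t best fits gt_i.
    """
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (est - mu_est).T @ (gt - mu_gt)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction to guarantee a proper rotation (det = +1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) after rigid alignment."""
    R, t = align_horn(gt, est)
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1)))
```

In practice the two trajectories must first be associated by timestamp, since the motion-capture system (100 Hz) and the Kinect (30 Hz) record at different rates.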

Citations

The TUM VI Benchmark for Evaluating Visual-Inertial Odometry
TLDR
The TUM VI benchmark is proposed, a novel dataset with a diverse set of sequences in different scenes for evaluating VI odometry, which provides camera images with 1024×1024 resolution at 20 Hz, high dynamic range and photometric calibration, and evaluates state-of-the-art VI odometry approaches on this dataset.
The VCU-RVI Benchmark: Evaluating Visual Inertial Odometry for Indoor Navigation Applications with an RGB-D Camera
  • He Zhang, Lingqiu Jin, C. Ye
  • Computer Science
    2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2020
TLDR
This paper presents VCU-RVI, a new visual inertial odometry (VIO) benchmark with a set of diverse data sequences in different indoor scenarios, and conducts experiments to evaluate the state-of-the-art VIO algorithms using this benchmark.
A Photometrically Calibrated Benchmark For Monocular Visual Odometry
TLDR
A novel, simple approach to non-parametric vignette calibration is presented, which requires minimal set-up and is easy to reproduce, and two existing methods (ORB-SLAM and DSO) are thoroughly evaluated on the dataset.
Evaluating Egomotion and Structure-from-Motion Approaches Using the TUM RGB-D Benchmark
TLDR
The TUM RGB-D benchmark for visual odometry and SLAM evaluation is presented and the evaluation results of the first users from outside the group are discussed and briefly summarized.
Adaptive algorithm for the SLAM design with a RGB-D camera
TLDR
This work presents a visual feature-based SLAM, which is able to produce high-quality three-dimensional maps in real time with a low-cost RGB-D camera such as the Microsoft Kinect and can robustly deal with challenging scenarios while being fast enough for online applications.
Camera Rig Extrinsic Calibration Using a Motion Capture System
  • S. Chiodini, M. Pertile, S. Debei
  • Computer Science
    2018 5th IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace)
  • 2018
TLDR
This work presents a calibration procedure to estimate the transformation between the reference frame tracked by a motion capture system and the optical reference frame of a stereo camera, and shows its ability to reach a millimeter accuracy.
Large Scale 3D Mapping of Indoor Environments Using a Handheld RGBD Camera
TLDR
Through several experiments in environments of varying size and construction, it is shown that this method significantly reduces rotational and translational drift without applying any loop-closing techniques.
CoRBS: Comprehensive RGB-D benchmark for SLAM using Kinect v2
TLDR
This novel benchmark allows for the first time to independently evaluate the localization as well as the mapping part of RGB-D SLAM systems with real data and provides the combination of real depth and color data together with a ground truth trajectory of the camera and a 3D model of the scene.
Benchmarking and Comparing Popular Visual SLAM Algorithms
TLDR
This paper contains the performance analysis and benchmarking of two popular visual SLAM Algorithms: RGBD-SLAM and RTABMap and points out some underlying flaws in the used evaluation metrics.
Towards dense RGB-D visual odometry
TLDR
Quantitative analysis shows that the solution is more robust to large camera motions than commonly adopted RGB-D approaches, allowing to perform visual odometry with a lower number of keyframes.
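Several of the citing works above report rotational and translational drift. The benchmark's companion metric, the relative pose error (RPE), captures exactly this by comparing relative motions over a fixed interval instead of absolute positions. A hedged sketch — the 4×4 homogeneous-pose representation and function name are my assumptions, and only the translational component is shown:

```python
import numpy as np

def rpe_translational(poses_gt, poses_est, delta=1):
    """Translational relative pose error (RMSE) over a fixed frame interval.

    poses_gt, poses_est: sequences of 4x4 homogeneous camera poses,
    assumed already matched one-to-one. delta: frame spacing.
    """
    errs = []
    for i in range(len(poses_gt) - delta):
        # Relative motion over the interval, in each trajectory.
        rel_gt = np.linalg.inv(poses_gt[i]) @ poses_gt[i + delta]
        rel_est = np.linalg.inv(poses_est[i]) @ poses_est[i + delta]
        # Residual motion between the two relative transforms.
        err = np.linalg.inv(rel_gt) @ rel_est
        errs.append(np.linalg.norm(err[:3, 3]))
    return np.sqrt(np.mean(np.square(errs)))
```

A rotational variant would instead measure the angle of the residual rotation `err[:3, :3]`; averaging RPE over many interval lengths gives a drift-per-second or drift-per-meter figure.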
