• Corpus ID: 18372371

Real-time dense appearance-based SLAM for RGB-D sensors

@inproceedings{Audras2011RealtimeDA,
  title={Real-time dense appearance-based SLAM for RGB-D sensors},
  author={C{\'e}dric Audras and Andrew I. Comport},
  year={2011}
}
In this work a direct dense approach is proposed for real-time RGB-D localisation and tracking. The direct RGB-D localisation approach is demonstrated on a low-cost sensor which exploits projective IR light within indoor environments. This type of device has recently been the object of much interest, and one advantage is that it provides dense 3D environment maps in real-time via embedded computation. To date all existing tracking approaches using these sensors have been based on a sparse set of…
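The core idea the abstract describes, minimizing a photometric error over a dense warp driven by the depth map, can be sketched compactly. The Python fragment below is a minimal illustration under assumed conventions (pinhole intrinsics fx, fy, cx, cy; a small-motion twist parameterization; a finite-difference Jacobian for brevity), not the authors' implementation; all names are illustrative.

import numpy as np

def se3_exp(xi):
    # First-order (small-motion) approximation of the SE(3) exponential map.
    v, w = xi[:3], xi[3:]
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    T = np.eye(4)
    T[:3, :3] += W            # R ~ I + [w]x for small rotations
    T[:3, 3] = v
    return T

def bilinear(I, u, v):
    # Bilinear interpolation of image I at sub-pixel coordinates (u, v).
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * I[v0, u0] + du * (1 - dv) * I[v0, u0 + 1]
            + (1 - du) * dv * I[v0 + 1, u0] + du * dv * I[v0 + 1, u0 + 1])

def photometric_residuals(xi, I_ref, D_ref, I_cur, fx, fy, cx, cy):
    # Warp every reference pixel into the current frame through its depth
    # and return the dense intensity differences (zero where invalid).
    h, w = I_ref.shape
    T = se3_exp(xi)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = D_ref
    X = (u - cx) * z / fx                     # back-project to 3D
    Y = (v - cy) * z / fy
    P = np.stack([X, Y, z, np.ones_like(z)], axis=-1) @ T.T
    zc = np.where(P[..., 2] > 1e-6, P[..., 2], np.inf)   # guard division
    u2 = fx * P[..., 0] / zc + cx             # project into current view
    v2 = fy * P[..., 1] / zc + cy
    valid = (z > 0) & (P[..., 2] > 1e-6) & \
            (u2 >= 0) & (u2 < w - 1) & (v2 >= 0) & (v2 < h - 1)
    r = np.zeros(I_ref.shape)
    r[valid] = bilinear(I_cur, u2[valid], v2[valid]) - I_ref[valid]
    return r.ravel()

def gauss_newton_step(xi, *args, eps=1e-5):
    # One Gauss-Newton update with a finite-difference Jacobian; a
    # real-time system would use the analytic Jacobian instead.
    r0 = photometric_residuals(xi, *args)
    J = np.empty((r0.size, 6))
    for k in range(6):
        d = np.zeros(6)
        d[k] = eps
        J[:, k] = (photometric_residuals(xi + d, *args) - r0) / eps
    return xi - np.linalg.solve(J.T @ J, J.T @ r0)

A real-time system would additionally use a coarse-to-fine image pyramid and a robust weighting of the residuals rather than the plain least-squares step shown here.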

Tracking an RGB-D Camera Using Points and Planes
TLDR
This work presents a tracking algorithm for RGB-D cameras using both points and planes as primitives and shows how to extend the standard prediction-and-correction framework to include planes in addition to points.
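As a hedged illustration of how planes can join points as registration primitives (the names below are hypothetical, not this paper's code): a plane {x : n·x + d = 0} transforms under a rigid motion (R, t) as n' = Rn, d' = d − n'·t, so a residual can be formed directly on the plane parameters alongside the usual point residual.

import numpy as np

def point_residual(R, t, p_ref, p_cur):
    # Point-to-point residual between the transformed reference point
    # and its match in the current frame.
    return p_cur - (R @ p_ref + t)

def plane_residual(R, t, n_ref, d_ref, n_cur, d_cur):
    # Plane-to-plane residual on the transformed plane parameters:
    # n' = R n and d' = d - n'.t for a plane {x : n.x + d = 0}.
    n_pred = R @ n_ref
    d_pred = d_ref - n_pred @ t
    return np.hstack([n_cur - n_pred, d_cur - d_pred])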
A real-time RGB-D registration and mapping approach by heuristically switching between photometric and geometric information
TLDR
This paper proposes a novel informative-sampling-based geometric 3D feature extraction technique in which the points carrying the most useful geometric information are used for registration; this increases computational speed significantly while preserving registration accuracy compared to using the dense point cloud.
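A rough sketch of such informative sampling, under the assumption (mine, not necessarily the paper's) that local depth-gradient magnitude serves as the usefulness score:

import numpy as np

def sample_informative_points(depth, keep_frac=0.1):
    # Keep the valid depth pixels whose local gradient magnitude falls in
    # the top keep_frac, a cheap proxy for geometric informativeness.
    gy, gx = np.gradient(depth)
    score = np.hypot(gx, gy)
    valid = depth > 0
    thresh = np.quantile(score[valid], 1.0 - keep_frac)
    return np.argwhere(valid & (score >= thresh))   # (row, col) indices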
Semi-dense visual odometry for RGB-D cameras using approximate nearest neighbour fields
TLDR
A robust and efficient semi-dense visual odometry solution for RGB-D cameras is presented that predominantly outperforms existing state-of-the-art methods.
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
TLDR
A novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry, and uses a novel descriptor called the binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks.
Scene structure registration for localization and mapping
Fast localization and 3D mapping using an RGB-D sensor
TLDR
Experimental results show how the proposed integrated framework is able to localize the device in real time in an unknown environment and to simultaneously generate a dense, colored map of the environment.
Efficient compositional approaches for real-time robust direct visual odometry from RGB-D data
TLDR
An evaluation of different methods for computing frame-to-frame motion estimates for a moving RGB-D sensor by aligning two images through photometric error minimization in a general robust estimation framework; it is shown that estimating global affine illumination changes generally improves the performance of the algorithms.
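The two ingredients named here, a global affine illumination model and robust estimation, amount to a small change of the photometric residual plus an iteratively reweighted least-squares loop; a sketch with illustrative names (the gain/bias pair and Huber weights are assumptions on my part):

import numpy as np

def affine_photometric_residual(I_warped, I_ref, a, b):
    # Global gain a and bias b absorb illumination change:
    # r_i = a * I_cur(w(x_i)) + b - I_ref(x_i)
    return a * I_warped + b - I_ref

def huber_weights(r, k=1.345):
    # Huber M-estimator weights for IRLS: inliers keep weight 1,
    # large residuals are down-weighted as k/|r|.
    a = np.abs(r)
    w = np.ones_like(r)
    big = a > k
    w[big] = k / a[big]
    return w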
Removing dynamic 3D objects from point clouds of a moving RGB-D camera
TLDR
This paper presents a solution for removing dynamic objects from RGB images and their corresponding depth images when an RGB-D camera is mounted on a mobile robot for visual SLAM.
A Keyframe-based Continuous Visual SLAM for RGB-D Cameras via Nonparametric Joint Geometric and Appearance Representation
TLDR
A novel keyframe-based continuous visual odometry method is presented that builds on the recently developed continuous sensor registration framework; it generalizes better across different training and validation sequences and is robust to a lack of texture and structure in the scene.
Real-time 3-D feature detection and correspondence refinement for indoor environment-mapping using RGB-D cameras
TLDR
In the proposed method, RGB images are first used to detect two-dimensional (2-D) sparse color features for estimating matched pairs between successively scanned depth images, and the detected 2-D sparse features are then mapped to their corresponding depth information.

References

SHOWING 1-10 OF 32 REFERENCES
Direct Iterative Closest Point for real-time visual odometry
TLDR
It is shown how incorporating the depth measurement robustifies the cost function in cases of insufficient texture information and non-Lambertian surfaces; the method is also demonstrated in the Planetary Robotics Vision Ground Processing (PRoVisG) competition, where visual odometry and 3D reconstruction are solved for a stereo image sequence captured by a Mars rover.
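Incorporating depth alongside intensity can be pictured as stacking the two residual vectors with a relative weight before the least-squares solve; a minimal sketch (the weight lam and the names are assumptions, not this reference's notation):

import numpy as np

def bi_objective_residuals(r_photo, r_depth, lam=1.0):
    # Stack photometric and geometric (depth) residuals into one vector;
    # lam balances the two error types in the joint minimization.
    return np.concatenate([r_photo, lam * r_depth])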
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
TLDR
This paper presents RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment to achieve globally consistent maps.
Real-time Quadrifocal Visual Odometry
TLDR
A new image-based approach to tracking the six-degree-of-freedom trajectory of a stereo camera pair is described, which directly uses all grayscale information available within the stereo pair (or stereo region), leading to very robust and precise results.
Real Time Localization and 3D Reconstruction
TLDR
A method is described that estimates the motion of a calibrated camera and the three-dimensional geometry of the environment, together with a fast local bundle adjustment method that ensures both good accuracy and consistency of the estimated camera poses along the sequence.
Accurate Quadrifocal Tracking for Robust 3D Visual Odometry
This paper describes a new image-based approach to tracking the 6DOF trajectory of a stereo camera pair using corresponding reference image pairs instead of explicit 3D feature reconstruction of the scene.
Pose Estimation, Tracking and Model Learning of Articulated Objects from Dense Depth Video using Projected Texture Stereo
TLDR
This paper presents an approach for detecting, tracking, and learning 3D articulation models for doors and drawers without using artificial markers; it uses a highly efficient sampling-based approach to rectangle detection in dense depth images obtained from a self-developed projected-texture stereo vision system.
Live dense reconstruction with a single moving camera
TLDR
This work takes point-based real-time structure from motion (SFM) as a starting point, generating accurate 3D camera pose estimates and a sparse point cloud, and warps a base mesh into highly accurate depth maps based on view-predictive optical flow and a constrained scene flow update.
Visual odometry
  • D. Nistér, O. Naroditsky, J. Bergen
  • Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004)
  • 2004
TLDR
A system is presented that estimates the motion of a stereo head or a single moving camera from video input in real time with low delay; the motion estimates are used for navigational purposes.
Real-time markerless tracking for augmented reality: the virtual visual servoing framework
TLDR
In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach; the method has been validated on several complex image sequences, including outdoor environments.
Appearance-based SLAM relying on a hybrid laser/omnidirectional sensor
TLDR
By combining the information from an omnidirectional camera and a laser range finder, reliable 3D positioning and an accurate 3D representation of the environment are obtained that are robust to illumination changes, even in the presence of occluding and moving objects.