
VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough

@article{Min2021VOLDORSLAMFT,
  title={VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough},
  author={Zhixiang Min and Enrique Dunn},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.06800}
}
We present a dense-indirect SLAM system that uses external dense optical flows as input. We extend the recent probabilistic visual odometry model VOLDOR [1] by incorporating geometric priors to 1) robustly bootstrap estimation from monocular capture, while 2) seamlessly supporting stereo and/or RGB-D input imagery. Our customized back-end tightly couples our intermediate geometric estimates with an adaptive priority scheme managing the connectivity of an incremental pose graph. We…
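The abstract outlines a modular pipeline: an external dense optical-flow estimator feeds a probabilistic visual odometry front-end, whose relative pose estimates are accumulated by a back-end managing an incremental pose graph. Below is a minimal structural sketch of such a pipeline, not the authors' implementation: every name in it (estimate_flow, voldor_update, PoseGraph, run_slam) is a hypothetical placeholder, and the geometric inference itself is stubbed out.

import numpy as np

class PoseGraph:
    """Toy incremental pose graph: nodes are 4x4 camera-to-world poses.
    (Hypothetical placeholder, not the paper's back-end.)"""
    def __init__(self):
        self.poses = [np.eye(4)]   # world frame anchors the first camera
        self.edges = []            # (i, j, relative_pose, weight)

    def add_pose(self, rel_pose):
        """Chain a new pose from the previous one via a relative transform."""
        self.poses.append(self.poses[-1] @ rel_pose)
        i = len(self.poses) - 2
        self.edges.append((i, i + 1, rel_pose, 1.0))

def estimate_flow(img_a, img_b):
    """Stand-in for an external dense optical-flow estimator;
    returns an HxWx2 flow field (here all zeros)."""
    h, w = img_a.shape[:2]
    return np.zeros((h, w, 2), dtype=np.float32)

def voldor_update(flow, depth_prior=None):
    """Stand-in for the probabilistic VO front-end: infer a relative
    camera pose from one or more flow fields. Returns identity here;
    the real model fits a residual distribution over the flow."""
    return np.eye(4)

def run_slam(frames):
    graph = PoseGraph()
    for prev, curr in zip(frames, frames[1:]):
        flow = estimate_flow(prev, curr)   # external, modular input
        rel_pose = voldor_update(flow)     # dense-indirect VO step
        graph.add_pose(rel_pose)           # back-end grows the graph
    return graph

The geometric priors and the adaptive edge-priority scheme the abstract describes are omitted here; this sketch only shows how the three stages would be chained.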


References

Showing 1–10 of 36 references.
[1] VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals
A dense indirect visual odometry method that takes externally estimated optical flow fields as input instead of hand-crafted feature correspondences; it generalizes well to different state-of-the-art optical flow methods, making the approach modular and agnostic to the choice of optical flow estimator. (The log-logistic density the title refers to is sketched after this reference list.)
ElasticFusion: Dense SLAM Without A Pose Graph
A system capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments explored with an RGB-D camera, in an incremental online fashion and without pose graph optimisation or any post-processing steps.
Dense visual SLAM for RGB-D cameras
Proposes a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels, together with an entropy-based similarity measure for keyframe selection and loop closure detection.
LSD-SLAM: Large-Scale Direct Monocular SLAM
Introduces a novel direct tracking method operating on \(\mathfrak{sim}(3)\), thereby explicitly detecting scale drift, and an elegant probabilistic solution for incorporating the effect of noisy depth values into tracking.
Direct Sparse Odometry
Experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, in terms of both tracking accuracy and robustness.
GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose
Proposes an adaptive geometric consistency loss to increase robustness to outliers and non-Lambertian regions, resolving occlusions and texture ambiguities effectively; achieves state-of-the-art results on all three tasks, outperforming prior unsupervised methods and performing comparably to supervised ones.
ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras
Presents ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo, and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities; in most cases it is the most accurate SLAM solution.
BAD SLAM: Bundle Adjusted Direct RGB-D SLAM
Presents a novel, fast direct bundle adjustment formulation implemented in a real-time dense RGB-D SLAM algorithm; the proposed algorithm outperforms all other evaluated SLAM methods.
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
Presents ORB-SLAM3, the first system able to perform visual, visual-inertial, and multi-map SLAM with monocular, stereo, and RGB-D cameras, using pin-hole and fisheye lens models; it is as robust as the best systems available in the literature and significantly more accurate.
Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme
Proposes a novel approach for estimating the ego-motion of a vehicle from a sequence of stereo images, based directly on the trifocal geometry between image triples, so no time-expensive recovery of the 3-dimensional scene structure is needed.
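The title of reference [1] names log-logistic dense optical-flow residuals as VOLDOR's probabilistic model. For orientation only, the standard log-logistic (Fisk) density with scale \(\alpha > 0\) and shape \(\beta > 0\) is

\[ f(x;\alpha,\beta) = \frac{(\beta/\alpha)\,(x/\alpha)^{\beta-1}}{\bigl(1+(x/\alpha)^{\beta}\bigr)^{2}}, \qquad x > 0. \]

How VOLDOR parameterizes this density or fits it to flow residuals is not specified in this excerpt.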