Corpus ID: 244728383

Online Mutual Adaptation of Deep Depth Prediction and Visual SLAM

Shing Yan Loo, Moein Shakeri, Sai Hong Tang, Syamsiah Mashohor, Hong Zhang
Accurate depth prediction by a CNN is a major challenge for its wide use in practical visual SLAM applications, such as enhanced camera tracking and dense mapping. This paper sets out to answer the following question: can we tune a depth prediction CNN with the help of a visual SLAM algorithm, even if the CNN is not trained for the current operating environment, in order to benefit the SLAM performance? To this end, we propose a novel online adaptation framework consisting of two… 

An Overview on Visual SLAM: From Tradition to Semantic

This paper introduces the development of VSLAM technology from two aspects, traditional VSLAM and semantic VSLAM combined with deep learning, and focuses on the development of semantic VSLAM based on deep learning.

HDPV-SLAM: Hybrid Depth-augmented Panoramic Visual SLAM for Mobile Mapping System with Tilted LiDAR and Panoramic Visual Camera

This paper proposes a hybrid depth association module that optimally combines depth information estimated by two independent procedures, feature-based triangulation and monocular depth estimation, and its accuracy surpasses that of state-of-the-art (SOTA) SLAM systems.

CoVIO: Online Continual Learning for Visual-Inertial Odometry

This work introduces CoVIO for online continual learning of visual-inertial odometry and proposes a novel sampling strategy to maximize image diversity in a fixed-size replay buffer that targets the limited storage capacity of embedded devices.
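CoVIO's fixed-size, diversity-maximizing replay buffer can be illustrated with a minimal sketch. This is a hypothetical implementation, not CoVIO's actual strategy: here diversity is approximated by each sample's distance to its nearest neighbour in feature space, and the most redundant sample is evicted when the buffer is full.

```python
import numpy as np

class DiversityReplayBuffer:
    """Fixed-size replay buffer that keeps a diverse set of image features.
    Hypothetical sketch: on overflow, evict the sample whose nearest
    neighbour in the buffer is closest (i.e. the most redundant one)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.features = []  # one descriptor vector per stored image

    def _min_dist(self, i):
        # distance from sample i to its nearest neighbour in the buffer
        return min(np.linalg.norm(self.features[i] - f)
                   for j, f in enumerate(self.features) if j != i)

    def add(self, feat):
        self.features.append(np.asarray(feat, dtype=float))
        if len(self.features) > self.capacity:
            idx = min(range(len(self.features)), key=self._min_dist)
            self.features.pop(idx)  # drop the least diverse sample
```

The eviction rule keeps outliers (novel scenes) and discards near-duplicates, which matches the stated goal of maximizing image diversity under a storage cap on embedded devices.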

Real-Time Dense Monocular SLAM With Online Adapted Depth Prediction Network

Experimental results on public datasets and live application to obstacle avoidance of drones demonstrate that the real-time dense monocular SLAM system outperforms the state-of-the-art methods with greater map completeness and accuracy, and a smaller tracking error.

Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction

This paper proposes a joint narrow- and wide-baseline self-improving framework, in which the CNN-predicted depth is leveraged to perform pseudo RGB-D feature-based SLAM, leading to better accuracy and robustness than the monocular RGB SLAM baseline.

CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction

This paper improves SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a single-image depth prediction network, and evaluates the method on two outdoor datasets.
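The seeding idea behind CNN-SVO can be sketched in a few lines. This is an illustrative approximation, not the paper's exact formulation: rather than initializing a depth filter with the average scene depth and a very large variance, the seed mean comes from the CNN prediction at the feature location, with a variance proportional to it (the relative-uncertainty factor below is an assumption).

```python
def init_depth_filter(cnn_depth, rel_sigma=0.25):
    """Hypothetical sketch of CNN-guided depth-filter seeding:
    seed the filter mean with the CNN-predicted depth at the feature
    location, and set the standard deviation to a fixed fraction of
    that depth (rel_sigma is an assumed relative uncertainty)."""
    mean = float(cnn_depth)
    sigma = rel_sigma * mean
    return mean, sigma ** 2  # (mu, sigma^2) of the depth seed
```

A tighter, better-centred prior lets the recursive depth filter converge in fewer observations than a scene-average initialization would.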

DeepRelativeFusion: Dense Monocular SLAM using Single-Image Relative Depth Prediction

This paper proposes a dense monocular SLAM system, named DeepRelativeFusion, that is capable of recovering a globally consistent 3D structure and outperforms the state-of-the-art dense SLAM systems quantitatively in dense reconstruction accuracy by a large margin.

D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry

D3VO tightly incorporates the predicted depth, pose and uncertainty into a direct visual odometry method to boost both the front-end tracking as well as the back-end non-linear optimization.
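The role of predicted uncertainty in such a system can be illustrated with a generic heteroscedastic photometric term. This is a sketch of the general idea, not D3VO's actual loss: per-pixel brightness residuals are down-weighted by a predicted aleatoric uncertainty, with a log penalty that stops the network from inflating uncertainty everywhere.

```python
import numpy as np

def weighted_photometric_residual(i_ref, i_warped, sigma):
    """Hypothetical uncertainty-weighted photometric term:
    mean over pixels of |r|/sigma + log(sigma), where r is the
    brightness residual and sigma the predicted uncertainty."""
    r = np.abs(i_ref - i_warped)
    return float(np.mean(r / sigma + np.log(sigma)))
```

Pixels the network flags as unreliable (e.g. non-Lambertian surfaces or moving objects) contribute less to tracking and to the back-end optimization.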

Unsupervised Monocular Depth Estimation with Left-Right Consistency

This paper proposes a novel training objective that enables the convolutional neural network to learn to perform single-image depth estimation, despite the absence of ground-truth depth data, and produces state-of-the-art results for monocular depth estimation on the KITTI driving dataset.
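The left-right consistency idea at the heart of that objective can be sketched as a 1-D warp check. This is a simplified numpy illustration, not the paper's differentiable training code: the left disparity map should agree with the right disparity map sampled at the location each left pixel maps to.

```python
import numpy as np

def lr_consistency_loss(disp_left, disp_right):
    """Hypothetical sketch of a left-right disparity consistency term:
    warp the right disparity map into the left view along x (with
    linear interpolation) and penalize disagreement with disp_left."""
    h, w = disp_left.shape
    xs = np.tile(np.arange(w, dtype=float), (h, 1))
    x_r = np.clip(xs - disp_left, 0, w - 1)   # where each left pixel lands
    x0 = np.floor(x_r).astype(int)
    x1 = np.clip(x0 + 1, 0, w - 1)
    a = x_r - x0
    rows = np.arange(h)[:, None]
    warped = (1 - a) * disp_right[rows, x0] + a * disp_right[rows, x1]
    return float(np.mean(np.abs(disp_left - warped)))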

LSD-SLAM: Large-Scale Direct Monocular SLAM

A novel direct tracking method which operates on \(\mathfrak{sim}(3)\), thereby explicitly detecting scale-drift, and an elegant probabilistic solution to include the effect of noisy depth values into tracking are introduced.

Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry

The Deep Virtual Stereo Odometry incorporates deep depth predictions into Direct Sparse Odometry (DSO) as direct virtual stereo measurements and designs a novel deep network that refines predicted depth from a single image in a two-stage process.

CoMoDA: Continuous Monocular Depth Adaptation Using Past Experiences

This paper proposes a novel self-supervised Continuous Monocular Depth Adaptation method (CoMoDA), which adapts the pretrained model on a test video on the fly and achieves state-of-the-art depth estimation performance and surpass all existing methods using standard architectures.

Digging Into Self-Supervised Monocular Depth Estimation

It is shown that a surprisingly simple model, and associated design choices, lead to superior predictions, and together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods.