Parse geometry from a line: Monocular depth estimation with partial laser observation

@article{Liao2017ParseGF,
  title={Parse geometry from a line: Monocular depth estimation with partial laser observation},
  author={Yiyi Liao and Lichao Huang and Yue Wang and Sarath Kodagoda and Yinan Yu and Y. Liu},
  journal={2017 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2017},
  pages={5059-5066}
}
Many standard robotic platforms are equipped with at least a fixed 2D laser range finder and a monocular camera. Although such platforms lack dedicated 3D depth sensors, knowledge of depth is essential to many robotic tasks, so there has been increasing interest in depth estimation from monocular images. Because this task is inherently ambiguous, purely data-driven depth estimates can be unreliable in robotics applications. In this paper, we have…
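The setup described in the abstract, a single scan line of laser depth paired with a monocular image, suggests projecting the 2D scan into the image plane as a sparse depth channel before estimation. The sketch below illustrates that projection step only; it is a hypothetical preprocessing example, not the authors' implementation, and the function name and calibration inputs (`K`, `R`, `t`) are assumptions.

```python
import numpy as np

def project_scan_to_sparse_depth(ranges, angles, K, R, t, height, width):
    """Project a planar 2D laser scan into the image as a sparse depth map.

    ranges : (N,) range readings in meters
    angles : (N,) beam angles in the laser scan plane (radians)
    K      : (3, 3) camera intrinsics
    R, t   : laser-to-camera rotation (3, 3) and translation (3,)
    """
    # Laser points lie in the scan plane (z = 0 in the laser frame).
    pts_laser = np.stack([ranges * np.cos(angles),
                          ranges * np.sin(angles),
                          np.zeros_like(ranges)], axis=1)
    # Transform into the camera frame and project with a pinhole model.
    pts_cam = pts_laser @ R.T + t
    valid = pts_cam[:, 2] > 0                 # keep points in front of the camera
    uvw = pts_cam[valid] @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    depth = np.zeros((height, width), dtype=np.float32)  # 0 = unobserved
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[v[inside], u[inside]] = pts_cam[valid][inside, 2]
    return depth
```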


Towards Real-Time Monocular Depth Estimation for Robotics: A Survey
TLDR
A comprehensive survey of MDE covering various methods is provided; popular performance evaluation metrics are introduced, publicly available datasets are summarized, and some promising directions for future research are presented.
Predicting Unobserved Space for Planning via Depth Map Augmentation
TLDR
This work presents an augmented planning system and investigates the effects of employing state-of-the-art depth completion techniques, specifically trained to augment sparse depth maps originating from RGB-D sensors, semi-dense methods and stereo matchers.
Depth Completion via Inductive Fusion of Planar LIDAR and Monocular Camera
TLDR
An inductive late-fusion block, inspired by a probability model, is introduced to better fuse different sensor modalities; it shows promising results compared to previous approaches on both benchmark datasets and a simulated dataset with various 3D densities.
Inferring Depth Maps from 2-Dimensional Laser Ranging Data in a Simulated Environment
TLDR
This thesis explores the feasibility of inferring a full depth map from extremely sparse 2D LiDAR measurements (0.024% of input-image pixels) with a neural network, and shows that the tested network infers shapes but struggles with blurry object boundaries.
Semi-supervised Depth Estimation from Sparse Depth and a Single Image for Dense Map Construction
TLDR
The main idea is to employ a set of new loss functions consisting of a photometric reconstruction consistency loss, a depth loss, a nearby-frame geometric consistency loss, and a smoothness loss, and to propose a ResNet-based depth estimation network; the proposed method is shown to be superior to state-of-the-art methods on both a raw LiDAR scan dataset and a semi-dense annotation dataset (a sketch of the smoothness term follows this entry).
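Of the four loss terms named in this summary, the smoothness loss has a widely used edge-aware form. The sketch below shows one common formulation as an assumption; the entry does not spell out the exact definition the paper uses.

```python
import numpy as np

def smoothness_loss(depth, image):
    """Edge-aware smoothness: penalize depth gradients, attenuated where the
    image itself has strong gradients (a standard formulation; its pairing
    with the other three loss terms here is an assumption).

    depth : (H, W) predicted depth
    image : (H, W, 3) RGB image in [0, 1]
    """
    dzx = np.abs(np.diff(depth, axis=1))               # horizontal depth gradient
    dzy = np.abs(np.diff(depth, axis=0))               # vertical depth gradient
    dix = np.abs(np.diff(image, axis=1)).mean(axis=-1)  # horizontal image gradient
    diy = np.abs(np.diff(image, axis=0)).mean(axis=-1)  # vertical image gradient
    # Down-weight the penalty across strong image edges.
    return (dzx * np.exp(-dix)).mean() + (dzy * np.exp(-diy)).mean()
```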
Hallucinating Robots: Inferring Obstacle Distances from Partial Laser Measurements
TLDR
This work presents a method to estimate, from raw 2D laser data, the obstacle distances that richer sensors such as 3D lasers or RGB-D cameras would provide, and demonstrates in real time on a Care-O-bot 4 that the trained network can successfully infer obstacle distances from partial 2D laser readings.
Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR
TLDR
FusionDepth, a novel two-stage network, is proposed to advance self-supervised monocular dense depth learning by leveraging low-cost sparse LiDAR; an efficient feed-forward network then corrects the errors in the initial depth maps in pseudo-3D space with real-time performance.
Deep Depth Estimation from Visual-Inertial SLAM
TLDR
This paper uses the gravity estimate available from the VI-SLAM to warp the input image to the orientation prevailing in the training dataset, which results in a significant performance gain for the surface normal estimates, and thus for the dense depth estimates.
Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps
TLDR
This article introduces a novel densification method for depth maps, using the Hilbert Maps framework, and shows a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage.
...

References

Showing 1-10 of 22 references
Fast robust monocular depth estimation for Obstacle Detection with fully convolutional networks
TLDR
This work proposes a novel appearance-based obstacle detection system that is able to detect obstacles at very long range and at very high speed (~300 Hz), without making assumptions about the type of motion.
Learning Depth from Single Monocular Images
TLDR
This work begins by collecting a training set of monocular images (of unstructured outdoor environments including forests, trees, buildings, etc.) and their corresponding ground-truth depth maps, and then applies supervised learning to predict the depth map as a function of the image.
Multi-modal Auto-Encoders as Joint Estimators for Robotics Scene Understanding
TLDR
It is shown that suitably designed Multi-modal Auto-Encoders can solve the depth estimation and the semantic segmentation problems simultaneously, in the partial or even complete absence of some of the input modalities.
Estimating Depth From Monocular Images as Classification Using Deep Fully Convolutional Residual Networks
TLDR
By performing depth classification instead of regression, this paper can easily obtain the confidence of a depth prediction in the form of a probability distribution; it applies an information-gain loss during training to exploit predictions that are close to the ground truth, and uses fully connected conditional random fields for post-processing to further improve performance (a sketch of the classification formulation follows this entry).
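The classification formulation summarized above can be made concrete with a small sketch: quantize depth into log-spaced bins, train a classifier over the bins, and read off both a depth estimate and its confidence from the predicted distribution. The bin range, bin count, and function names below are assumptions for illustration.

```python
import numpy as np

# Hypothetical discretization: depth quantized into log-spaced bins
# (the range and number of bins are assumptions, not the paper's values).
d_min, d_max, num_bins = 0.5, 80.0, 64
bin_centers = np.exp(np.linspace(np.log(d_min), np.log(d_max), num_bins))

def depth_to_class(depth):
    """Map a continuous depth value (meters) to its nearest log-space bin."""
    return int(np.argmin(np.abs(np.log(depth) - np.log(bin_centers))))

def depth_and_confidence(probs):
    """Given per-pixel softmax bin probabilities (..., num_bins), return a
    point depth estimate and its confidence (the winning bin's probability)."""
    idx = probs.argmax(axis=-1)
    return bin_centers[idx], probs.max(axis=-1)
```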
Are we ready for autonomous driving? The KITTI vision benchmark suite
TLDR
The autonomous driving platform is used to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM, and 3D object detection, revealing that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world.
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
TLDR
This paper employs two deep network stacks, one that makes a coarse global prediction based on the entire image and another that refines this prediction locally, and applies a scale-invariant error to measure depth relations rather than scale.
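The scale-invariant error referenced in this summary has a standard log-space form; the sketch below follows the commonly cited definition, with the weighting lam treated as an assumption (0.5 is a frequent choice).

```python
import numpy as np

def scale_invariant_error(pred, gt, lam=0.5):
    """Scale-invariant log error: penalizes errors in relative depth
    structure while discounting a global scale offset (lam weights the
    scale-discounting term)."""
    d = np.log(pred) - np.log(gt)   # per-pixel log difference
    n = d.size
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / (n * n)
```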
Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields
TLDR
A deep convolutional neural field model for estimating depth from single monocular images is presented, aiming to jointly explore the capacity of deep CNNs and continuous CRFs, and a deep structured learning scheme that learns the unary and pairwise potentials of the continuous CRF in a unified deep CNN framework is proposed.
A framework for multi-session RGBD SLAM in low dynamic workspace environment
The Stixel World - A Compact Medium Level Representation of the 3D-World
TLDR
The stixel-world turns out to be a compact but flexible representation of the three-dimensional traffic situation that can be used as the common basis for the scene understanding tasks of driver assistance and autonomous systems.
Image and Sparse Laser Fusion for Dense Scene Reconstruction
TLDR
The aim is to assign a range value to each image pixel using both image appearance and sparse laser data, in order to reconstruct the metric geometry of a scene imaged with a single camera and a scanning laser.
...