Predicting Unobserved Space for Planning via Depth Map Augmentation
@article{Fehr2019PredictingUS,
  title={Predicting Unobserved Space for Planning via Depth Map Augmentation},
  author={Marius Fehr and Tim Taubner and Yang Liu and Roland Y. Siegwart and C{\'e}sar Cadena},
  journal={2019 19th International Conference on Advanced Robotics (ICAR)},
  year={2019},
  pages={30-36}
}
Safe and efficient path planning is crucial for autonomous mobile robots. A prerequisite for path planning is to have a comprehensive understanding of the 3D structure of the robot's environment. On Micro Air Vehicles (MAVs) this is commonly achieved using low-cost sensors, such as stereo or RGB-D cameras. These sensors may fail to provide depth measurements in textureless or IR-absorbing areas and have limited effective range. In path planning, this results in inefficient trajectories or…
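The core idea described in the abstract, complementing raw sensor depth with predicted values where the sensor fails, can be sketched as a per-pixel merge. This is a minimal illustration, not the paper's actual pipeline; the function name `augment_depth` and the validity criteria (zero reading or beyond effective range) are assumptions for the sketch:

```python
import numpy as np

def augment_depth(raw_depth, predicted_depth, max_range=5.0):
    """Fill invalid pixels in a raw depth map with network predictions.

    Pixels with no sensor return (0.0) or readings beyond the sensor's
    effective range are replaced by predicted values; valid raw
    measurements are kept unchanged.
    """
    augmented = raw_depth.copy()
    invalid = (raw_depth <= 0.0) | (raw_depth > max_range)
    augmented[invalid] = predicted_depth[invalid]
    return augmented

# Toy 2x2 example: one missing pixel (0.0) and one out-of-range reading.
raw = np.array([[1.2, 0.0],
                [6.3, 2.5]])
pred = np.array([[1.1, 3.0],
                 [4.8, 2.4]])
print(augment_depth(raw, pred))
# [[1.2 3. ]
#  [4.8 2.5]]
```

Keeping valid raw measurements untouched means the augmentation can only add information for the planner, never degrade pixels the sensor observed directly.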
2 Citations
Efficient Volumetric Mapping Using Depth Completion With Uncertainty for Robotic Navigation
- Computer Science · ArXiv · 2020
This work introduces a deep learning architecture providing uncertainty estimates for the depth completion of RGB-D images and exploits the inferred missing depth values and depth uncertainty to complement raw depth images and improve the speed and quality of free space mapping.
Volumetric Occupancy Mapping With Probabilistic Depth Completion for Robotic Navigation
- Computer Science · IEEE Robotics and Automation Letters · 2021
This work introduces a deep learning architecture providing uncertainty estimates for the depth completion of RGB-D images and exploits the inferred missing depth values and depth uncertainty to complement raw depth images and improve the speed and quality of free space mapping.
References
Showing 1-10 of 37 references
Parse geometry from a line: Monocular depth estimation with partial laser observation
- Computer Science · 2017 IEEE International Conference on Robotics and Automation (ICRA)
This paper constructs a dense reference map from the sparse laser range data, redefining the depth estimation task as estimating the distance between the real and the reference depth; it builds a novel residual-of-residual neural network and tightly combines the classification and regression losses for continuous depth estimation.
Voxblox: Incremental 3D Euclidean Signed Distance Fields for on-board MAV planning
- Computer Science · 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
This work proposes a method to incrementally build ESDFs from Truncated Signed Distance Fields (TSDFs), a common implicit surface representation used in computer graphics and vision, and shows that it can build TSDFs faster than Octomaps, and that it is more accurate than occupancy maps.
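The TSDF representation that Voxblox builds on can be illustrated along a single camera ray: the signed distance to the surface is stored per voxel, but clamped to a truncation band around the surface. This is a hedged one-dimensional sketch of the general TSDF idea, not Voxblox's implementation; the function name `tsdf_1d` is invented for illustration:

```python
import numpy as np

def tsdf_1d(voxel_centers, surface, trunc):
    """Truncated signed distance along one ray.

    Positive in front of the surface crossing, negative behind it,
    clamped to the truncation band [-trunc, +trunc].
    """
    sdf = surface - voxel_centers  # positive before the surface is reached
    return np.clip(sdf, -trunc, trunc)

centers = np.arange(0.0, 2.0, 0.25)          # voxel centers along the ray (m)
tsdf = tsdf_1d(centers, surface=1.0, trunc=0.5)
print(tsdf)                                   # values clamped to [-0.5, 0.5]
```

An ESDF, in contrast, stores the full (untruncated) Euclidean distance everywhere, which is what a planner needs for collision checking; the cited work's contribution is building that ESDF incrementally from TSDF updates.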
Fusion of Stereo and Still Monocular Depth Estimates in a Self-Supervised Learning Context
- Computer Science · 2018 IEEE International Conference on Robotics and Automation (ICRA)
It is shown that the fused estimates lead to higher performance than the stereo vision estimates alone, demonstrating that even rather limited CNNs can help provide stereo-vision-equipped robots with more reliable depth maps for autonomous navigation.
Deep Neural Network for Real-Time Autonomous Indoor Navigation
- Computer Science · ArXiv · 2015
A deep learning model, a Convolutional Neural Network (ConvNet), is used to learn a controller strategy that mimics an expert pilot's choice of action, and a practical system is demonstrated in which a quadcopter autonomously navigates indoors and finds a specific target using a single camera.
Receding Horizon "Next-Best-View" Planner for 3D Exploration
- Computer Science · 2016 IEEE International Conference on Robotics and Automation (ICRA)
A novel path planning algorithm for the autonomous exploration of unknown space using aerial robotic platforms employs a receding horizon "next-best-view" scheme, and its good scaling properties enable the handling of large-scale and complex problem setups.
Maplab: An Open Framework for Research in Visual-Inertial Mapping and Localization
- Computer Science · IEEE Robotics and Automation Letters · 2018
Robust and accurate visual-inertial estimation is crucial to many of today's challenges in robotics. Being able to localize against a prior map and obtain accurate and drift-free pose estimates can…
Deeper Depth Prediction with Fully Convolutional Residual Networks
- Computer Science · 2016 Fourth International Conference on 3D Vision (3DV)
A fully convolutional architecture, encompassing residual learning, is proposed to model the ambiguous mapping between monocular images and depth maps, and a novel way to efficiently learn feature-map up-sampling within the network is presented.
Self-Supervised Sparse-to-Dense: Self-Supervised Depth Completion from LiDAR and Monocular Camera
- Computer Science · 2019 International Conference on Robotics and Automation (ICRA)
A deep regression model is developed to learn a direct mapping from sparse depth (and color images) input to dense depth prediction and a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels is proposed.
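The sparse-to-dense input described above, color plus sparse LiDAR depth, is commonly assembled into a single multi-channel tensor. The sketch below illustrates one plausible way to do this; the function `make_input` and the choice to append an explicit validity mask are assumptions for illustration, not the cited paper's exact input format:

```python
import numpy as np

def make_input(rgb, sparse_depth):
    """Stack color and sparse depth into one network input tensor.

    The sparse depth channel is zero wherever no LiDAR return exists; a
    binary validity mask is appended so the network can distinguish
    'no measurement' from 'measured depth of zero'.
    """
    mask = (sparse_depth > 0).astype(np.float32)
    return np.concatenate(
        [rgb, sparse_depth[..., None], mask[..., None]], axis=-1)

rgb = np.zeros((4, 4, 3), dtype=np.float32)     # H x W x 3 color image
sparse = np.zeros((4, 4), dtype=np.float32)     # H x W sparse depth
sparse[1, 2] = 3.7                              # a single LiDAR return
x = make_input(rgb, sparse)
print(x.shape)                                  # (4, 4, 5)
```

The appeal of the self-supervised framework in the cited work is that dense targets for such inputs are never needed: photometric consistency across frames supervises the dense prediction instead.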
Learning monocular reactive UAV control in cluttered natural environments
- Computer Science · 2013 IEEE International Conference on Robotics and Automation
A system is presented that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments, using only a single cheap camera to perceive the environment and recent state-of-the-art imitation learning techniques to train a controller that avoids trees by adapting the MAV's heading.
Incremental micro-UAV motion replanning for exploring unknown environments
- Computer Science · 2013 IEEE International Conference on Robotics and Automation
A graph search method is presented that, despite the high dimensionality of the problem, is capable of generating dynamically feasible motions in real time, enabled by leveraging the differential flatness property of the system and by developing a structured search space based on state lattice motion primitives.