CU-Net: LiDAR Depth-Only Completion With Coupled U-Net
@article{Wang2022CUNetLD,
  title   = {CU-Net: LiDAR Depth-Only Completion With Coupled U-Net},
  author  = {Yufei Wang and Yuchao Dai and Qi Liu and Peng Yang and Jiadai Sun and Bo Li},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2022},
  volume  = {7},
  pages   = {11476-11483}
}
LiDAR depth-only completion is the challenging task of estimating dense depth maps solely from the sparse measurement points obtained by LiDAR. Although depth-only methods have been widely developed, a significant performance gap remains relative to RGB-guided methods that exploit extra color images. We find that existing depth-only methods can obtain satisfactory results in areas where the measurement points are almost accurate and evenly distributed (denoted as normal areas), while the…
One Citation
MFF-Net: Towards Efficient Monocular Depth Completion With Multi-Modal Feature Fusion
- Computer Science · IEEE Robotics and Automation Letters
- 2023
This work proposes an efficient multi-modal feature fusion based depth completion framework (MFF-Net) that extracts and fuses features of different modalities in both the encoding and decoding processes, recovering more depth detail with better performance.
References
Showing 1-10 of 29 references
A Surface Geometry Model for LiDAR Depth Completion
- Computer Science · IEEE Robotics and Automation Letters
- 2021
A novel non-learning depth completion method that exploits local surface geometry, enhanced by an outlier removal algorithm designed to discard LiDAR points incorrectly mapped from occluded regions.
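The outlier-removal idea can be illustrated with a crude local test (a hypothetical sketch, not the paper's surface-geometry model): a projected LiDAR point that is much farther than the closest valid point in its image neighborhood likely belongs to occluded background and can be dropped. The function name, window size, and ratio threshold below are all assumptions.

```python
import numpy as np

def remove_occlusion_outliers(sparse_depth, win=2, ratio=1.5):
    """Drop sparse depth points that are much farther than the nearest
    (closest-depth) valid point in their local window. Sketch only; the
    paper's geometric test is more involved. Zeros mark missing pixels."""
    out = sparse_depth.copy()
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)
    for y, x in zip(ys, xs):
        patch = sparse_depth[max(0, y - win):y + win + 1,
                             max(0, x - win):x + win + 1]
        valid = patch[patch > 0]
        if sparse_depth[y, x] > ratio * valid.min():
            out[y, x] = 0.0  # far behind the local foreground: treat as occluded
    return out
```

An isolated point is always kept, since it is its own local minimum.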
Sparse and Noisy LiDAR Completion with RGB Guidance and Uncertainty
- Computer Science · 2019 16th International Conference on Machine Vision Applications (MVA)
- 2019
This work argues that simple depth completion does not require a deep network, and proposes a fusion method with RGB guidance from a monocular camera to leverage object information and correct mistakes in the sparse input.
Robust Depth Completion with Uncertainty-Driven Loss Functions
- Computer Science · AAAI
- 2022
This work introduces uncertainty-driven loss functions to improve the robustness of depth completion and handle its inherent uncertainty, and proposes a multiscale joint prediction model that simultaneously predicts depth and uncertainty maps.
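A common way to realize an uncertainty-driven loss is the heteroscedastic formulation, where the network predicts a per-pixel log-variance alongside depth and the residual is attenuated by the predicted uncertainty. The sketch below follows that generic recipe; the function name and exact terms are assumptions, not necessarily this paper's loss.

```python
import torch

def uncertainty_loss(pred_depth, log_var, gt_depth, mask):
    """Generic heteroscedastic uncertainty loss (sketch, not the paper's
    exact formulation): large predicted log-variance down-weights the
    residual but is penalized by the additive log term."""
    residual = torch.abs(pred_depth - gt_depth)
    per_pixel = residual * torch.exp(-log_var) + log_var
    return per_pixel[mask].mean()  # supervise only pixels with ground truth
```

With `log_var = 0` everywhere, this reduces to a plain masked L1 loss.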
From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction
- Computer Science, Geology · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper formulates image reconstruction from sparse depth as an auxiliary task during training, supervised by unlabelled gray-scale images, and shows that depth completion can be significantly improved via this auxiliary supervision.
DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
A deep learning architecture that produces accurate dense depth for outdoor scenes from a single color image and a sparse depth map, improving upon the state-of-the-art performance on the KITTI depth completion benchmark.
HMS-Net: Hierarchical Multi-Scale Sparsity-Invariant Network for Sparse Depth Completion
- Computer Science · IEEE Transactions on Image Processing
- 2020
A sparsity-invariant multi-scale encoder-decoder network (HMS-Net) for handling sparse inputs and sparse feature maps is proposed; its model without RGB guidance ranks 1st among all peer-reviewed methods that use no RGB information, and its RGB-guided model ranks 2nd among all RGB-guided methods.
Distance Transform Pooling Neural Network for LiDAR Depth Completion
- Computer Science · IEEE Transactions on Neural Networks and Learning Systems
- 2021
A recurrent distance transform pooling (DTP) module is proposed that aggregates multi-level nearby information ahead of the backbone network, addressing the sparsity challenge of recovering dense depth maps from sparse depth sensors.
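The intuition of propagating the nearest valid measurements into empty pixels before the backbone can be approximated with a simple iterative neighborhood fill. This is a sketch under assumed semantics (zeros mark missing depth; the function name is hypothetical), not the paper's recurrent DTP module.

```python
import numpy as np

def nearest_fill(sparse_depth, iters=10):
    """Iteratively copy a valid depth value from the 8-neighborhood into
    each empty (zero) pixel, growing one ring per iteration. A rough
    stand-in for nearest-neighbor aggregation, not the paper's DTP."""
    d = sparse_depth.copy()
    h, w = d.shape
    for _ in range(iters):
        filled = d.copy()
        for y in range(h):
            for x in range(w):
                if d[y, x] == 0:
                    patch = d[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                    vals = patch[patch > 0]
                    if vals.size:
                        filled[y, x] = vals[0]  # take any valid neighbor
        if (filled == d).all():
            break  # nothing left to fill
        d = filled
    return d
```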
3D Packing for Self-Supervised Monocular Depth Estimation
- Computer Science · 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This work proposes a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos, which outperforms other self-, semi-, and fully-supervised methods on the KITTI benchmark.
Learning Guided Convolutional Network for Depth Completion
- Computer Science · IEEE Transactions on Image Processing
- 2021
Inspired by guided image filtering, a novel guided network is designed to predict kernel weights from the guidance image; these predicted kernels are then applied to extract the depth image features.
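The core mechanism, predicting spatially varying kernels from the guidance features and applying them to the depth features, can be sketched with `unfold`. This is a minimal illustration; the class name, kernel normalization, and single-layer predictor are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedConvSketch(nn.Module):
    """Content-dependent convolution in the spirit of guided networks
    (sketch only): a per-pixel 3x3 kernel is predicted from the guidance
    features and applied to the depth features."""
    def __init__(self, channels):
        super().__init__()
        # Predict 9 kernel weights (3x3) per spatial location from guidance.
        self.kernel_pred = nn.Conv2d(channels, 9, kernel_size=3, padding=1)

    def forward(self, guide_feat, depth_feat):
        b, c, h, w = depth_feat.shape
        k = self.kernel_pred(guide_feat)               # (b, 9, h, w)
        k = torch.softmax(k, dim=1)                    # normalize kernel weights
        patches = F.unfold(depth_feat, 3, padding=1)   # (b, c*9, h*w)
        patches = patches.view(b, c, 9, h * w)
        k = k.view(b, 1, 9, h * w)
        out = (patches * k).sum(dim=2)                 # per-pixel weighted sum
        return out.view(b, c, h, w)
```

The softmax keeps each predicted kernel a convex combination of the local depth features, one simple way to stabilize training.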
Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation
- Computer Science · 2018 International Conference on 3D Vision (3DV)
- 2018
This proposal efficiently learns sparse features without the need for an additional validity mask, and works with densities as low as 0.8% (an 8-layer LiDAR).