• Corpus ID: 195317017

Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion

@inproceedings{Zhong2019DeepRC,
  title={Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion},
  author={Yiqi Zhong and Cho-Ying Wu and Suya You and Ulrich Neumann},
  booktitle={Advances in Neural Information Processing Systems},
  year={2019}
}
In this paper, we propose our Correlation For Completion Network (CFCNet), an end-to-end deep learning model that uses the correlation between two data sources to perform sparse depth completion. CFCNet learns to capture, to the largest extent, the semantically correlated features between RGB and depth information. Through pairs of image pixels and the visible measurements in a sparse depth map, CFCNet facilitates feature-level mutual transformation of different data sources. Such a… 
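CFCNet builds on canonical correlation analysis to relate RGB and depth features. As a point of reference, the classical linear CCA that the deep variant generalizes can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation; the function name and the ridge term `reg` are choices made here for the sketch:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-6):
    """Classical linear CCA: canonical correlations between two views
    X (n x p) and Y (n x q). reg is a small ridge term for stability."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Inverse matrix square root via symmetric eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Singular values of the whitened cross-covariance are the
    # canonical correlations, in descending order.
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.clip(np.linalg.svd(T, compute_uv=False), 0.0, 1.0)

# Toy check: two views driven by a shared latent factor correlate strongly
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                          # shared latent signal
X = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
Y = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
print(canonical_correlations(X, Y)[0])                 # close to 1
```

Deep CCA replaces the linear projections with learned network features; CFCNet applies this idea to RGB patches paired with visible sparse-depth measurements.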

Figures and Tables from this paper

Citations

Learning an Efficient Multimodal Depth Completion Model

A light but efficient depth completion network is proposed, which consists of a two-branch global and local depth prediction module and a funnel convolutional spatial propagation network and can outperform some state-of-the-art methods with a lightweight architecture.

Adaptive Context-Aware Multi-Modal Network for Depth Completion

The proposed model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks, i.e., KITTI and NYU-v2, while having fewer parameters than the latest models.

Deep Depth Completion from Extremely Sparse Data: A Survey

A comprehensive literature review of the related studies from the design aspects of network architectures, loss functions, benchmark datasets, and learning strategies with a proposal of a novel taxonomy that categorizes existing methods is provided.

Deep Depth Completion: A Survey

A comprehensive literature review of the related studies from the design aspects of network architectures, loss functions, benchmark datasets, and learning strategies is provided, with a proposal of a novel taxonomy that categorizes existing methods.

Sparse Depth Completion with Semantic Mesh Deformation Optimization

This work proposes a neural network with post-optimization, which takes an RGB image and sparse depth samples as input and predicts the complete depth map and makes three major contributions to advance the state-of-the-art: an improved backbone network architecture named EDNet, a semantic edge-weighted loss function and a semantic mesh deformation optimization method.

CostDCNet: Cost Volume Based Depth Completion for a Single RGB-D Image

This work proposes a novel depth completion framework, CostDCNet, based on the cost volume-based depth estimation approach that has been successfully employed for multi-view stereo (MVS), and demonstrates depth completion results comparable to or better than the state-of-the-art methods.

DAN-Conv: Depth aware non-local convolution for LiDAR depth completion

To efficiently process sparse depth input, a Depth Aware Non-local Convolution (DAN-Conv) is proposed, which augments the spatial sampling locations of a convolution operation.

Wasserstein Generative Adversarial Network for Depth Completion with Anisotropic Diffusion Depth Enhancement

This work uses an adapted Wasserstein Generative Adversarial Network architecture in place of the traditional autoencoder approach, together with a post-processing step that preserves valid depth measurements received from the input and further enhances the depth precision of the results.

SelfDeco: Self-Supervised Monocular Depth Completion in Challenging Indoor Environments

We present a novel algorithm for self-supervised monocular depth completion. Our approach is based on training a neural network that requires only sparse depth measurements and corresponding…

Object Detection on Single Monocular Images through Canonical Correlation Analysis

This report proposes a two-dimensional CCA (canonical correlation analysis) framework that fuses monocular images with corresponding predicted depth images for basic computer vision tasks such as image classification and object detection, and finds that the proposed framework performs better when taking predicted depth images as inputs with the model trained on ground-truth depth.

References

SHOWING 1-10 OF 51 REFERENCES

Deep Depth Completion of a Single RGB-D Image

A deep network is trained that takes an RGB image as input and predicts dense surface normals and occlusion boundaries, then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation.

Deeper Depth Prediction with Fully Convolutional Residual Networks

A fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps is proposed and a novel way to efficiently learn feature map up-sampling within the network is presented.

Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image

  • Fangchang Ma, S. Karaman
  • Computer Science
    2018 IEEE International Conference on Robotics and Automation (ICRA)
  • 2018
The use of a single deep regression network to learn directly from the RGB-D raw data is proposed, and the impact of number of depth samples on prediction accuracy is explored, to attain a higher level of robustness and accuracy.
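Studies like the one above evaluate how prediction accuracy varies with the number of depth samples, which requires simulating sparse input from dense ground truth. A minimal sketch of that common benchmark protocol follows; the function name and the zero-as-missing convention are assumptions of this illustration, not taken from the paper:

```python
import numpy as np

def sample_sparse_depth(dense_depth, num_samples, seed=0):
    """Simulate a sparse depth map by keeping num_samples random valid
    pixels of a dense ground-truth depth map and zeroing the rest
    (zero marks 'no measurement', a common benchmark convention)."""
    rng = np.random.default_rng(seed)
    valid = np.flatnonzero(dense_depth > 0)            # measurable pixels
    keep = rng.choice(valid, size=min(num_samples, valid.size),
                      replace=False)
    sparse = np.zeros_like(dense_depth)
    sparse.flat[keep] = dense_depth.flat[keep]
    return sparse

# Example: keep 200 of 48*64 ground-truth depths
depth = np.random.default_rng(1).uniform(1.0, 10.0, size=(48, 64))
sparse = sample_sparse_depth(depth, 200)
print((sparse > 0).sum())                              # 200
```

Sweeping `num_samples` and plotting the resulting error is how such ablations over sample count are typically produced.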

DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion

A novel architecture that seeks to pull contextual cues separately from the intensity image and the depth features and then fuse them later in the network is proposed, which effectively exploits the relationship between the two modalities and produces accurate results while respecting salient image structures.

Plug-and-Play: Improve Depth Estimation via Sparse Data Propagation

A novel plug-and-play (PnP) module for improving depth prediction with taking arbitrary patterns of sparse depths as input, which requires no additional training and can be applied to practical applications such as leveraging both RGB and sparse LiDAR points to robustly estimate dense depth map.

CNN-SLAM: Real-Time Dense Monocular SLAM with Learned Depth Prediction

A method where CNN-predicted dense depth maps are naturally fused with depth measurements obtained from direct monocular SLAM, based on a scheme that privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice versa.

Unsupervised Monocular Depth Estimation with Left-Right Consistency

This paper proposes a novel training objective that enables the convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data, and produces state of the art results for monocular depth estimation on the KITTI driving dataset.

DFineNet: Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Input with RGB Guidance

This work proposes an end-to-end learning algorithm that is capable of using sparse, noisy input depth for refinement and depth completion and produces the camera pose as a byproduct, making it a great solution for autonomous systems.

Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation

This proposal efficiently learns sparse features without the need for an additional validity mask, and works with densities as low as 0.8% (an 8-layer LiDAR).
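The TLDR above highlights learning from sparse inputs without a validity mask; the mask-based alternative that such methods sidestep is normalized (mask-weighted) convolution, which averages only over observed neighbors. A minimal sketch under that general idea, with names chosen here for illustration:

```python
import numpy as np

def normalized_conv(depth, mask, k=3):
    """One step of mask-normalized k x k averaging: at each pixel,
    average only the valid neighbors (mask == True) and propagate
    the validity mask. Zero mask entries mark missing measurements."""
    pad = k // 2
    d = np.pad(depth * mask, pad)
    m = np.pad(mask.astype(float), pad)
    H, W = depth.shape
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for dy in range(k):                 # accumulate shifted windows
        for dx in range(k):
            num += d[dy:dy + H, dx:dx + W]
            den += m[dy:dy + H, dx:dx + W]
    out = np.where(den > 0, num / np.maximum(den, 1e-8), 0.0)
    return out, den > 0                 # filled depth, updated mask

# A constant depth field survives the averaging wherever neighbors exist
depth = np.full((8, 8), 5.0)
mask = np.zeros((8, 8), dtype=bool)
mask[::3, ::3] = True                   # sparse measurements
filled, new_mask = normalized_conv(depth, mask)
```

Iterating this step densifies the map; learned variants replace the uniform window with trained kernel weights.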

Dense Depth Posterior (DDP) From Single Image and Sparse Range

A deep learning system is presented to infer the posterior distribution of a dense depth map associated with an image, by exploiting sparse range measurements, for instance from a lidar, using a Conditional Prior Network.
...