Bayesian Deep Basis Fitting for Depth Completion with Uncertainty

@inproceedings{qu2021bayesian,
  title={Bayesian Deep Basis Fitting for Depth Completion with Uncertainty},
  author={Chao Qu and Wenxin Liu and Camillo Jose Taylor},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}
In this work we investigate the problem of uncertainty estimation for image-guided depth completion. We extend Deep Basis Fitting (DBF) [54] for depth completion within a Bayesian evidence framework to provide calibrated per-pixel variance. The DBF approach frames the depth completion problem in terms of a network that produces a set of low-dimensional depth bases and a differentiable least squares fitting module that computes the basis weights using the sparse depths. By adopting a Bayesian…
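The least-squares step described above can be sketched numerically: a network predicts K per-pixel depth basis maps, and the basis weights are then solved in closed form from the sparse depth measurements. The function name, shapes, and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_basis_weights(bases, sparse_depth, mask):
    """Hypothetical sketch of the DBF-style fitting module.

    bases:        (K, H, W) depth basis maps predicted by a network
    sparse_depth: (H, W) depth image, valid only where mask is True
    mask:         (H, W) boolean, True at pixels with a sparse measurement
    """
    A = bases[:, mask].T                       # (M, K): bases at measured pixels
    b = sparse_depth[mask]                     # (M,):   sparse depth targets
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares basis weights
    dense = np.tensordot(w, bases, axes=1)     # (H, W): completed depth map
    return w, dense
```

In the Bayesian extension, this point estimate of `w` would be replaced by a posterior over the weights, which is what yields a per-pixel predictive variance.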
An Adaptive Framework for Learning Unsupervised Depth Completion
It is shown that regularization and co-visibility are related via the fitness (residual) of the model to the data, and that both can be unified into a single framework to improve the learning process.
Learning Topology From Synthetic Data for Unsupervised Depth Completion
We present a method for inferring dense depth maps from images and sparse depth measurements by leveraging synthetic data to learn the association of sparse point clouds with dense natural shapes.
Monitored Distillation for Positive Congruent Depth Completion
An adaptive knowledge distillation approach that yields a positive congruent training process, in which a student model avoids learning the error modes of its teachers, allowing existing models that produce putative depth maps to be leveraged.
Unsupervised Depth Completion with Calibrated Backprojection Layers
  • A. Wong, Stefano Soatto
  • Computer Science
    2021 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2021
A deep neural network architecture to infer dense depth from an image and a sparse point cloud is proposed, which outperforms the state of the art by 30% indoor and 8% outdoor when the same camera is used for training and testing.
RigNet: Repetitive Image Guided Network for Depth Completion
This work explores a repetitive design in its image-guided network to sufficiently and gradually recover depth values, and proposes an adaptive fusion mechanism to effectively aggregate multi-step depth features.
Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification
Conditional-Flow NeRF (CF-NeRF) is introduced, a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches and achieves significantly lower prediction errors and more reliable uncertainty values for synthetic novel view and depth-map estimation.
Dense Uncertainty Estimation
It is claimed that conventional deterministic neural networks for dense prediction tasks are prone to overfitting, leading to over-confident predictions that are undesirable for decision making; the work also shows how uncertainty estimation can be used for deep model calibration to achieve well-calibrated models, namely dense model calibration.
Depth Completion via Deep Basis Fitting
The proposed method replaces the final 1 × 1 convolutional layer employed in most depth completion networks with a least squares fitting module which computes weights by fitting the implicit depth bases to the given sparse depth measurements.
Depth Completion From Sparse LiDAR Data With Depth-Normal Constraints
A unified CNN framework is proposed that models the geometric constraints between depth and surface normal in a diffusion module and predicts the confidence of sparse LiDAR measurements to mitigate the impact of noise.
Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image
  • Fangchang Ma, S. Karaman
  • Computer Science
    2018 IEEE International Conference on Robotics and Automation (ICRA)
  • 2018
The use of a single deep regression network to learn directly from the RGB-D raw data is proposed, and the impact of number of depth samples on prediction accuracy is explored, to attain a higher level of robustness and accuracy.
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
This paper employs two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally, and applies a scale-invariant error to help measure depth relations rather than scale.
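The scale-invariant error mentioned above can be written down directly: it is the variance of the log-depth residuals, which is unchanged if the prediction is multiplied by a global scale factor. This is a minimal sketch of that metric, not the paper's evaluation code.

```python
import numpy as np

def scale_invariant_error(pred, target):
    """Scale-invariant log error: mean squared log residual minus the
    squared mean log residual. A global rescaling of `pred` only shifts
    every log residual by a constant, leaving this value unchanged.
    pred, target: arrays of strictly positive depths."""
    d = np.log(pred) - np.log(target)
    return (d ** 2).mean() - d.mean() ** 2
```

Because only relative (not absolute) depth relations are penalized, a prediction that is correct up to scale scores a zero error.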
Deep Depth Completion of a Single RGB-D Image
A deep network is trained that takes an RGB image as input and predicts dense surface normals and occlusion boundaries, then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation.
Dense Depth Posterior (DDP) From Single Image and Sparse Range
A deep learning system is presented to infer the posterior distribution of a dense depth map associated with an image, by exploiting sparse range measurements, for instance from a lidar, using a Conditional Prior Network.
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty is presented, which makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
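The aleatoric part of this framework is typically trained with a heteroscedastic regression loss: the network predicts both the value and its log-variance, and noisy pixels are automatically down-weighted. The sketch below assumes numpy arrays and is illustrative, not the authors' implementation.

```python
import numpy as np

def heteroscedastic_loss(pred, log_var, target):
    """Per-pixel regression loss with learned aleatoric uncertainty.

    pred:    predicted values
    log_var: predicted log-variance s = log(sigma^2), same shape as pred
    target:  ground-truth values

    The exp(-s) factor attenuates the residual at pixels the network
    declares noisy, while the +s/2 term penalizes claiming high noise
    everywhere. With log_var = 0 this reduces to half the MSE.
    """
    return np.mean(0.5 * np.exp(-log_var) * (pred - target) ** 2
                   + 0.5 * log_var)
```

Predicting the log-variance rather than the variance keeps the loss numerically stable, since no positivity constraint is needed on the network output.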
DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image
A deep learning architecture that produces accurate dense depth for the outdoor scene from a single color image and a sparse depth, which improves upon the state-of-the-art performance on KITTI depth completion benchmark.
Unsupervised Depth Completion From Visual Inertial Odometry
This work infers dense depth from camera motion and sparse depth estimated by a visual-inertial odometry system, using a predictive cross-modal criterion, akin to “self-supervision,” that measures photometric consistency across time, forward-backward pose consistency, and geometric compatibility with the sparse point cloud.
Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
This work proposes a comprehensive evaluation framework for scalable epistemic uncertainty estimation methods in deep learning and applies this framework to provide the first properly extensive and conclusive comparison of the two current state-of-the-art scalable methods: ensembling and MC-dropout.
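Both methods compared above estimate epistemic uncertainty by aggregating multiple stochastic predictions: MC-dropout keeps dropout active at test time and runs repeated forward passes, while ensembling averages over independently trained models. A generic sketch of the aggregation step, with `model(x, rng)` as an assumed stochastic callable standing in for either case:

```python
import numpy as np

def mc_predict(model, x, T=50, rng=None):
    """Aggregate T stochastic forward passes into a predictive mean and
    a per-output variance (the epistemic uncertainty estimate).

    model: callable model(x, rng) -> array; stochastic because dropout
           stays active (MC-dropout) or because members of an ensemble
           are sampled. This interface is an assumption for the sketch.
    """
    rng = rng or np.random.default_rng()
    samples = np.stack([model(x, rng) for _ in range(T)])  # (T, ...) predictions
    return samples.mean(axis=0), samples.var(axis=0)
```

The variance across passes is small where the sampled predictors agree and large where they disagree, which is exactly the behavior such evaluation frameworks test for.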