Corpus ID: 71134

What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

@inproceedings{Kendall2017WhatUD,
  title={What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?},
  author={Alex Kendall and Yarin Gal},
  booktitle={NIPS},
  year={2017}
}
There are two major types of uncertainty one can model. […] For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.
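The learned-attenuation idea in the abstract can be made concrete with a short sketch. Assuming a regression network that predicts, for each sample, both a point estimate and a log-variance s = log σ², the loss below down-weights the residuals of samples the network declares noisy, while the 0.5·s term penalises declaring everything noisy (a minimal NumPy sketch; the function name and reduction are illustrative, not taken from the paper):

```python
import numpy as np

def attenuated_regression_loss(y_true, y_pred, log_var):
    """Heteroscedastic regression loss with learned attenuation.

    L = 0.5 * exp(-s) * (y_true - y_pred)**2 + 0.5 * s,  with s = log(sigma^2).
    A large predicted noise level (sigma) down-weights the squared residual,
    while the 0.5 * s term penalises predicting high noise everywhere.
    """
    s = np.asarray(log_var)
    residual_sq = (np.asarray(y_true) - np.asarray(y_pred)) ** 2
    return float(np.mean(0.5 * np.exp(-s) * residual_sq + 0.5 * s))
```

For a fixed residual r, the loss is minimised at s = log r², so the network can attenuate genuinely noisy samples instead of memorising them.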

Citations of this paper

Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
TLDR: This work proposes a comprehensive evaluation framework for scalable epistemic uncertainty estimation methods in deep learning and applies it to provide the first properly extensive and conclusive comparison of the two current state-of-the-art scalable methods: ensembling and MC-dropout.
Bayesian Uncertainty Quantification with Synthetic Data
TLDR: This paper conducts two sets of experiments to investigate the influence of distance, occlusion, clouds, rain, and puddles on the estimated uncertainty in the segmentation of road scenes, and finds that the estimated aleatoric uncertainty from Bayesian deep models can be reduced with more training data.
A General Framework for Uncertainty Estimation in Deep Learning
TLDR: This work proposes a novel framework for uncertainty estimation of neural networks, based on Bayesian belief networks and Monte-Carlo sampling, which outperforms previous methods by up to 23% in accuracy and has several desirable properties.
Learning the Distribution: A Unified Distillation Paradigm for Fast Uncertainty Estimation in Computer Vision
TLDR: A unified distillation paradigm is proposed for learning the conditional predictive distribution of a pre-trained dropout model, enabling fast estimation of both aleatoric and epistemic uncertainty at the same time.
Why Use Uncertainty for Self-supervised MVS? (2021)
TLDR: The effect of uncertain supervision signals modeled by epistemic uncertainty is rethought as an attempt to increase the certain supervision signals in self-supervision, which extensive experiments prove to be effective.
Bayesian Deep Learning and Uncertainty in Computer Vision
TLDR: This work stresses the importance of model calibration in the context of autonomous driving, which strengthens the reliability of the estimated uncertainty, and proposes a distillation technique based on the Dirichlet distribution that allows estimating the uncertainties in real time.
Uncertainty Estimation in Bayesian Neural Networks And Links to Interpretability
TLDR: This work proposes a method to visualise the contribution of individual features to predictive uncertainty, epistemic uncertainty (from the model weights), and aleatoric uncertainty (inherent in the input), using the CIFAR10 and ISIC2018 skin lesion diagnosis datasets.
Scalable Uncertainty for Computer Vision With Functional Variational Inference
TLDR: This work leverages the formulation of variational inference in function space, where Gaussian Processes are associated with both Bayesian CNN priors and the variational family, and provides sufficient conditions for constructing regression loss functions whose probabilistic counterparts are compatible with aleatoric uncertainty quantification.
BayesOD: A Bayesian Approach for Uncertainty Estimation in Deep Object Detectors
TLDR: Experiments performed on four common object detection datasets show that BayesOD provides uncertainty estimates that are better correlated with the accuracy of detections, manifesting as a significant reduction of 9.77%-13.13% on the minimum Gaussian uncertainty error metric.
Deep Directional Statistics: Pose Estimation with Uncertainty Quantification
TLDR: A novel probabilistic deep learning model is proposed for the task of angular regression, using von Mises distributions to predict a distribution over the object pose angle; on a number of challenging pose estimation datasets the model is shown to produce calibrated probability predictions and competitive or superior point estimates compared to the current state of the art.
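Several of the citing papers above (notably the ensembling vs. MC-dropout comparison) estimate epistemic uncertainty by sampling stochastic forward passes at test time. A minimal sketch of MC-dropout prediction, assuming a `stochastic_model` callable that keeps its dropout masks random at inference (both names are illustrative, not from any cited codebase):

```python
import numpy as np

def mc_dropout_predict(stochastic_model, x, n_samples=20, rng=None):
    """Monte-Carlo dropout at test time.

    Runs the model n_samples times with dropout kept stochastic; the sample
    mean is the prediction and the sample variance approximates the
    epistemic (model) uncertainty.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    samples = np.stack([stochastic_model(x, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```

An ensemble is obtained from the same routine by iterating over independently trained models instead of dropout samples, which is why the two methods are directly comparable.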

References

Showing 1-10 of 43 references
Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
TLDR: A practical system which is able to predict pixel-wise class labels with a measure of model uncertainty; modelling uncertainty is shown to improve segmentation performance by 2-3% across a number of state-of-the-art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation.
Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
TLDR: This work presents an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches, and approximates the model's intractable posterior with Bernoulli variational distributions.
Deeper Depth Prediction with Fully Convolutional Residual Networks
TLDR: A fully convolutional architecture, encompassing residual learning, is proposed to model the ambiguous mapping between monocular images and depth maps, and a novel way to efficiently learn feature map up-sampling within the network is presented.
Uncertainty in Deep Learning
TLDR: This work develops tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation, and develops the theory for such tools.
Deep Residual Learning for Image Recognition
TLDR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Depth Map Prediction from a Single Image using a Multi-Scale Deep Network
TLDR: This paper employs two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally, and applies a scale-invariant error to help measure depth relations rather than scale.
A Practical Bayesian Framework for Backpropagation Networks
D. MacKay, Neural Computation, 1992
TLDR: A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks that automatically embodies "Occam's razor," penalizing overflexible and overcomplex models.
Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs
TLDR: This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF).
Discrete-Continuous Depth Estimation from a Single Image
TLDR: This paper formulates monocular depth estimation as a discrete-continuous optimization problem, where the continuous variables encode the depth of the superpixels in the input image, and the discrete ones represent relationships between neighboring superpixels.
Fully Convolutional Networks for Semantic Segmentation
TLDR: It is shown that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation.