Publications
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR: We develop a theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes.
Citations: 2,920 · Highly influential: 514
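In practice this result licenses "MC dropout": keep dropout switched on at test time and average several stochastic forward passes to approximate the predictive mean and variance. A minimal PyTorch sketch of that procedure, assuming a model whose only stochastic layers are dropout (function name and sample count are illustrative):

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: several stochastic forward passes with
    dropout left active approximate the Bayesian predictive mean
    and (epistemic) variance."""
    model.train()  # keeps dropout sampling; assumes no batch-norm layers
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)
```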
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
TLDR: We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks.
Citations: 1,450 · Highly influential: 215
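For regression, the aleatoric part of this study comes down to letting the network predict an input-dependent observation noise alongside its mean and training with an attenuated loss; predicting the log variance keeps the objective numerically stable. A minimal sketch (function name and the mean reduction are my own choices):

```python
import torch

def heteroscedastic_loss(y, mean, log_var):
    """Regression loss with learned aleatoric noise:
    0.5 * exp(-log_var) * (y - mean)^2 + 0.5 * log_var.
    Noisy points are down-weighted, while the +log_var term
    penalises predicting large noise everywhere."""
    precision = torch.exp(-log_var)
    return (0.5 * precision * (y - mean) ** 2 + 0.5 * log_var).mean()
```

Epistemic uncertainty is then estimated separately, e.g. with the MC-dropout procedure sketched above.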
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
TLDR: We apply a new variational-inference-based dropout technique to LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks.
Citations: 1,127 · Highly influential: 116
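The key difference from naive RNN dropout is that the same dropout masks are reused at every timestep of a sequence, including on the recurrent connections. A sketch of that mask tying with a PyTorch LSTMCell (the loop and mask shapes are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

def variational_lstm_forward(cell: nn.LSTMCell, xs, p=0.25):
    """Run an LSTMCell over xs (time, batch, input_size), reusing
    one input mask and one recurrent mask for the whole sequence."""
    batch = xs.shape[1]
    h = xs.new_zeros(batch, cell.hidden_size)
    c = xs.new_zeros(batch, cell.hidden_size)
    # Sample both dropout masks once per sequence, not per step.
    mask_x = torch.bernoulli(xs.new_full((batch, cell.input_size), 1 - p)) / (1 - p)
    mask_h = torch.bernoulli(xs.new_full((batch, cell.hidden_size), 1 - p)) / (1 - p)
    outputs = []
    for x_t in xs:
        h, c = cell(x_t * mask_x, (h * mask_h, c))
        outputs.append(h)
    return torch.stack(outputs)
```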
Uncertainty in Deep Learning
TLDR: In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation.
Citations: 700 · Highly influential: 113
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
TLDR: We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task.
Citations: 829 · Highly influential: 95
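The weighting scheme itself is compact: with a free parameter s_i = log(sigma_i^2) per task, the combined objective is roughly sum_i exp(-s_i) * L_i + s_i, so the task weights are learned rather than hand-tuned. A sketch of that combination (class name is mine; the paper's constant factors and its regression/classification distinction are glossed over):

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combine task losses as sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is learned per task: high-noise tasks are
    down-weighted, and the +s_i term stops every weight from
    collapsing to zero."""
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, losses):
        losses = torch.stack(list(losses))
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```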
Deep Bayesian Active Learning with Image Data
TLDR: We develop an active learning framework for high-dimensional data, a task that has so far proved extremely challenging, with very sparse existing literature.
Citations: 546 · Highly influential: 90
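One acquisition function the paper evaluates is BALD: the mutual information between a point's predicted label and the model parameters, estimated from MC-dropout samples. A sketch of that score (tensor shapes and the epsilon are my own choices):

```python
import torch

def bald_scores(probs, eps=1e-12):
    """BALD from MC-dropout samples. probs has shape
    (n_mc, n_points, n_classes). Score = predictive entropy minus
    expected per-sample entropy: high where the stochastic passes
    disagree, i.e. where the model is epistemically uncertain."""
    mean_p = probs.mean(dim=0)
    predictive_entropy = -(mean_p * (mean_p + eps).log()).sum(dim=-1)
    expected_entropy = -(probs * (probs + eps).log()).sum(dim=-1).mean(dim=0)
    return predictive_entropy - expected_entropy
```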
Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
TLDR: We present an efficient Bayesian CNN, offering better robustness to over-fitting on small datasets than traditional approaches.
Citations: 395 · Highly influential: 75
Concrete Dropout
TLDR: We propose a new dropout variant which gives improved performance and better-calibrated uncertainties in large vision models and reinforcement learning.
Citations: 246 · Highly influential: 40
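The variant replaces the hard Bernoulli dropout mask with a Concrete (continuous) relaxation, which makes the mask differentiable with respect to the dropout probability so that p can be learned by gradient descent. A sketch of the relaxed mask, assuming p is a scalar tensor in (0, 1) and omitting the paper's dropout regularisation term:

```python
import torch

def concrete_dropout(x, p, temp=0.1, eps=1e-7):
    """Soft dropout mask z in (0, 1), differentiable in p.
    As temp -> 0 the mask approaches a hard Bernoulli sample."""
    u = torch.rand_like(x)
    drop_logit = (torch.log(p + eps) - torch.log(1 - p + eps)
                  + torch.log(u + eps) - torch.log(1 - u + eps))
    z = 1.0 - torch.sigmoid(drop_logit / temp)
    return x * z / (1 - p)
```

In practice p is parameterised as the sigmoid of a free logit so it stays in (0, 1) while being optimised.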
Real Time Image Saliency for Black Box Classifiers
TLDR: In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier.
Citations: 242 · Highly influential: 33
Distributed Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models
TLDR: We introduce a novel re-parametrisation of variational inference for sparse GP regression and latent variable models that allows for an efficient distributed algorithm, enabling these models to process datasets with millions of points.
Citations: 123 · Highly influential: 18