Publications
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR
A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without increasing computational complexity or sacrificing test accuracy.
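A minimal sketch of the test-time procedure this framework justifies (often called MC dropout), assuming a PyTorch model; the architecture, dropout rate, and sample count below are illustrative:

```python
# MC dropout sketch: keep dropout stochastic at test time and average several
# forward passes to estimate a predictive mean and variance.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keeps Dropout active; safe here since there is no BatchNorm
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)  # predictive mean / variance

x = torch.randn(8, 16)
mean, var = mc_dropout_predict(model, x)
```

The variance across stochastic passes serves as the model-uncertainty estimate; no change to training or optimisation is needed.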
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
TLDR
A Bayesian deep learning framework is presented that combines input-dependent aleatoric uncertainty with epistemic uncertainty, making the loss more robust to noisy data and yielding new state-of-the-art results on segmentation and depth-regression benchmarks.
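A minimal sketch of the heteroscedastic (input-dependent aleatoric) regression loss this implies, assuming the network predicts a log-variance alongside each mean; names are illustrative:

```python
# Heteroscedastic regression loss sketch: 0.5*exp(-s)*||y - mu||^2 + 0.5*s,
# with s = log sigma^2 predicted per input. Noisy points get a small
# precision exp(-s) and are automatically down-weighted.
import torch

def heteroscedastic_loss(mean, log_var, target):
    precision = torch.exp(-log_var)
    return (0.5 * precision * (target - mean) ** 2 + 0.5 * log_var).mean()
```

Epistemic uncertainty is then obtained separately, e.g. with MC-dropout sampling as in the sketch above.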
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
TLDR
A principled approach to multi-task deep learning is proposed which weighs multiple loss functions by considering the homoscedastic uncertainty of each task, allowing us to simultaneously learn various quantities with different units or scales in both classification and regression settings.
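A minimal sketch of the weighting scheme, assuming per-task scalar losses; the exp(-s)·L + s form uses s = log σ² and drops constant factors, and the classification variant differs slightly:

```python
# Multi-task weighting sketch: each task gets a learnable log-variance s_i,
# and the combined loss is sum_i exp(-s_i) * L_i + s_i. High-uncertainty
# tasks are down-weighted, with the +s_i term preventing s_i from growing
# without bound.
import torch
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # s_i = log sigma_i^2

    def forward(self, task_losses):  # sequence of per-task scalar losses
        losses = torch.stack(list(task_losses))
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```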
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
TLDR
This work applies a new variational-inference-based dropout technique in LSTM and GRU models, which outperforms existing techniques and, to the best of our knowledge, improves on the single-model state of the art in language modelling with the Penn Treebank.
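A minimal sketch of the core idea: sample one dropout mask per sequence and reuse it at every timestep, rather than resampling per step (the full technique also ties masks on the recurrent connections); shapes are illustrative:

```python
# "Locked" / variational dropout sketch: a single Bernoulli mask is shared
# across the time dimension of a (seq_len, batch, features) tensor.
import torch

def locked_dropout(x, p=0.5, training=True):
    if not training or p == 0.0:
        return x
    # One mask per sequence element in the batch, broadcast over timesteps.
    mask = x.new_empty(1, x.size(1), x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask
```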
Uncertainty in Deep Learning
TLDR
This work develops tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation, and develops the theory for such tools.
Deep Bayesian Active Learning with Image Data
TLDR
This paper develops an active learning framework for high-dimensional data, a task that has so far been extremely challenging with very sparse existing literature, and demonstrates the techniques on image data, obtaining a significant improvement over existing active learning approaches.
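A minimal sketch of one acquisition function used in this setting, BALD, estimated from MC-dropout samples; the tensor layout is an assumption:

```python
# BALD sketch: mutual information between predictions and model parameters,
# estimated from K stochastic forward passes. probs has shape (K, N, C),
# holding softmax outputs for N pool points under K dropout samples.
import torch

def bald_scores(probs, eps=1e-12):
    mean_p = probs.mean(dim=0)                                   # (N, C)
    entropy_of_mean = -(mean_p * (mean_p + eps).log()).sum(-1)   # H[E_w p]
    mean_entropy = -(probs * (probs + eps).log()).sum(-1).mean(0)  # E_w H[p]
    return entropy_of_mean - mean_entropy  # label the highest-scoring points
```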
Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference
TLDR
This work presents an efficient Bayesian CNN that offers better robustness to over-fitting on small data than traditional approaches, approximating the model's intractable posterior with Bernoulli variational distributions.
Concrete Dropout
TLDR
This work proposes a new dropout variant that gives improved performance and better-calibrated uncertainties, using a continuous relaxation of dropout's discrete masks to allow automatic tuning of the dropout probability in large models and, as a result, faster experimentation cycles.
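A minimal sketch of the continuous relaxation, assuming p is a (possibly learnable) tensor; the temperature value is illustrative:

```python
# Concrete-dropout mask sketch: a sigmoid relaxation of the Bernoulli mask
# makes the dropout probability p differentiable, so it can be tuned by
# gradient descent instead of grid search.
import torch

def concrete_dropout_mask(shape, p, temperature=0.1, eps=1e-7):
    p = torch.as_tensor(p)
    u = torch.rand(shape)  # uniform noise
    logits = (torch.log(p + eps) - torch.log(1 - p + eps)
              + torch.log(u + eps) - torch.log(1 - u + eps))
    drop_prob = torch.sigmoid(logits / temperature)
    return 1.0 - drop_prob  # soft keep-mask; apply as x * mask / (1 - p)
```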
Real Time Image Saliency for Black Box Classifiers
TLDR
A masking model is trained to manipulate the classifier's scores by masking salient parts of the input image; it generalises well to unseen images and requires only a single forward pass to perform saliency detection, making it suitable for real-time systems.
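A rough, heavily simplified sketch of a masking objective in this spirit: keep the mask small and smooth, destroy the class score when the salient region is removed, and preserve it when only that region is kept. The weights and exact terms below are assumptions, not the paper's precise objective:

```python
# Simplified masking-loss sketch. mask is the masking model's output in [0, 1]
# with shape (N, 1, H, W); classifier returns logits of shape (N, C).
import torch

def masking_loss(classifier, image, mask, target_class, l_area=1.0, l_tv=1.0):
    masked_out = image * (1 - mask)  # salient region removed
    masked_in = image * mask         # only salient region kept
    p_out = classifier(masked_out).softmax(-1)[..., target_class]
    p_in = classifier(masked_in).softmax(-1)[..., target_class]
    # Total-variation penalty keeps the mask spatially smooth.
    tv = ((mask[..., 1:, :] - mask[..., :-1, :]).abs().mean()
          + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean())
    return (p_out.mean() - (p_in + 1e-12).log().mean()
            + l_area * mask.mean() + l_tv * tv)
```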
BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning
We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points.
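A brute-force sketch of the BatchBALD quantity for small batches and class counts, assuming probs holds K MC-dropout softmax outputs as a NumPy array of shape (K, N, C); the joint-entropy term enumerates class configurations, so it is illustrative only, not the paper's scalable estimator:

```python
# BatchBALD sketch: I({y_i}; w) = H[y_1..y_b] - E_w[sum_i H[y_i | w]].
import itertools
import numpy as np

def batchbald_score(probs, batch_idx, eps=1e-12):
    K, _, C = probs.shape
    sub = probs[:, batch_idx, :]                 # (K, b, C)
    b = len(batch_idx)
    pos = np.arange(b)
    # Joint predictive entropy: average each configuration's joint probability
    # over the K samples, then take the entropy.
    joint_entropy = 0.0
    for ys in itertools.product(range(C), repeat=b):
        cfg = np.asarray(ys)
        p = sub[:, pos, cfg].prod(axis=1).mean()  # mean over the K samples
        joint_entropy -= p * np.log(p + eps)
    # Expected sum of per-point conditional entropies.
    cond_entropy = -(sub * np.log(sub + eps)).sum(-1).mean(0).sum()
    return joint_entropy - cond_entropy
```

Greedy selection then grows the batch one point at a time, adding the candidate that most increases this score, which captures diversity within the batch rather than scoring points independently.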