Publications
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
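At the heart of the paper's stochastic backpropagation is the reparameterisation of Gaussian latents, so that a sample becomes a differentiable function of the distribution's parameters. A minimal PyTorch sketch with toy values (in the paper, the parameters come from a recognition network conditioned on the data):

```python
import torch

# Toy variational parameters for one Gaussian latent variable.
mu = torch.tensor([0.5], requires_grad=True)
log_sigma = torch.tensor([-1.0], requires_grad=True)

# Reparameterise: z = mu + sigma * eps, with eps ~ N(0, I).
# The sample is now a deterministic, differentiable function of (mu, sigma).
eps = torch.randn(1)
z = mu + torch.exp(log_sigma) * eps

# Any downstream loss (here a stand-in) backpropagates through the sample.
loss = (z ** 2).sum()
loss.backward()
print(mu.grad, log_sigma.grad)
```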
Semi-supervised Learning with Deep Generative Models
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
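Schematically, the paper's M2 model treats the label as a partially observed latent: for unlabelled data, the classifier q_φ(y|x) marginalises the label out of the labelled-data bound. Up to the paper's sign conventions, with \(\mathcal{L}(x, y)\) denoting an evidence lower bound on \(\log p_\theta(x, y)\):

\[ \log p_\theta(x) \;\ge\; \sum_y q_\phi(y \mid x)\,\mathcal{L}(x, y) \;+\; \mathcal{H}\big(q_\phi(y \mid x)\big) \]

so the classifier is trained on unlabelled data through this entropy-regularised bound, with an explicit classification term added for labelled data.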
Variational Inference with Normalizing Flows
It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference.
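For concreteness, here is a sketch of one of the paper's planar flow steps in PyTorch; parameter shapes are illustrative, and the invertibility constraint on u and w is not enforced in this toy version:

```python
import torch

def planar_flow(z, u, w, b):
    """One planar flow step f(z) = z + u * tanh(w^T z + b) and its log|det J|."""
    lin = z @ w + b                                  # (batch,)
    f_z = z + u * torch.tanh(lin)[:, None]           # transformed samples
    psi = (1 - torch.tanh(lin) ** 2)[:, None] * w    # h'(w^T z + b) * w
    log_det = torch.log(torch.abs(1 + psi @ u))      # log |det Jacobian|
    return f_z, log_det

z = torch.randn(4, 2)                                # base posterior samples
u, w, b = torch.randn(2), torch.randn(2), torch.randn(())
f_z, log_det = planar_flow(z, u, w, b)
# Change of variables: log q(f(z)) = log q(z) - log_det
```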
DRAW: A Recurrent Neural Network For Image Generation
The Deep Recurrent Attentive Writer neural network architecture for image generation substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
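A toy version of DRAW's generative loop, with the attention mechanism omitted for brevity: a recurrent decoder accumulates additive writes on a canvas, which after the final step parameterises the image distribution. Sizes here are arbitrary.

```python
import torch
from torch import nn

T, latent_dim, hidden_dim, img_dim = 8, 10, 64, 28 * 28
decoder = nn.LSTMCell(latent_dim, hidden_dim)
write = nn.Linear(hidden_dim, img_dim)

h = torch.zeros(1, hidden_dim)
c = torch.zeros(1, hidden_dim)
canvas = torch.zeros(1, img_dim)
for _ in range(T):
    z = torch.randn(1, latent_dim)       # latent sample for this step
    h, c = decoder(z, (h, c))
    canvas = canvas + write(h)           # additive update, refined over time
image_probs = torch.sigmoid(canvas)      # final canvas -> Bernoulli means
```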
Interaction Networks for Learning about Objects, Relations and Physics
The interaction network is introduced: a model, implemented using deep neural networks, that can reason about how objects in complex systems interact, supporting dynamical predictions as well as inferences about the abstract properties of the system.
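A minimal sketch of one interaction-network step, with arbitrary toy dimensions: a relation-centric network computes per-relation effects, incoming effects are summed per receiving object, and an object-centric network predicts each object's update.

```python
import torch
from torch import nn

obj_dim, rel_dim, eff_dim = 4, 1, 8
f_rel = nn.Sequential(nn.Linear(2 * obj_dim + rel_dim, eff_dim), nn.ReLU())
f_obj = nn.Linear(obj_dim + eff_dim, obj_dim)

objects = torch.randn(3, obj_dim)                 # 3 objects
senders, receivers = torch.tensor([0, 2]), torch.tensor([1, 1])
rel_attrs = torch.randn(2, rel_dim)               # 2 directed relations

# Effect of each relation, from its sender, receiver and attributes.
effects = f_rel(torch.cat([objects[senders], objects[receivers], rel_attrs], dim=1))

# Sum incoming effects per receiver, then predict each object's update.
agg = torch.zeros(3, eff_dim).index_add_(0, receivers, effects)
next_objects = f_obj(torch.cat([objects, agg], dim=1))
```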
Conditional Neural Processes
Conditional Neural Processes are inspired by the flexibility of stochastic processes such as Gaussian processes, but are structured as neural networks and trained via gradient descent, and they scale to complex functions and large datasets.
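A toy CNP in PyTorch, with illustrative layer sizes: context pairs are encoded and mean-aggregated into a permutation-invariant representation, which a decoder combines with each query input to produce a Gaussian prediction.

```python
import torch
from torch import nn

r_dim = 32
encoder = nn.Sequential(nn.Linear(2, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
decoder = nn.Sequential(nn.Linear(1 + r_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, 2))

x_ctx, y_ctx = torch.randn(5, 1), torch.randn(5, 1)   # 5 observed points
x_tgt = torch.linspace(-2, 2, 50)[:, None]            # query locations

# Permutation-invariant summary of the context set.
r = encoder(torch.cat([x_ctx, y_ctx], dim=1)).mean(0, keepdim=True)
out = decoder(torch.cat([x_tgt, r.expand(len(x_tgt), -1)], dim=1))
mu, sigma = out[:, :1], nn.functional.softplus(out[:, 1:])  # Gaussian predictive
```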
Variational Intrinsic Control
This paper instantiates two policy-gradient-based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly; both provide an explicit measure of empowerment in a given state that an empowerment-maximizing agent can use.
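Schematically, the empowerment quantity being maximised is the mutual information between the option \(\Omega\) and the final state \(s_f\) reached from \(s_0\), lower-bounded (Barber–Agakov style) with a learned option-inference distribution \(q\):

\[ I(\Omega; s_f \mid s_0) \;=\; \mathcal{H}(\Omega \mid s_0) - \mathcal{H}(\Omega \mid s_0, s_f) \;\ge\; \mathcal{H}(\Omega \mid s_0) + \mathbb{E}\big[\log q(\Omega \mid s_0, s_f)\big] \]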
Normalizing Flows for Probabilistic Modeling and Inference
This review places special emphasis on the fundamental principles of flow design, discusses foundational topics such as expressive power and computational trade-offs, and summarizes the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
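The identity underlying all of these constructions is the change of variables applied through a composition \(z_K = f_K \circ \cdots \circ f_1(z_0)\) of invertible maps, starting from a base density \(q_0\):

\[ \log q_K(z_K) \;=\; \log q_0(z_0) \;-\; \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|, \qquad z_k = f_k(z_{k-1}) \]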
Imagination-Augmented Agents for Deep Reinforcement Learning
Imagination-Augmented Agents (I2As) are a novel architecture for deep reinforcement learning combining model-free and model-based aspects; they show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
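A highly simplified sketch of the I2A-style aggregation, with toy dimensions and random states standing in for the learned environment model: imagined rollouts are summarised by a rollout encoder and concatenated with a model-free path before the policy head.

```python
import torch
from torch import nn

obs_dim, act_dim, hid = 16, 4, 32
rollout_encoder = nn.LSTM(obs_dim, hid, batch_first=True)
model_free = nn.Linear(obs_dim, hid)
policy_head = nn.Linear(hid * (1 + act_dim), act_dim)

obs = torch.randn(1, obs_dim)
# One imagined length-5 trajectory per candidate first action (these would
# come from a learned environment model; faked here with random states).
rollouts = torch.randn(act_dim, 5, obs_dim)
_, (h, _) = rollout_encoder(rollouts)          # summarise each rollout
imagination = h.squeeze(0).reshape(1, -1)      # concatenated rollout codes
features = torch.cat([model_free(obs), imagination], dim=1)
logits = policy_head(features)                 # policy over actions
```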
A Probabilistic U-Net for Segmentation of Ambiguous Images
A generative segmentation model combining a U-Net with a conditional variational autoencoder that can efficiently produce an unlimited number of plausible hypotheses, and that reproduces the possible segmentation variants, and the frequencies with which they occur, significantly better than published approaches.
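A toy sketch of the sampling path, with illustrative shapes and a single convolution standing in for the full U-Net: a latent drawn from an image-conditioned prior is broadcast and fused into the final feature map, so repeated draws yield different plausible segmentations.

```python
import torch
from torch import nn

z_dim, feat_ch = 6, 8
prior_net = nn.Sequential(nn.Conv2d(1, 2 * z_dim, 3, padding=1),
                          nn.AdaptiveAvgPool2d(1))
unet_features = nn.Conv2d(1, feat_ch, 3, padding=1)  # stand-in for a U-Net
combiner = nn.Conv2d(feat_ch + z_dim, 1, 1)          # 1x1 fusion convolution

image = torch.randn(1, 1, 64, 64)
mu, log_sigma = prior_net(image).flatten(1).chunk(2, dim=1)
for _ in range(3):                                   # three hypotheses
    z = mu + torch.exp(log_sigma) * torch.randn_like(mu)
    z_map = z[:, :, None, None].expand(-1, -1, 64, 64)  # broadcast latent
    logits = combiner(torch.cat([unet_features(image), z_map], dim=1))
```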