• Publications
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
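The core mechanism behind this paper's stochastic backpropagation is the Gaussian reparameterization of the latent variables. The sketch below is a minimal illustration of that idea only; the names (mu, log_sigma, eps) and toy values are mine, not the paper's.

```python
# Minimal sketch of Gaussian reparameterization for stochastic backpropagation:
# z = mu + sigma * eps makes the latent sample a deterministic, differentiable
# function of the variational parameters (mu, sigma).
# All names and values here are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])         # variational mean
log_sigma = np.array([-0.2, 0.1])  # variational log standard deviation
sigma = np.exp(log_sigma)

eps = rng.standard_normal(mu.shape)  # noise drawn independently of the parameters
z = mu + sigma * eps                 # reparameterized latent sample

# Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)

# Because dz/dmu = 1 and dz/dlog_sigma = sigma * eps, a Monte Carlo estimate of
# the variational lower bound can be backpropagated end to end.
print(z, kl)
```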
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence.
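In its commonly stated form, the constrained variational objective weights the KL term of the evidence lower bound by a coefficient β > 1 (β = 1 recovers the standard VAE). A sketch of that objective, with notation assumed rather than quoted from the paper:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \beta\, D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big), \qquad \beta > 1.$$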
Semi-supervised Learning with Deep Generative Models
TLDR
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
Variational Inference with Normalizing Flows
TLDR
It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in the performance and applicability of variational inference.
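As an illustration of the kind of transformation such flows use, here is a minimal planar-flow step in NumPy; the function name, parameter values, and toy data are illustrative only and not the paper's code.

```python
# One planar-flow step of the form f(z) = z + u * tanh(w.z + b), together with
# the log|det Jacobian| term that keeps the transformed density tractable.
# Parameter values below are made up for illustration.
import numpy as np

def planar_flow(z, u, w, b):
    """Apply one planar transformation and return (f(z), log|det J|)."""
    a = np.dot(w, z) + b                  # scalar pre-activation
    f_z = z + u * np.tanh(a)              # transformed sample
    psi = (1.0 - np.tanh(a) ** 2) * w     # gradient of tanh(w.z + b) w.r.t. z
    log_det = np.log(np.abs(1.0 + np.dot(u, psi)))
    return f_z, log_det

rng = np.random.default_rng(0)
z0 = rng.standard_normal(2)               # sample from the simple base posterior
u, w, b = np.array([0.3, -0.5]), np.array([1.0, 0.2]), 0.1

z1, log_det = planar_flow(z0, u, w, b)
# log q1(z1) = log q0(z0) - log|det J|; stacking K such steps yields a richer
# posterior whose density remains computable.
print(z1, log_det)
```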
Normalizing Flows for Probabilistic Modeling and Inference
TLDR
This review places special emphasis on the fundamental principles of flow design, discusses foundational topics such as expressive power and computational trade-offs, and summarizes the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
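The foundational identity behind all of the flow designs such a review covers is the change-of-variables formula; written here in generic notation of my own choosing:

$$p_x(x) = p_u(u)\,\bigl|\det J_T(u)\bigr|^{-1}, \qquad x = T(u),\ u \sim p_u.$$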
The Cramer Distance as a Solution to Biased Wasserstein Gradients
TLDR
This paper describes three natural properties of probability divergences that it believes reflect requirements from machine learning (sum invariance, scale sensitivity, and unbiased sample gradients) and proposes an alternative to the Wasserstein metric, the Cramer distance, which possesses all three desired properties.
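For concreteness, below is a simple plug-in estimate of the squared one-dimensional Cramer distance from samples, using the standard energy-distance identity; the function name, variable names, and toy data are made up, and this sketch is not the paper's estimator.

```python
# Plug-in estimate of the squared 1-D Cramer distance, using the identity
# 2 * l2(P, Q)^2 = 2*E|X - Y| - E|X - X'| - E|Y - Y'|  (X ~ P, Y ~ Q).
# The toy Gaussians below are illustrative only.
import numpy as np

def cramer_sq(x, y):
    """Estimate l2(P, Q)^2 from 1-D sample arrays x ~ P and y ~ Q."""
    xy = np.abs(x[:, None] - y[None, :]).mean()
    xx = np.abs(x[:, None] - x[None, :]).mean()
    yy = np.abs(y[:, None] - y[None, :]).mean()
    return xy - 0.5 * xx - 0.5 * yy

rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, size=2000)
q_samples = rng.normal(0.5, 1.0, size=2000)
print(cramer_sq(p_samples, q_samples))  # small positive value; 0 iff P == Q
```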
Unsupervised Learning of 3D Structure from Images
TLDR
This paper learns strong deep generative models of 3D structures, and recovers these structures from 3D and 2D images via probabilistic inference, demonstrating for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
Variational Approaches for Auto-Encoding Generative Adversarial Networks
TLDR
This paper develops a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model, and describes a unified objective for optimization.
Implicit Reparameterization Gradients
TLDR
This work introduces an alternative approach to computing reparameterization gradients based on implicit differentiation and demonstrates its broader applicability by applying it to Gamma, Beta, Dirichlet, and von Mises distributions, which cannot be used with the classic reparameterization trick.
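A minimal sketch of the implicit-differentiation idea for a Gamma-distributed sample, assuming SciPy's gamma distribution and approximating the CDF's parameter derivative by a finite difference; a real implementation would use an analytic or autodiff derivative of the CDF.

```python
# Implicit reparameterization sketch for z ~ Gamma(alpha, 1):
# holding u = F(z; alpha) fixed and differentiating implicitly gives
#   dz/dalpha = -(dF/dalpha) / (dF/dz) = -(dF/dalpha) / pdf(z; alpha).
# dF/dalpha is approximated here by a central finite difference.
import numpy as np
from scipy import stats

alpha = 2.5
z = stats.gamma.rvs(alpha, random_state=0)  # a sample with no direct reparameterization

eps = 1e-4
dF_dalpha = (stats.gamma.cdf(z, alpha + eps) - stats.gamma.cdf(z, alpha - eps)) / (2 * eps)
dz_dalpha = -dF_dalpha / stats.gamma.pdf(z, alpha)

print(z, dz_dalpha)  # pathwise gradient of the sample w.r.t. the shape parameter
```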
Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step
TLDR
It is demonstrated that GANs are able to learn distributions in situations where the divergence-minimization point of view predicts they would fail, contributing to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each step.