Stochastic Backpropagation and Approximate Inference in Deep Generative Models
- Danilo Jimenez Rezende, S. Mohamed, Daan Wierstra
- Computer Science, International Conference on Machine Learning
- 16 January 2014
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
- I. Higgins, L. Matthey, Alexander Lerchner
- Computer Science, International Conference on Learning…
- 4 November 2016
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
Semi-supervised Learning with Deep Generative Models
- Diederik P. Kingma, S. Mohamed, Danilo Jimenez Rezende, M. Welling
- Computer Science, NIPS
- 20 June 2014
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
Variational Inference with Normalizing Flows
- Danilo Jimenez Rezende, S. Mohamed
- Computer Science, Mathematics, International Conference on Machine Learning
- 21 May 2015
It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in the performance and applicability of variational inference.
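The simplest flow introduced in this paper, the planar flow f(z) = z + u·h(wᵀz + b), can be sketched in a few lines (a minimal illustration following the paper's notation; the function name and pure-Python vector handling are my own, not the authors' code):

```python
import math

def planar_flow(z, u, w, b):
    """One planar-flow step f(z) = z + u * tanh(w.z + b) and its log|det J|.

    z, u, w are plain Python lists (vectors), b is a scalar; the symbols
    follow Rezende & Mohamed (2015). The rank-one structure of the Jacobian
    makes the determinant O(d) instead of O(d^3).
    """
    a = sum(wi * zi for wi, zi in zip(w, z)) + b          # pre-activation w.z + b
    h = math.tanh(a)
    f = [zi + ui * h for zi, ui in zip(z, u)]             # transformed sample
    h_prime = 1.0 - h * h                                 # tanh'(a)
    # det J = 1 + h'(a) * (u.w), a rank-one update of the identity
    log_det = math.log(abs(1.0 + h_prime * sum(ui * wi for ui, wi in zip(u, w))))
    return f, log_det

# Example: at a = 0 the flow is locally the identity, but det J = 1 + u.w
f, log_det = planar_flow([0.5, -1.0], u=[0.1, 0.2], w=[1.0, 0.5], b=0.0)
```

Stacking K such steps and accumulating the log-determinants gives the flow-transformed log-density used to tighten the variational posterior.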
Normalizing Flows for Probabilistic Modeling and Inference
- G. Papamakarios, Eric T. Nalisnick, Danilo Jimenez Rezende, S. Mohamed, Balaji Lakshminarayanan
- Computer Science, Journal of Machine Learning Research
- 5 December 2019
This review places special emphasis on the fundamental principles of flow design, discusses foundational topics such as expressive power and computational trade-offs, and summarizes the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
The Cramer Distance as a Solution to Biased Wasserstein Gradients
- Marc G. Bellemare, Ivo Danihelka, R. Munos
- Computer Science, arXiv
- 30 May 2017
This paper describes three natural properties of probability divergences that it argues reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. It then proposes an alternative to the Wasserstein metric, the Cramér distance, which possesses all three desired properties.
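For one-dimensional distributions the Cramér distance is the integrated squared difference of the CDFs, ∫(F(t) − G(t))² dt, which is easy to compute between two empirical distributions (an illustrative sketch of that definition; the function name is my own, not code from the paper):

```python
def cramer_distance(xs, ys):
    """Cramér distance between two 1-D empirical distributions given as samples.

    Integrates (F(t) - G(t))^2 over t, where F and G are the empirical CDFs.
    The ECDFs are piecewise constant, so the integral is an exact finite sum
    over the intervals between consecutive sample values.
    """
    pts = sorted(set(xs) | set(ys))
    total = 0.0
    for left, right in zip(pts, pts[1:]):
        # On [left, right) both ECDFs are constant and equal to their value at left.
        F = sum(1 for v in xs if v <= left) / len(xs)
        G = sum(1 for v in ys if v <= left) / len(ys)
        total += (F - G) ** 2 * (right - left)
    return total

# Two point masses one unit apart: F - G is 1 on an interval of length 1
d = cramer_distance([0.0], [1.0])
```

Unlike the Wasserstein distance, minibatch gradient estimates of this quantity are unbiased, which is the paper's motivation for using it as a training loss.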
Implicit Reparameterization Gradients
- Michael Figurnov, S. Mohamed, A. Mnih
- Computer Science, Neural Information Processing Systems
- 1 May 2018
This work introduces an alternative approach to computing reparameterization gradients based on implicit differentiation, and demonstrates its broader applicability by applying it to the Gamma, Beta, Dirichlet, and von Mises distributions, which cannot be used with the classic reparameterization trick.
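The core identity is that for z = F⁻¹(u; θ), implicit differentiation of F(z; θ) = u gives ∂z/∂θ = −(∂F/∂θ)/(∂F/∂z), with no need to invert the CDF. A minimal sketch for the exponential distribution, whose CDF is tractable (the paper's contribution is handling harder cases like Gamma, Beta, Dirichlet, and von Mises; the function name here is my own):

```python
import math

def implicit_grad_exponential(z, lam):
    """dz/dlambda for z ~ Exp(lambda), via implicit differentiation of the CDF.

    F(z; lambda) = 1 - exp(-lambda * z), so
      dF/dlambda = z * exp(-lambda * z)
      dF/dz      = lambda * exp(-lambda * z)
    and the implicit reparameterization gradient is -(dF/dlambda) / (dF/dz),
    which simplifies to -z / lambda.
    """
    dF_dlam = z * math.exp(-lam * z)
    dF_dz = lam * math.exp(-lam * z)
    return -dF_dlam / dF_dz

g = implicit_grad_exponential(2.0, 0.5)
```

This agrees with the explicit reparameterization z = −ln(1 − u)/λ, whose derivative with respect to λ is also −z/λ, but the implicit form never requires expressing z as a function of the noise u.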
Variational Approaches for Auto-Encoding Generative Adversarial Networks
- Mihaela Rosca, Balaji Lakshminarayanan, David Warde-Farley, S. Mohamed
- Computer Science, arXiv
- 15 June 2017
This paper develops a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model, and describes a unified objective for optimization.
A Clinically Applicable Approach to Continuous Prediction of Future Acute Kidney Injury
- Nenad Tomašev, Xavier Glorot, S. Mohamed
- Medicine, Nature
- 3 July 2019
A deep learning approach is developed that predicts the risk of acute kidney injury and provides confidence assessments and a list of the clinical features most salient to each prediction, alongside predicted future trajectories for clinically relevant blood tests.
Unsupervised Learning of 3D Structure from Images
- Danilo Jimenez Rezende, S. Eslami, S. Mohamed, P. Battaglia, Max Jaderberg, N. Heess
- Computer Science, NIPS
- 1 July 2016
This paper learns strong deep generative models of 3D structures, and recovers these structures from 3D and 2D images via probabilistic inference, demonstrating for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
...