Improved Techniques for Training GANs
- Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen
- NIPS
- 10 June 2016
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples of unprecedented resolution and shows that the proposed methods enable the model to learn recognizable features of ImageNet classes.
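One concrete technique from this paper is feature matching, where the generator is trained to match the discriminator's intermediate-layer statistics on real and generated batches rather than to maximize the discriminator's output directly. A minimal NumPy sketch of that loss (the batch shapes and the choice of intermediate layer are illustrative assumptions):

```python
import numpy as np

def feature_matching_loss(real_features: np.ndarray,
                          fake_features: np.ndarray) -> float:
    """L_FM = || E_x[f(x)] - E_z[f(G(z))] ||^2, where both inputs are
    (batch, dim) activations of the same intermediate discriminator
    layer f on a real and a generated minibatch."""
    diff = real_features.mean(axis=0) - fake_features.mean(axis=0)
    return float(np.sum(diff ** 2))
```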
Improved Variational Inference with Inverse Autoregressive Flow
- Diederik P. Kingma, Tim Salimans, M. Welling
- NIPS
- 15 June 2016
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
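The core of IAF is a chain of transformations whose Jacobians are triangular, so each step's log-determinant is just a sum of log-scales. A toy NumPy sketch of one such chain, using a masked linear map as a stand-in for the paper's autoregressive network (dimensions and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4                                    # latent dimensionality (illustrative)

# Strictly lower-triangular masks make the map autoregressive: output i
# depends only on inputs z_{<i}, so the Jacobian of each step is triangular
# and its log-determinant is simply sum(log sigma).
mask = np.tril(np.ones((D, D)), k=-1)
W_mu = rng.normal(size=(D, D)) * mask    # stand-in for the paper's
W_s = rng.normal(size=(D, D)) * mask     # autoregressive neural network

def iaf_step(z, log_det):
    mu = z @ W_mu.T
    sigma = np.exp(0.1 * (z @ W_s.T))    # positive scales; 0.1 keeps them tame
    z_new = sigma * z + mu               # one IAF transformation
    return z_new, log_det + np.log(sigma).sum(axis=-1)

z = rng.normal(size=(8, D))              # draws from the base posterior q(z_0)
log_det = np.zeros(8)
for _ in range(3):                       # a short chain of flow steps
    z, log_det = iaf_step(z, log_det)
# Density of the transformed samples: log q(z_T) = log q(z_0) - log_det.
```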
Evolution Strategies as a Scalable Alternative to Reinforcement Learning
- Tim Salimans, Jonathan Ho, Xi Chen, Ilya Sutskever
- ArXiv
- 10 March 2017
This work explores the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients, and highlights several advantages of ES as a blackbox optimization technique.
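The ES estimator the paper scales up perturbs the parameters with Gaussian noise and weights each perturbation by the return it achieves, requiring no backpropagation through the objective. A minimal NumPy sketch on a toy objective (the objective, population size, noise scale, and step size are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):
    # Toy fitness to maximize; any black-box (even non-differentiable)
    # function works, which is the point of ES.
    return -np.sum(theta ** 2)

theta = rng.normal(size=10)              # parameter vector (illustrative size)
sigma, alpha, n = 0.1, 0.02, 100         # noise scale, step size, population

for step in range(200):
    eps = rng.normal(size=(n, theta.size))   # population of perturbations
    returns = np.array([objective(theta + sigma * e) for e in eps])
    # Score-function estimate of grad E_eps[F(theta + sigma * eps)]:
    grad = (returns @ eps) / (n * sigma)
    theta += alpha * grad                    # plain gradient ascent
```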
Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
- Tim Salimans, Diederik P. Kingma
- NIPS
- 25 February 2016
A reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction is presented, improving the conditioning of the optimization problem and speeding up convergence of stochastic gradient descent.
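The reparameterization itself is one line: w = g · v / ||v||, with the scalar g and vector v trained in place of w. A minimal NumPy sketch (the single-neuron usage is illustrative):

```python
import numpy as np

# Weight normalization: w = g * v / ||v||. The scalar g carries the length
# of the weight vector and v its direction, decoupling the two so that SGD
# on (g, v) is better conditioned than SGD on w directly.
def weight_norm(v: np.ndarray, g: float) -> np.ndarray:
    return g * v / np.linalg.norm(v)

# Illustrative forward pass for one neuron: y = tanh(w . x + b)
v, g, b = np.array([3.0, 4.0]), 2.0, 0.0
x = np.array([1.0, -1.0])
y = np.tanh(weight_norm(v, g) @ x + b)
```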
PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications
- Tim Salimans, A. Karpathy, Xi Chen, Diederik P. Kingma
- International Conference on Learning Representations
- 19 January 2017
This work discusses the implementation of PixelCNNs, a recently proposed class of powerful generative models with tractable likelihood, and describes a number of modifications to the original model that both simplify its structure and improve its performance.
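The headline modification is the discretized logistic mixture likelihood, which assigns probability mass to each integer pixel bin via differences of logistic CDFs. A hedged NumPy sketch, working directly on integer pixel values in {0, ..., 255} rather than the paper's rescaled range (shapes and the clipping constant are illustrative):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discretized_logistic_mixture_ll(x, pis, mus, scales):
    """Log-likelihood of integer pixel values x under a mixture of
    discretized logistics. x should broadcast against the mixture axis,
    e.g. x of shape (..., 1) against parameters of shape (..., K);
    pis are mixture weights summing to 1 over the last axis."""
    # Probability mass on the integer bin [x - 0.5, x + 0.5]:
    cdf_plus = sigmoid((x + 0.5 - mus) / scales)
    cdf_minus = sigmoid((x - 0.5 - mus) / scales)
    # Edge bins absorb the tails, as in the paper:
    cdf_plus = np.where(x >= 255, 1.0, cdf_plus)
    cdf_minus = np.where(x <= 0, 0.0, cdf_minus)
    probs = np.sum(pis * (cdf_plus - cdf_minus), axis=-1)
    return np.log(np.maximum(probs, 1e-12))
```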
Variational Lossy Autoencoder
- Xi Chen, Diederik P. Kingma, P. Abbeel
- International Conference on Learning Representations
- 4 November 2016
This paper presents a simple but principled method to learn global representations by combining the Variational Autoencoder (VAE) with neural autoregressive models such as RNNs, MADE, and PixelRNN/CNN, greatly improving the generative modeling performance of VAEs.
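For reference, the combination amounts to the standard VAE evidence lower bound with an autoregressive conditional decoder; a sketch of the objective (not the paper's exact notation):

```latex
\log p(x) \ge \mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]
  - \mathrm{KL}\!\left(q(z \mid x) \,\|\, p(z)\right),
\qquad
p(x \mid z) = \prod_i p\!\left(x_i \mid x_{<i}, z\right).
```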
Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
- Chitwan Saharia, William Chan, Mohammad Norouzi
- ArXiv
- 23 May 2022
This work presents Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding, and finds that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
Dota 2 with Large Scale Deep Reinforcement Learning
- Christopher Berner, Greg Brockman, Susan Zhang
- ArXiv
- 13 December 2019
By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.
Markov Chain Monte Carlo and Variational Inference: Bridging the Gap
- Tim Salimans, Diederik P. Kingma, M. Welling
- International Conference on Machine Learning
- 23 October 2014
A new synthesis of variational inference and Monte Carlo methods in which one or more MCMC steps are incorporated into the variational approximation, resulting in a rich class of inference algorithms that bridge the gap between variational methods and MCMC.
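The bound in question treats the intermediate MCMC states as auxiliary variables with learned reverse models; roughly (notation approximate, following the paper's setup with initial draw z_0 ~ q(z_0|x), MCMC transitions q_t, and reverse models r_t):

```latex
\log p(x) \ge \mathbb{E}_{q}\Big[\log p(x, z_T) - \log q(z_0 \mid x)
  + \sum_{t=1}^{T} \big(\log r_t(z_{t-1} \mid x, z_t)
  - \log q_t(z_t \mid x, z_{t-1})\big)\Big].
```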
Fixed-Form Variational Posterior Approximation through Stochastic Linear Regression
- Tim Salimans, David A. Knowles
- ArXiv
- 28 June 2012
A general algorithm for approximating nonstandard Bayesian posterior distributions that minimizes the Kullback-Leibler divergence of an approximating distribution to the intractable posterior distribution.
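The paper's key observation, sketched here under approximate notation: for an exponential-family approximation q_eta(z) proportional to exp(eta^T T(z)), the KL-optimal natural parameters satisfy a linear-regression fixed point, with both covariances estimated by Monte Carlo (hence "stochastic linear regression"):

```latex
\hat{\eta} = \mathrm{Cov}_q\!\left[T(z)\right]^{-1}
             \mathrm{Cov}_q\!\left[T(z),\, \log p(x, z)\right].
```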
...