Glow: Generative Flow with Invertible 1x1 Convolutions
@inproceedings{Kingma2018GlowGF,
  title     = {Glow: Generative Flow with Invertible 1x1 Convolutions},
  author    = {Diederik P. Kingma and Prafulla Dhariwal},
  booktitle = {NeurIPS},
  year      = {2018}
}
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1x1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model…
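The core operation named in the title is easy to sketch. Below is a minimal NumPy illustration (not the authors' released code) of an invertible 1x1 convolution: a square weight matrix W mixes the channels at every spatial position, and its contribution to the change-of-variables log-likelihood is h·w·log|det W|.

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """x: (h, w, c) activations; W: (c, c) invertible channel-mixing matrix."""
    h, w, c = x.shape
    y = x @ W.T                               # W applied to the channel vector at each pixel
    logdet = h * w * np.linalg.slogdet(W)[1]  # change-of-variables term: h*w*log|det W|
    return y, logdet

def inverse_1x1_conv(y, W):
    return y @ np.linalg.inv(W).T             # exact inverse: apply W^{-1} per pixel

# Example: an orthogonal matrix (from a QR decomposition) is a cheap invertible init.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(8, 8)))
x = rng.normal(size=(16, 16, 8))
y, logdet = invertible_1x1_conv(x, W)
assert np.allclose(inverse_1x1_conv(y, W), x)
```

The paper also describes an LU-decomposed parameterization of W that reduces the cost of the determinant term; the dense form above is just the simplest variant.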
1,456 Citations
MaCow: Masked Convolutional Generative Flow
- Computer Science, NeurIPS
- 2019
MaCow is introduced, a simple yet effective architecture of generative flow using masked convolution, which achieves significant improvements over Glow for density estimation on standard image benchmarks, considerably narrowing the gap to autoregressive models.
Generative Flow via Invertible nxn Convolution
- Computer Science, ArXiv
- 2019
A novel invertible nxn convolution approach is proposed that overcomes the limitations of the invertible 1x1 convolution while using fewer parameters than standard convolutions.
Regularized Autoencoders via Relaxed Injective Probability Flow
- Computer Science, AISTATS
- 2020
A generative model based on probability flows is proposed that does away with the bijectivity requirement and assumes only injectivity; this provides another perspective on regularized autoencoders (RAEs), since lower bounding the probability-flow objective yields final objectives resembling RAEs with specific regularizers.
Decoupling Global and Local Representations via Invertible Generative Flows
- Computer Science
- 2021
This work demonstrates that with only architectural inductive biases, a generative model with a likelihood-based objective is capable of learning decoupled representations, requiring no explicit supervision.
Emerging Convolutions for Generative Normalizing Flows
- Computer Science, ICML
- 2019
Invertible d x d convolutions are proposed that generalize the 1 x 1 convolutions of Glow; their added flexibility significantly improves the performance of generative flow models on galaxy images, CIFAR10, and ImageNet.
Flow-based Deep Generative Models
- Computer Science
- 2020
In this report, we investigate flow-based deep generative models. We first compare different generative models, especially generative adversarial networks (GANs), variational autoencoders (VAEs)…
Generative Latent Flow
- Computer Science
- 2019
In this work, we propose the Generative Latent Flow (GLF), an algorithm for generative modeling of the data distribution. GLF uses an Auto-encoder (AE) to learn latent representations of the data,…
Distilling the Knowledge from Normalizing Flows
- Computer Science, ArXiv
- 2021
A positive answer to the question whether one can distill knowledge from flow-based models to more efficient alternatives is provided by proposing a simple distillation approach and demonstrating its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis.
Distilling the Knowledge from Conditional Normalizing Flows
- Computer Science
- 2021
This work investigates whether one can distill flow-based models into more efficient alternatives and provides a positive answer by proposing a simple distillation approach and demonstrating its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis.
Structure Search for Normalizing Flows
- Computer Science, LWDA
- 2021
This work proposes a novel structure search approach based on an evolutionary optimization scheme to find conditional structures, which can improve convergence on non-image datasets and lead to smaller models.
References
Showing 1–10 of 29 references
Density estimation using Real NVP
- Computer Science, ICLR
- 2017
This work extends the space of probabilistic models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
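As context for how Glow builds on real NVP, a minimal NumPy sketch of an affine coupling layer follows (illustrative only; `scale_net` and `shift_net` are hypothetical stand-ins for the paper's learned networks). Half of the dimensions pass through unchanged and parameterize an affine transform of the other half, so both the inverse and the Jacobian log-determinant are cheap to compute.

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    """Affine coupling: x1 is untouched and parameterizes the transform of x2."""
    x1, x2 = np.split(x, 2)
    s, t = scale_net(x1), shift_net(x1)         # arbitrary nets; they are never inverted
    y2 = x2 * np.exp(s) + t
    return np.concatenate([x1, y2]), np.sum(s)  # log|det J| = sum(s): triangular Jacobian

def coupling_inverse(y, scale_net, shift_net):
    y1, y2 = np.split(y, 2)
    s, t = scale_net(y1), shift_net(y1)
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])
```

Because the coupling networks only ever run forward, they can be arbitrarily complex without affecting invertibility.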
Auto-Encoding Variational Bayes
- Computer Science, ICLR
- 2014
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Improved Variational Inference with Inverse Autoregressive Flow
- Computer Science, NIPS
- 2016
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
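A minimal sketch of one IAF transform (illustrative; `autoregressive_net` is a hypothetical stand-in for a masked network such as MADE): because each output dimension of the net depends only on earlier dimensions of z, the transform is evaluated in parallel and its Jacobian is triangular.

```python
import numpy as np

def iaf_step(z, autoregressive_net):
    """One IAF transform; autoregressive_net outputs at index i depend only on z[:i]."""
    mu, log_sigma = autoregressive_net(z)
    z_new = mu + np.exp(log_sigma) * z   # computed in parallel over all dimensions
    logdet = np.sum(log_sigma)           # triangular Jacobian: sum of log-scales
    return z_new, logdet
```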
Generative Adversarial Nets
- Computer Science, NIPS
- 2014
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
NICE: Non-linear Independent Components Estimation
- Computer Science, Mathematics, ICLR
- 2015
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is…
Identity Mappings in Deep Residual Networks
- Computer Science, ECCV
- 2016
The propagation formulations behind the residual building blocks suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation.
Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models
- Computer Science, AAAI
- 2018
Flow-GAN is proposed, a generative adversarial network for which one can perform exact likelihood evaluation, thus supporting both adversarial and maximum likelihood training; hybrid training is demonstrated to attain high held-out likelihoods while retaining visual fidelity in the generated samples.
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
- Computer Science, ArXiv
- 2015
This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.
Image Transformer
- Computer Science, ICML
- 2018
This work generalizes a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood, and significantly increases the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Computer Science, ICLR
- 2018
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.