Corpus ID: 104292426

Block Neural Autoregressive Flow

@article{DeCao2019BlockNA,
  title={Block Neural Autoregressive Flow},
  author={Nicola De Cao and Ivan Titov and Wilker Aziz},
  journal={ArXiv},
  year={2019},
  volume={abs/1904.04676}
}
Normalising flows (NFs) map two density functions via a differentiable bijection whose Jacobian determinant can be computed efficiently. Recently, as an alternative to hand-crafted bijections, Huang et al. (2018) proposed neural autoregressive flow (NAF) which is a universal approximator for density functions. Their flow is a neural network (NN) whose parameters are predicted by another NN. The latter grows quadratically with the size of the former and thus an efficient technique for…
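For context (this equation does not appear in the abstract above), the quantity all of these flows optimise comes from the change-of-variables formula; autoregressive flows such as NAF and B-NAF constrain the Jacobian to be triangular, so its log-determinant reduces to a sum over dimensions:

    \log p_X(\mathbf{x}) = \log p_Z(f(\mathbf{x})) + \log\left|\det \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}}\right|,
    \qquad
    \log\left|\det \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}}\right| = \sum_{d=1}^{D} \log\left|\frac{\partial f_d(\mathbf{x})}{\partial x_d}\right| \quad \text{for autoregressive } f.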

Citations

Cubic-Spline Flows
TLDR
This work stacks a new coupling transform based on monotonic cubic splines with LU-decomposed linear layers; the resulting flow retains an exact one-pass inverse, can be used to generate high-quality images, and closes the gap with autoregressive flows on a suite of density-estimation tasks.
Sinusoidal Flow: A Fast Invertible Autoregressive Flow
TLDR
The Sinusoidal Flow is proposed, a new type of normalising flow that inherits the expressive power and triangular Jacobian of fully autoregressive flows while being guaranteed by the Banach fixed-point theorem to remain fast to invert, thereby obviating the need for the sequential inversion typically required in fully autoregressive flows.
Gradient Boosted Flows
TLDR
Gradient Boosted Flows (GBF) model a variational posterior by successively adding new NF components by gradient boosting so that each new NF component is fit to the residuals of the previously trained components.
Unconstrained Monotonic Neural Networks
TLDR
This work proposes the Unconstrained Monotonic Neural Network (UMNN) architecture based on the insight that a function is monotonic as long as its derivative is strictly positive and demonstrates the ability of UMNNs to improve variational inference.
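The core insight is easy to check numerically. Below is a minimal NumPy sketch (a fixed toy function stands in for the unconstrained network that UMNN would learn): integrating a strictly positive integrand from a fixed lower limit yields a strictly increasing, hence invertible, function.

    import numpy as np

    def positive_derivative(t):
        # Stand-in for an unconstrained network whose output is forced to be
        # strictly positive (e.g. via an exponential); here a toy function.
        return np.exp(np.sin(t))

    def monotonic_f(x, n_steps=200):
        # f(x) = integral_0^x g(t) dt with g > 0 is strictly increasing in x
        # (trapezoidal rule used for the numerical integral).
        t = np.linspace(0.0, x, n_steps)
        g = positive_derivative(t)
        return float(np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(t)))

    xs = np.linspace(-3.0, 3.0, 7)
    ys = [monotonic_f(x) for x in xs]
    assert all(a < b for a, b in zip(ys, ys[1:]))  # monotonicity holds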
Neural Spline Flows
TLDR
This work proposes a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility, and demonstrates that neural spline flows improve density estimation, variational inference, and generative modeling of images.
ELF: Exact-Lipschitz Based Universal Density Approximator Flow
TLDR
A new Exact-Lipschitz Flow (ELF) is introduced that combines the ease of sampling from residual flows with the strong performance of autoregressive flows, and achieves state-of-the-art performance on multiple large-scale datasets.
Learning Likelihoods with Conditional Normalizing Flows
TLDR
This work provides an effective method to train continuous CNFs for binary problems and applies them to super-resolution and vessel segmentation tasks demonstrating competitive performance on standard benchmark datasets in terms of likelihood and conventional metrics.
Density estimation on low-dimensional manifolds: an inflation-deflation approach
TLDR
This paper inflates the data manifold by adding noise in the normal space, trains an NF on this inflated manifold and, finally, deflates the learned density, which allows using this method for approximating arbitrary densities on non-flat manifolds provided that the manifold dimension is known.
Quasi-Autoregressive Residual (QuAR) Flows
TLDR
This paper introduces a simplification to residual flows using a Quasi-Autoregressive (QuAR) approach, which retains many of the benefits of residual flows while dramatically reducing the compute time and memory requirements, thus making flow-based modeling approaches far more tractable and broadening their potential applicability.
Stochastic Neural Network with Kronecker Flow
TLDR
This work presents the Kronecker Flow, a generalization of the Kronecker product to invertible mappings designed for stochastic neural networks, and applies this method to variational Bayesian neural networks on predictive tasks, PAC-Bayes generalization bound estimation, and approximate Thompson sampling in contextual bandits.

References

Showing 1–10 of 36 references
Neural Autoregressive Flows
TLDR
It is demonstrated that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions.
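As a rough reminder of the construction (the deep sigmoidal flow variant; details omitted), NAF transforms each dimension with a small monotonic network whose parameters are produced by a conditioner network:

    y = \sigma^{-1}\!\left(\sum_{i} w_i \, \sigma(a_i x + b_i)\right), \qquad a_i > 0,\; w_i > 0,\; \sum_i w_i = 1,

which is strictly monotonic because it composes strictly increasing functions with positive mixture weights.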
Improved Variational Inference with Inverse Autoregressive Flow
TLDR
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
NICE: Non-linear Independent Components Estimation
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model.
FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models
TLDR
This paper uses Hutchinson's trace estimator to give a scalable unbiased estimate of the log-density and demonstrates the approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling.
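The trace estimator FFJORD relies on is simple to state; a minimal NumPy sketch (with a random matrix standing in for the Jacobian whose trace is needed):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 50))   # stand-in for a Jacobian matrix

    # Hutchinson's estimator: tr(A) = E[v^T A v] for noise v with zero mean
    # and identity covariance (Rademacher vectors used here); averaging a
    # batch of samples gives an unbiased estimate without forming tr(A) exactly.
    vs = rng.choice([-1.0, 1.0], size=(10_000, 50))
    estimate = np.mean(np.einsum('ij,jk,ik->i', vs, A, vs))

    print(np.trace(A), estimate)  # the two values should be close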
Transformation Autoregressive Networks
TLDR
This work systematically characterizes methods for density estimation, proposes multiple novel methods to model non-Markovian dependencies, and introduces a novel data-driven framework for learning a family of distributions.
MADE: Masked Autoencoder for Distribution Estimation
TLDR
This work introduces a simple modification for autoencoder neural networks that yields powerful generative models and proves that this approach is competitive with state-of-the-art tractable distribution estimators.
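The masking trick can be sketched in a few lines. The construction below is a simplified single-hidden-layer version (degree assignments and sizes are arbitrary, not the paper's exact setup): each unit gets a "degree", and any connection that would let output d see input d or later is zeroed out.

    import numpy as np

    rng = np.random.default_rng(0)
    D, H = 4, 8                              # input and hidden layer sizes

    deg_in = np.arange(1, D + 1)             # input degrees 1..D
    deg_hidden = rng.integers(1, D, size=H)  # hidden degrees in 1..D-1
    deg_out = np.arange(1, D + 1)            # output degrees 1..D

    # A hidden unit may see inputs of lower-or-equal degree; an output may
    # see hidden units of strictly lower degree.
    mask1 = (deg_hidden[:, None] >= deg_in[None, :]).astype(float)  # (H, D)
    mask2 = (deg_out[:, None] > deg_hidden[None, :]).astype(float)  # (D, H)

    # The composed connectivity is strictly lower triangular, so output d
    # depends only on inputs 1..d-1: the autoregressive property.
    connectivity = mask2 @ mask1
    assert np.allclose(np.triu(connectivity), 0.0)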
Sylvester Normalizing Flows for Variational Inference
TLDR
Sylvester normalizing flows remove the well-known single-unit bottleneck from planar flows, making a single transformation much more flexible, and are compared against planar flows and inverse autoregressive flows.
Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
TLDR
A reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction is presented, improving the conditioning of the optimization problem and speeding up convergence of stochastic gradient descent.
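The reparameterisation itself fits in one line; a small NumPy sketch (vector size chosen arbitrarily):

    import numpy as np

    v = np.random.default_rng(0).normal(size=5)  # direction parameters
    g = 2.0                                      # scalar length parameter

    # Weight normalisation: w = g * v / ||v||, so ||w|| = g regardless of v,
    # decoupling the length of the weight vector from its direction.
    w = g * v / np.linalg.norm(v)
    print(np.linalg.norm(w))  # prints ~2.0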
Emerging Convolutions for Generative Normalizing Flows
TLDR
Generalizing the 1 × 1 convolutions proposed in Glow to flexible, invertible d × d convolutions significantly improves the performance of generative flow models on galaxy images, CIFAR-10 and ImageNet.
Invertible Residual Networks
TLDR
The empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
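A toy illustration of why the Lipschitz constraint makes residual blocks invertible (a hand-picked contraction stands in for the learned residual block): when Lip(g) < 1, the inverse of y = x + g(x) can be recovered by Banach fixed-point iteration.

    import numpy as np

    def g(x):
        # Stand-in for a residual block with Lipschitz constant <= 0.5,
        # since |d/dx (0.5 * sin x)| <= 0.5.
        return 0.5 * np.sin(x)

    def forward(x):
        return x + g(x)              # residual layer y = x + g(x)

    def inverse(y, n_iter=50):
        # Fixed-point iteration x <- y - g(x); the map is a contraction when
        # Lip(g) < 1, so it converges to the unique preimage of y.
        x = y
        for _ in range(n_iter):
            x = y - g(x)
        return x

    x = np.array([0.3, -1.2, 2.0])
    print(np.max(np.abs(inverse(forward(x)) - x)))  # ~0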