Corpus ID: 220486988

Variational Inference with Continuously-Indexed Normalizing Flows

@inproceedings{Caterini2021VariationalIW,
  title={Variational Inference with Continuously-Indexed Normalizing Flows},
  author={Anthony L. Caterini and Robert Cornish and Dino Sejdinovic and Arnaud Doucet},
  booktitle={UAI},
  year={2021}
}
Continuously-indexed flows (CIFs) have recently achieved improvements over baseline normalizing flows in a variety of density estimation tasks. In this paper, we adapt CIFs to the task of variational inference (VI) through the framework of auxiliary VI, and demonstrate that the advantages of CIFs over baseline flows can also translate to the VI setting for both sampling from posteriors with complicated topology and performing maximum likelihood estimation in latent-variable models. 
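
As a rough sketch of the auxiliary-VI framing mentioned in the abstract (the notation below is assumed for illustration, not quoted from the paper): with a latent variable z, an auxiliary index u, a joint variational distribution q(z, u | x), and an auxiliary model r(u | x, z), the augmented evidence lower bound is

    \log p(x) \;\ge\; \mathbb{E}_{q(z,u \mid x)}\big[\log p(x,z) + \log r(u \mid x,z) - \log q(z,u \mid x)\big],

which sits below the ordinary ELBO on the marginal q(z | x) by the gap \mathbb{E}_{q(z \mid x)}\,\mathrm{KL}\big(q(u \mid x,z)\,\|\,r(u \mid x,z)\big); the payoff is that the augmented family can have a much richer marginal q(z | x) than a single bijective flow.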

Citations of this paper

Continuous Latent Process Flows

TLDR
CLPF is a principled architecture that decodes continuous latent processes into continuous observable processes using a time-dependent normalizing flow driven by a stochastic differential equation; a novel piecewise construction of the variational posterior process then yields the corresponding variational lower bound via importance weighting of trajectories.

Conditional Deep Inverse Rosenblatt Transports

TLDR
A novel offline-online method that mitigates the computational burden of characterizing conditional beliefs in statistical learning, together with novel heuristics for reordering and/or reparametrizing the variables to enhance the approximation power of tensor trains (TT).

Discretely Indexed Flows

TLDR
DIFs are built as an extension of normalizing flows in which the deterministic transport becomes stochastic and, more precisely, discretely indexed; they are better suited for capturing distributions with discontinuities, sharp edges, and fine details.
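
For intuition only, and not as a claim about the exact DIF parameterization: if a base sample z ~ p_Z is pushed through one of several bijections T_k chosen with probability p(k | z), the marginal density of x = T_k(z) is

    p_X(x) \;=\; \sum_k p\big(k \mid T_k^{-1}(x)\big)\, p_Z\big(T_k^{-1}(x)\big)\, \big|\det J_{T_k^{-1}}(x)\big|,

a finite mixture of flow densities, which is one way a stochastic, discretely indexed transport can capture discontinuities and sharp edges that a single smooth bijection struggles to represent.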

On Incorporating Inductive Biases into VAEs

TLDR
InteL-VAEs are able to directly enforce desired characteristics in generative models and to bypass the computational and encoder-design issues caused by non-Gaussian priors, while allowing for additional flexibility through training of the parametric mapping function.

Automatic variational inference with cascading flows

TLDR
Cascading flows are introduced: a new family of variational programs that can be constructed automatically from an input probabilistic program, can also be amortized automatically, and achieve much higher performance than both normalizing flows and ASVI on a large set of structured inference problems.

Deep Composition of Tensor Trains using Squared Inverse Rosenblatt Transports

  • T. Cui, S. Dolgov
  • Foundations of Computational Mathematics
  • 2021
TLDR
The proposed order-preserving functional tensor-train transport is integrated into a nested variable-transformation framework inspired by the layered structure of deep neural networks, significantly expanding the capability of tensor approximations and transport maps to handle random variables with complicated nonlinear interactions and concentrated density functions.

VAE-Sim: A Novel Molecular Similarity Measure Based on a Variational Autoencoder

TLDR
The VAE vector distances provide a novel metric for molecular similarity that is both easily and rapidly calculated.

References

Showing 1-10 of 42 references

Density estimation using Real NVP

TLDR
This work extends the space of probabilistic models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
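
As a minimal sketch of the affine coupling layer at the heart of real NVP (s_net and t_net are hypothetical stand-ins for small neural networks; this is not the authors' implementation):

    import numpy as np

    def affine_coupling_forward(x, d, s_net, t_net):
        # x: (batch, D) array; the first d coordinates pass through unchanged
        # and parameterize a scale-and-shift of the remaining D - d coordinates.
        x1, x2 = x[:, :d], x[:, d:]
        s, t = s_net(x1), t_net(x1)              # hypothetical conditioner networks
        y2 = x2 * np.exp(s) + t                  # element-wise affine transform
        log_det = s.sum(axis=1)                  # Jacobian is triangular with diagonal exp(s)
        return np.concatenate([x1, y2], axis=1), log_det

The inverse only needs x2 = (y2 - t) * np.exp(-s), so neither conditioner network is ever inverted, which is what keeps exact log-likelihood evaluation and exact sampling cheap.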

Auto-Encoding Variational Bayes

TLDR
This work introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
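
A minimal sketch of the reparameterization trick behind this estimator, assuming a Gaussian q(z | x) and a standard normal prior (encode and decode_log_lik are hypothetical callables, not the paper's code):

    import numpy as np

    rng = np.random.default_rng(0)

    def elbo_estimate(x, encode, decode_log_lik, n_samples=1):
        # encode(x) -> (mu, log_var) of q(z|x); decode_log_lik(x, z) -> log p(x|z).
        mu, log_var = encode(x)
        eps = rng.standard_normal((n_samples,) + mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps                        # z = mu + sigma * eps
        recon = np.mean([decode_log_lik(x, zi) for zi in z])        # Monte Carlo reconstruction term
        kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # closed-form KL to N(0, I)
        return recon - kl

Because z is a deterministic, differentiable function of (mu, log_var) and the noise eps, the same estimate yields low-variance gradients with respect to the encoder parameters.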

Semi-Implicit Variational Inference

TLDR
With a substantially expanded variational family and a novel optimization algorithm, SIVI is shown to closely match the accuracy of MCMC in inferring the posterior in a variety of Bayesian inference tasks.
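
Schematically (notation assumed here, not quoted from the paper), the semi-implicit family mixes an explicit conditional over an implicit mixing distribution,

    q(z \mid x) \;=\; \int q(z \mid \psi)\, q_\phi(\psi)\, \mathrm{d}\psi,

where q(z | ψ) is reparameterizable (e.g. Gaussian with parameters ψ) while q_φ(ψ) is defined only through a neural sampler; since the marginal q(z | x) has no closed form, SIVI optimizes surrogate bounds on the ELBO rather than the ELBO itself.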

Hierarchical Variational Models

TLDR
This work develops hierarchical variational models (HVMs), which augment a variational approximation with a prior on its parameters, allowing it to capture complex structure for both discrete and continuous latent variables.
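
A sketch of the hierarchical construction (notation assumed here): the variational parameters λ receive their own prior q(λ; θ), so the approximation becomes the marginal

    q(z; \theta) \;=\; \int q(z \mid \lambda)\, q(\lambda; \theta)\, \mathrm{d}\lambda,

and, because this marginal is intractable, the bound is tightened with a recursive distribution r(λ | z):

    \mathcal{L} \;=\; \mathbb{E}_{q(z,\lambda)}\big[\log p(x,z) + \log r(\lambda \mid z) - \log q(z \mid \lambda) - \log q(\lambda;\theta)\big],

structurally the same augmented-space bound sketched above for CIFs in the auxiliary-VI setting.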

Hamiltonian Variational Auto-Encoder

TLDR
It is shown how to optimally select reverse kernels in this setting and, by building upon Hamiltonian Importance Sampling (HIS), how to obtain a scheme that provides low-variance, unbiased estimators of the ELBO and its gradients using the reparameterization trick.

Variational Inference with Normalizing Flows

TLDR
It is demonstrated that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in the performance and applicability of variational inference.
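
A sketch of the flow-based posterior under standard change-of-variables bookkeeping: pushing z_0 ~ q_0 through bijections f_1, ..., f_K gives z_K with log-density

    \log q_K(z_K) \;=\; \log q_0(z_0) \;-\; \sum_{k=1}^{K} \log\Big|\det \frac{\partial f_k}{\partial z_{k-1}}\Big|,

so the ELBO is estimated by sampling z_0, accumulating the log-determinants, and evaluating log p(x, z_K); the planar flows f(z) = z + u h(w^T z + b) proposed in this paper are one family with a cheap log-determinant.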

Variational Inference using Implicit Distributions

TLDR
This paper provides a unifying review of existing algorithms, establishing connections between variational autoencoders, adversarially learned inference, operator VI, GAN-based image reconstruction, and more, and offers a framework for building new algorithms.

Neural Spline Flows

TLDR
This work proposes a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility, and demonstrates that neural spline flows improve density estimation, variational inference, and generative modeling of images.

Neural Autoregressive Flows

TLDR
It is demonstrated that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions.
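
Schematically (notation assumed here): each output coordinate is produced by a strictly monotonic neural network applied to the matching input coordinate, with that network's parameters generated by a conditioner on the preceding coordinates,

    y_t \;=\; f\big(x_t;\, c(x_{1:t-1})\big), \qquad \log\big|\det J\big| \;=\; \sum_t \log \frac{\partial y_t}{\partial x_t},

where the autoregressive structure makes the Jacobian triangular and monotonicity of f in x_t keeps each diagonal entry positive, so the transform is invertible even though the inverse generally has no closed form.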

An Auxiliary Variational Method

TLDR
This work explores the idea of using augmented variable spaces to improve on standard mean-field bounds, forming a more powerful class of approximations than any structured mean-field technique.