
Rectangular Flows for Manifold Learning

@inproceedings{Caterini2021RectangularFF,
  title={Rectangular Flows for Manifold Learning},
  author={Anthony L. Caterini and Gabriel Loaiza-Ganem and Geoff Pleiss and John P. Cunningham},
  booktitle={NeurIPS},
  year={2021}
}
Normalizing flows allow for tractable maximum likelihood estimation of their parameters but are incapable of modelling low-dimensional manifold structure in observed data. Flows which injectively map from low- to high-dimensional space show promise for fixing this issue, but the resulting likelihood-based objective becomes more challenging to evaluate. Current approaches either avoid computing the entire objective, which may induce pathological behaviour, or assume the manifold structure is known…
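
As a gloss on the setup the abstract describes: an injective flow g maps latents z in R^d to points x = g(z) in R^D with d < D, and the likelihood picks up a rectangular-Jacobian term. Below is a minimal numpy sketch of that change-of-variables computation, with a toy hand-written map standing in for the paper's learned flow:

```python
import numpy as np

# Toy injective map g: R^2 -> R^3 and its analytic Jacobian.
# Both are illustrative stand-ins, not the paper's model.
def g(z):
    return np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])

def jacobian_g(z):
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [2 * z[0], 2 * z[1]]])

def log_density_on_manifold(z):
    # log p_X(g(z)) = log p_Z(z) - 0.5 * log det(J^T J),
    # where J is the D x d (rectangular) Jacobian of g at z.
    J = jacobian_g(z)
    log_pz = -0.5 * z @ z - 0.5 * z.size * np.log(2 * np.pi)  # standard normal prior
    _, logdet = np.linalg.slogdet(J.T @ J)
    return log_pz - 0.5 * logdet

print(log_density_on_manifold(np.array([0.3, -0.7])))
```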

Principal Manifold Flows

Normalizing flows map an independent set of latent variables to their samples using a bijective transformation. Despite the exact correspondence between samples and latent variables, their high level
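
For reference, the entry above rests on the standard bijective change-of-variables identity (textbook material, not specific to this paper): for x = f(z) with z ~ p_Z,

```latex
\log p_X(x) = \log p_Z\!\left(f^{-1}(x)\right)
            + \log \left| \det \frac{\partial f^{-1}}{\partial x}(x) \right|
```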

Diagnosing and Fixing Manifold Overfitting in Deep Generative Models

TLDR
This paper proposes a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and proves that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting.
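
The two-step recipe is easy to illustrate. A hedged sketch with PCA standing in for the dimensionality-reduction step and a Gaussian mixture for the maximum-likelihood density step; the paper's actual model choices are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X = np.random.randn(1000, 50)  # stand-in for (approximately) manifold-supported data

# Step 1: dimensionality reduction (PCA here; the paper's reducer may differ).
reducer = PCA(n_components=5).fit(X)
Z = reducer.transform(X)

# Step 2: maximum-likelihood density estimation on the low-dimensional codes.
density = GaussianMixture(n_components=10).fit(Z)

# Generation: sample codes, then decode back to data space.
codes, _ = density.sample(16)
samples = reducer.inverse_transform(codes)
```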

Joint Manifold Learning and Density Estimation Using Normalizing Flows

TLDR
A single-step method for joint manifold learning and density estimation is proposed, which disentangles the transformed space obtained by normalizing flows into manifold and off-manifold parts, along with a hierarchical training approach that improves density estimation on the sub-manifold.

Neural Implicit Manifold Learning for Topology-Aware Generative Modelling

TLDR
Constrained energy-based models are introduced, which use a constrained variant of Langevin dynamics to train and sample within a learned manifold and can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
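
For context, unadjusted Langevin dynamics samples by repeatedly following the score plus Gaussian noise; the constrained variant referenced above additionally restricts iterates to the learned manifold, which this unconstrained sketch omits:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=1e-2, n_steps=500, rng=None):
    # Unadjusted Langevin dynamics: follow the score plus Gaussian noise.
    # The constrained variant would additionally keep x on the manifold.
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float).copy()
    for _ in range(n_steps):
        x += 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal(x.shape)
    return x

# Example: the score of a standard Gaussian is -x.
print(langevin_sample(lambda x: -x, np.zeros(3)))
```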

The Union of Manifolds Hypothesis and its Implications for Deep Generative Modelling

TLDR
It is shown that clustered DGMs can model multiple connected components with different intrinsic dimensions, and empirically outperform their non-clustered counterparts without increasing computational requirements.

Energy Flows: Towards Determinant-Free Training of Normalizing Flows

TLDR
This paper introduces an approach for determinant-free training of flows inspired by two-sample testing: a multidimensional extension of proper scoring rules that admits efficient estimators based on random projections and outperforms a range of alternative two-sample objectives derivable in the same framework.
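
One concrete two-sample objective of this flavour is the energy distance. A hedged numpy sketch using a simple biased V-statistic estimator; the paper's exact objective and its random-projection estimators are not reproduced here:

```python
import numpy as np

def energy_distance(X, Y):
    # D(X, Y) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'|| (biased V-statistic).
    def mean_dist(A, B):
        return np.mean(np.linalg.norm(A[:, None] - B[None], axis=-1))
    return 2 * mean_dist(X, Y) - mean_dist(X, X) - mean_dist(Y, Y)

X = np.random.randn(200, 2)
Y = np.random.randn(200, 2) + 1.0
print(energy_distance(X, Y))  # near zero for matching distributions, positive here
```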

Embrace the Gap: VAEs Perform Independent Mechanism Analysis

TLDR
It is proved that, in this regime, the optimal encoder approximately inverts the decoder, which allows VAEs to perform what has recently been termed independent mechanism analysis (IMA): an inductive bias towards decoders with column-orthogonal Jacobians, which helps recover the true latent factors.

Flowification: Everything is a Normalizing Flow

TLDR
The efficacy of linear and convolutional layers for the task of density estimation on standard datasets is investigated, and the results suggest that standard layers lack something fundamental in comparison to other normalizing flows.

Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers

In this work, we provide an exact likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which

References

Showing 1-10 of 70 references

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

TLDR
Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format, and structure of training and testing splits.

Flows for simultaneous manifold learning and density estimation

We introduce manifold-learning flows (M-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining

Masked Autoregressive Flow for Density Estimation

TLDR
This work describes an approach for increasing the flexibility of an autoregressive model based on modelling the random numbers that the model uses internally when generating data; the resulting model is called Masked Autoregressive Flow.

Density estimation using Real NVP

TLDR
This work extends the space of probabilistic models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
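
The core Real NVP mechanism is the affine coupling layer: half the coordinates pass through unchanged and parameterize an elementwise affine map of the other half, keeping the Jacobian triangular. A minimal sketch with toy scale and shift functions standing in for the learned networks:

```python
import numpy as np

def coupling_forward(x, s, t):
    # Split input; the first half passes through and conditions an
    # elementwise affine map of the second half.
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(s(x1)) + t(x1)
    log_det = np.sum(s(x1))  # Jacobian is triangular, so the determinant is cheap
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, s, t):
    y1, y2 = np.split(y, 2)
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

# Toy scale/shift "networks" (learned neural networks in the real model).
s = lambda h: 0.1 * h
t = lambda h: h ** 2

x = np.random.randn(4)
y, log_det = coupling_forward(x, s, t)
assert np.allclose(coupling_inverse(y, s, t), x)  # exact invertibility
```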

Relaxing Bijectivity Constraints with Continuously Indexed Normalising Flows

TLDR
It is proved that a flow must become arbitrarily numerically noninvertible in order to approximate the target closely, and proposed Continuously Indexed Flows (CIFs) are proposed, which replace the single bijection used by normalising flows with a continuously indexed family of bijections.

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
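
The FID itself is the squared Fréchet (2-Wasserstein) distance between two Gaussians fitted to feature embeddings of real and generated images. A sketch with random vectors standing in for Inception activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, C1, mu2, C2):
    # Squared 2-Wasserstein distance between N(mu1, C1) and N(mu2, C2).
    covmean = sqrtm(C1 @ C2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical imaginary residue
    return np.sum((mu1 - mu2) ** 2) + np.trace(C1 + C2 - 2 * covmean)

real = np.random.randn(500, 8)        # stand-ins for Inception features
fake = np.random.randn(500, 8) + 0.5
print(fid(real.mean(0), np.cov(real, rowvar=False),
          fake.mean(0), np.cov(fake, rowvar=False)))
```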

Numerical Optimization

Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows

TLDR
It is argued that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data, and a series of conformal building blocks is presented and applied in experiments to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
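
My gloss on why conformal embeddings keep densities tractable: if the embedding's rectangular Jacobian J satisfies the conformality condition, the determinant term in the injective change of variables collapses to a scalar (λ(z) denotes the conformal factor):

```latex
J^\top J = \lambda(z)^2 I_d
\quad\Longrightarrow\quad
\tfrac{1}{2}\log\det\!\left(J^\top J\right) = d\,\log\lambda(z)
```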

Implicit Normalizing Flows

Normalizing flows define a probability distribution by an explicit invertible transformation z = f(x). In this work, we present implicit normalizing flows (ImpFlows), which generalize normalizing

Bias-Free Scalable Gaussian Processes via Randomized Truncations

TLDR
This paper analyzes two common techniques, early-truncated conjugate gradients (CG) and random Fourier features (RFF), and finds that both methods introduce a systematic bias in the learned hyperparameters: CG tends to underfit while RFF tends to overfit.
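
For concreteness, random Fourier features replace an RBF kernel's Gram matrix with an explicit finite-dimensional feature map; the sketch below shows only the approximation, not the bias analysis or the randomized-truncation fix the paper proposes:

```python
import numpy as np

def rff(X, n_features, rng):
    # Feature map for the RBF kernel k(x, y) = exp(-||x - y||^2 / 2):
    # frequencies drawn from N(0, I), phases uniform on [0, 2*pi].
    W = rng.standard_normal((X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Phi = rff(X, 2000, rng)
approx = Phi @ Phi.T  # approximates the exact Gram matrix below
exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(approx - exact).max())  # error shrinks as n_features grows
```
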
...