# Rectangular Flows for Manifold Learning

```bibtex
@inproceedings{Caterini2021RectangularFF,
  title     = {Rectangular Flows for Manifold Learning},
  author    = {Anthony L. Caterini and Gabriel Loaiza-Ganem and Geoff Pleiss and John P. Cunningham},
  booktitle = {NeurIPS},
  year      = {2021}
}
```

Normalizing flows allow for tractable maximum likelihood estimation of their parameters but are incapable of modelling low-dimensional manifold structure in observed data. Flows which map injectively from low- to high-dimensional space show promise for fixing this issue, but the resulting likelihood-based objective becomes more challenging to evaluate. Current approaches either avoid computing the entire objective, which may induce pathological behaviour, or assume the manifold structure is known…
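For context, the "challenging" objective in question is the injective change-of-variables formula, whose log-determinant term involves a rectangular Jacobian. The toy sketch below (not the paper's code; the linear decoder `W` is a stand-in for a learned injective flow) evaluates that likelihood for a point on a low-dimensional manifold embedded in higher-dimensional space:

```python
import numpy as np

# For an injective decoder g: R^d -> R^D with rectangular Jacobian J (D x d),
#   log p_X(x) = log p_Z(z) - 0.5 * log det(J^T J),  with z = g^{-1}(x).
rng = np.random.default_rng(0)
d, D = 2, 5
W = rng.standard_normal((D, d))   # toy injective *linear* decoder g(z) = W z

z = rng.standard_normal(d)
x = W @ z                         # a point on the d-dimensional manifold in R^D

log_pz = -0.5 * (z @ z + d * np.log(2 * np.pi))   # standard-normal prior

# J = W for a linear map; the d x d Gram matrix J^T J is cheap here, but for
# deep flows in high dimensions this log-determinant is the expensive term.
sign, logdet = np.linalg.slogdet(W.T @ W)
log_px = log_pz - 0.5 * logdet
```

For a nonlinear learned decoder the Jacobian changes with `z`, which is exactly why exact evaluation of this objective is costly and why approximations to it are the subject of the paper.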

## Figures and Tables from this paper

## 13 Citations


Principal Manifold Flows

- Mathematics, ArXiv
- 2022

Normalizing flows map an independent set of latent variables to their samples using a bijective transformation. Despite the exact correspondence between samples and latent variables, their high level…

Diagnosing and Fixing Manifold Overfitting in Deep Generative Models

- Computer Science, ArXiv
- 2022

This paper proposes a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and proves that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting.

Joint Manifold Learning and Density Estimation Using Normalizing Flows

- Computer Science, ArXiv
- 2022

A single-step method for joint manifold learning and density estimation is proposed, disentangling the transformed space obtained by normalizing flows into manifold and off-manifold parts, together with a hierarchical training approach to improve density estimation on the sub-manifold.

Neural Implicit Manifold Learning for Topology-Aware Generative Modelling

- Computer Science, ArXiv
- 2022

Constrained energy-based models are introduced, which use a constrained variant of Langevin dynamics to train and sample within a learned manifold and can learn manifold-supported distributions with complex topologies more accurately than pushforward models.

The Union of Manifolds Hypothesis and its Implications for Deep Generative Modelling

- Computer Science, ArXiv
- 2022

It is shown that clustered DGMs can model multiple connected components with different intrinsic dimensions, and empirically outperform their non-clustered counterparts without increasing computational requirements.

Energy Flows: Towards Determinant-Free Training of Normalizing Flows

- Computer Science, ArXiv
- 2022

An approach for determinant-free training of flows inspired by two-sample testing is introduced, a multidimensional extension of proper scoring rules that admits efficient estimators based on random projections and that outperforms a range of alternative two-sample objectives that can be derived in this framework.

Embrace the Gap: VAEs Perform Independent Mechanism Analysis

- Computer Science, ArXiv
- 2022

It is proved that, in this regime, the optimal encoder approximately inverts the decoder, which allows VAEs to perform what has recently been termed independent mechanism analysis (IMA): it adds an inductive bias towards decoders with column-orthogonal Jacobians, which helps recover the true latent factors.

Flowification: Everything is a Normalizing Flow

- Computer Science, ArXiv
- 2022

The efficacy of linear and convolutional layers for the task of density estimation on standard datasets is investigated, and the results suggest standard layers lack something fundamental in comparison to other normalizing flows.

Closing the gap: Exact maximum likelihood training of generative autoencoders using invertible layers

- Computer Science
- 2022

In this work, we provide an exact likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which…

## References

Showing 1-10 of 76 references

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

- Computer Science, ArXiv
- 2017

Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits.

Flows for simultaneous manifold learning and density estimation

- Computer Science, NeurIPS
- 2020

We introduce manifold-learning flows (M-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining…

PyTorch: An Imperative Style, High-Performance Deep Learning Library

- Computer Science, NeurIPS
- 2019

This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.

Masked Autoregressive Flow for Density Estimation

- Mathematics, Computer Science, NIPS
- 2017

This work describes an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data, which is called Masked Autoregressive Flow.

Density estimation using Real NVP

- Computer Science, ICLR
- 2017

This work extends the space of probabilistic models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
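The real NVP transformation summarized above is built from affine coupling layers, which give exact invertibility and a triangular Jacobian for free. A minimal NumPy sketch (the scale/translation networks are stand-in fixed linear maps, purely illustrative):

```python
import numpy as np

# Affine coupling: pass x1 through unchanged, transform x2 conditioned on x1.
#   y1 = x1;  y2 = x2 * exp(s(x1)) + t(x1)
# log|det J| = sum(s(x1)), since the Jacobian is triangular.
rng = np.random.default_rng(1)
d, D = 2, 4
Ws = rng.standard_normal((D - d, d))   # stand-in "scale network"
Wt = rng.standard_normal((D - d, d))   # stand-in "translation network"

def forward(x):
    x1, x2 = x[:d], x[d:]
    s, t = Ws @ x1, Wt @ x1
    return np.concatenate([x1, x2 * np.exp(s) + t]), s.sum()

def inverse(y):
    y1, y2 = y[:d], y[d:]
    s, t = Ws @ y1, Wt @ y1            # s, t depend only on the untouched half
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])
```

Because `s` and `t` depend only on the untouched half, the inverse never needs to invert the networks themselves, which is what makes exact sampling and exact log-likelihood computation coexist.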

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

- Computer Science, NIPS
- 2017

This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
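The FID introduced in this reference is the Fréchet distance between two Gaussians fitted to real and generated Inception features. A minimal sketch of that formula (the helper name `fid` is illustrative, not from any library):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """Frechet distance between N(mu_r, sigma_r) and N(mu_g, sigma_g):
    ||mu_r - mu_g||^2 + Tr(sigma_r + sigma_g - 2 (sigma_r sigma_g)^{1/2})."""
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real   # drop tiny imaginary parts from sqrtm
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```

Identical distributions give an FID of zero; in practice the moments are estimated from Inception activations over each image set.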

Numerical Optimization

- Business, J. Oper. Res. Soc.
- 2001


Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows

- Computer Science, NeurIPS
- 2021

It is argued that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data, and a series of conformal building blocks are presented and applied in experiments to demonstrate that flows can model manifolds with tractable densities without sacrificing tractable likelihoods.

Implicit Normalizing Flows

- Computer Science, ICLR
- 2021

Normalizing flows define a probability distribution by an explicit invertible transformation z = f(x). In this work, we present implicit normalizing flows (ImpFlows), which generalize normalizing…

Bias-Free Scalable Gaussian Processes via Randomized Truncations

- Computer Science, ICML
- 2021

This paper analyzes two common techniques: early truncated conjugate gradients (CG) and random Fourier features (RFF) and finds that both methods introduce a systematic bias on the learned hyperparameters: CG tends to underfit while RFF tends to overfit.