Corpus ID: 238215263

Flow Based Models For Manifold Data

@article{Zhang2021FlowBM,
  title={Flow Based Models For Manifold Data},
  author={Mingtian Zhang and Yitong Sun and Steven G. McDonagh and Chen Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.14216}
}
Flow-based generative models typically define a latent space with dimensionality identical to the observational space. In many problems, however, the data do not populate the full ambient space in which they natively reside, but instead inhabit a lower-dimensional manifold. In such scenarios, flow-based models cannot represent the data structure exactly, since their density always has support off the data manifold, potentially degrading model performance. In addition…
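To make the dimensionality mismatch concrete: a standard normalizing flow models the data density through the change-of-variables identity (a textbook formula, stated here for context rather than quoted from the paper),

  \log p_X(x) = \log p_Z\bigl(f(x)\bigr) + \log\bigl|\det J_f(x)\bigr|, \qquad f : \mathbb{R}^D \to \mathbb{R}^D,

where f is a bijection and J_f its Jacobian. Because f is invertible on all of \mathbb{R}^D, the model density p_X necessarily has full-dimensional support, so if the data concentrate on a d-dimensional manifold with d < D, some probability mass always lies off the manifold.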

Improving VAE-based Representation Learning

It is shown that by using a decoder that prefers to learn local features, the remaining global features can be captured well by the latent variables, significantly improving performance on a downstream classification task.

Towards Healing the Blindness of Score Matching

The blindness problem of score matching is discussed, and a new family of divergences is proposed that mitigates it in the context of density estimation, with improved performance reported relative to traditional approaches.
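For background, score matching fits a model q_\theta by minimizing the Fisher divergence (a standard definition, not taken from the paper),

  D_F(p \,\|\, q_\theta) = \mathbb{E}_{p(x)}\Bigl[\tfrac{1}{2}\bigl\|\nabla_x \log p(x) - \nabla_x \log q_\theta(x)\bigr\|^2\Bigr],

which depends only on local gradients of the log-density; it is therefore insensitive to how probability mass is split across well-separated regions, the "blindness" that the proposed divergences aim to mitigate.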

Conditional Injective Flows for Bayesian Imaging

C-Trumpets are proposed: conditional injective flows specifically designed for imaging problems, which greatly diminish the challenges of ill-posedness, nonlinearity, model mismatch, and noise, and which enable fast approximation of point estimates such as the MMSE or MAP as well as physically meaningful uncertainty quantification.
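For context, injective-flow priors of this kind are typically applied to inverse problems of the form y = A(x) + \varepsilon; with a learned prior p_\theta(x), the MAP point estimate mentioned above takes the generic form (a sketch of the general recipe, not the paper's exact formulation)

  \hat{x}_{\mathrm{MAP}} = \arg\max_x \;\log p(y \mid x) + \log p_\theta(x),

while the MMSE estimate is instead the posterior mean \mathbb{E}[x \mid y].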

References

Showing 1–10 of 26 references.

Flows for simultaneous manifold learning and density estimation

We introduce manifold-learning flows (M-flows), a new class of generative models that simultaneously learn the data manifold and a tractable probability density on that manifold.
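The density such models assign on the manifold follows the injective change-of-variables formula: for a smooth injective decoder g : \mathbb{R}^d \to \mathbb{R}^D with Jacobian J_g, the density of x = g(z) on the manifold M = g(\mathbb{R}^d) is (a standard result, included for context)

  \log p_X\bigl(g(z)\bigr) = \log p_Z(z) - \tfrac{1}{2}\log\det\bigl(J_g(z)^{\top} J_g(z)\bigr).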

Algorithms for manifold learning

The motivation, background, and algorithms proposed for manifold learning are discussed and Isomap, Locally Linear Embedding, Laplacian Eigenmaps, Semidefinite Embeddings, and a host of variants of these algorithms are examined.
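The algorithms surveyed are available in common libraries; the following minimal sketch (assuming the scikit-learn API, not code from the paper) embeds the classic Swiss-roll dataset with three of them:

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

# 3-d points sampled from a 2-d manifold (the Swiss roll)
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

methods = {
    "Isomap": Isomap(n_neighbors=10, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2),
    "Laplacian Eigenmaps": SpectralEmbedding(n_components=2, n_neighbors=10),
}
for name, model in methods.items():
    Y = model.fit_transform(X)  # (1000, 2) coordinates on the learned manifold
    print(name, Y.shape)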

Regularized Autoencoders via Relaxed Injective Probability Flow

A generative model based on probability flows is proposed that does away with the bijectivity requirement and assumes only injectivity. This provides another perspective on regularized autoencoders (RAEs): the final objectives resemble RAEs with specific regularizers derived by lower-bounding the probability-flow objective.

Estimating the intrinsic dimension of datasets by a minimal neighborhood information

A new ID estimator is proposed that uses only the distances to the first and second nearest neighbors of each point in the sample, which reduces the effects of curvature and density variation as well as the computational cost.
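A minimal sketch of that two-nearest-neighbor idea (the "TwoNN" estimator): under a locally uniform density, the ratio \mu_i = r_2(i)/r_1(i) of each point's second- to first-nearest-neighbor distance is Pareto-distributed with shape d, giving the maximum-likelihood estimate \hat d = N / \sum_i \ln \mu_i. The code below is an illustrative simplification (the paper additionally discards the largest ratios before fitting), not the authors' implementation:

import numpy as np
from scipy.spatial import cKDTree

def twonn_id(X: np.ndarray) -> float:
    tree = cKDTree(X)
    dists, _ = tree.query(X, k=3)   # columns: self, 1st neighbor, 2nd neighbor
    mu = dists[:, 2] / dists[:, 1]  # ratio of 2nd- to 1st-neighbor distance
    return len(X) / np.sum(np.log(mu))

rng = np.random.default_rng(0)
Z = rng.normal(size=(5000, 3))           # intrinsically 3-d data ...
X = np.hstack([Z, np.zeros((5000, 7))])  # ... embedded in 10-d ambient space
print(twonn_id(X))                       # prints a value close to 3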

Glow: Generative Flow with Invertible 1x1 Convolutions

Glow, a simple type of generative flow using an invertible 1x1 convolution, is proposed, demonstrating that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images.
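A minimal numpy sketch (not the official Glow code) of why this layer is cheap to invert and to differentiate through: every spatial position's channel vector is multiplied by the same c-by-c matrix W, so the log-determinant of the full transform is h * w * log|det W| and the inverse is simply a 1x1 convolution with W^{-1}:

import numpy as np

rng = np.random.default_rng(0)
c, h, w = 4, 8, 8
W = rng.normal(size=(c, c))  # invertible with probability one

x = rng.normal(size=(h, w, c))
z = x @ W.T                                # forward 1x1 convolution
log_det = h * w * np.linalg.slogdet(W)[1]  # log|det| of the full h*w*c map
x_rec = z @ np.linalg.inv(W).T             # inverse pass

print(np.allclose(x, x_rec), log_det)      # True, plus the log-det value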

Normalizing Flows for Probabilistic Modeling and Inference

This review places special emphasis on the fundamental principles of flow design, discusses foundational topics such as expressive power and computational trade-offs, and summarizes the use of flows for tasks such as generative modeling, approximate inference, and supervised learning.
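The core recipe the review covers can be sketched in a few lines: compose simple invertible maps and accumulate their log|det J| terms, so that \log p(x) = \log p_Z(z) + \sum_i \log|\det J_i|. The snippet below uses illustrative element-wise affine layers (assumed names, not code from the review):

import numpy as np

def affine_forward(x, log_scale, shift):
    z = x * np.exp(log_scale) + shift
    return z, np.sum(log_scale)  # diagonal Jacobian: log|det| = sum(log_scale)

def log_prob(x, layers):
    total = 0.0
    for log_scale, shift in layers:  # push x through each invertible layer
        x, ld = affine_forward(x, log_scale, shift)
        total += ld
    base = -0.5 * (x @ x + len(x) * np.log(2 * np.pi))  # standard-normal base
    return base + total

layers = [(np.full(2, 0.1), np.zeros(2)), (np.full(2, -0.2), np.ones(2))]
print(log_prob(np.array([0.5, -1.0]), layers))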

Intrinsic dimensionality estimation of submanifolds in R^d

The proposed method to estimate the intrinsic dimensionality of a submanifold M in R^d from random samples is based on the convergence rates of a certain U-statistic on the manifold and is compared to two standard estimators on several artificial as well as real data sets.

Determining Intrinsic Dimension and Entropy of High-Dimensional Shape Spaces

J. Costa and A. Hero. Statistics and Analysis of Shapes, 2006.

This chapter provides proofs of strong consistency for estimators of dimension and entropy based on the lengths of the geodesic minimal spanning tree (GMST) and the k-nearest-neighbor (k-NN) graph, under weak assumptions of compactness of the manifold and boundedness of the Lebesgue sampling density supported on the manifold.
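The scaling idea behind these graph-based estimators, sketched roughly (for edge-weight exponent \gamma = 1; not the chapter's precise statement): the GMST length over n samples from a d-dimensional manifold grows as

  L_n \approx C\, n^{(d-1)/d},

so regressing \log L_n on \log n gives a slope \hat a \approx (d-1)/d, hence \hat d = 1/(1-\hat a), while the intercept carries the (Rényi) entropy information.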

Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

This work presents a hierarchical VAE that, for the first time, outperforms the PixelCNN in log-likelihood on all natural image benchmarks; the generative process is visualized, showing that the VAE learns efficient hierarchical visual representations.