Variational Autoencoder with Learned Latent Structure
@article{Connor2020VariationalAW,
  title   = {Variational Autoencoder with Learned Latent Structure},
  author  = {Marissa Connor and Gregory H. Canal and Christopher J. Rozell},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2006.10597}
}
The manifold hypothesis states that high-dimensional data can be modeled as lying on or near a low-dimensional, nonlinear manifold. Variational Autoencoders (VAEs) approximate this manifold by learning mappings from low-dimensional latent vectors to high-dimensional data while encouraging a global structure in the latent space through the use of a specified prior distribution. When this prior does not match the structure of the true data manifold, it can lead to a less accurate model of the…
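As background (this is the standard VAE evidence lower bound, not a formulation specific to this paper), the objective below shows where the specified prior $p(z)$, commonly $\mathcal{N}(0, I)$, enters, and hence where a mismatch between the prior and the true data manifold can degrade the model:

$$\mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$

The KL term pulls the aggregate posterior toward $p(z)$, so a prior whose global structure does not match the data manifold constrains the learned latent representation.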
14 Citations
Addressing the Topological Defects of Disentanglement via Distributed Operators
- Computer Science, ArXiv
- 2021
This work theoretically and empirically demonstrates the effectiveness of an alternative, more flexible approach to disentanglement that relies on distributed latent operators, potentially acting on the entire latent space.
Variational Sparse Coding with Learned Thresholding
- Computer Science, ICML
- 2022
This work proposes a new approach to variational sparse coding that learns sparse distributions by thresholding samples, avoiding the use of problematic relaxations, and shows superior performance, statistical efficiency, and gradient estimation compared to other sparse distributions.
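A minimal PyTorch sketch of the thresholding idea (illustrative only; the function name and exact parameterization are assumptions, not the paper's implementation):

```python
import torch

def soft_threshold_sample(mu, log_sigma, lam):
    """Illustrative sparse reparameterized sample: draw a Gaussian sample,
    then shrink it toward zero with a soft threshold so that many entries
    are exactly zero while gradients still flow to mu, log_sigma, and lam."""
    eps = torch.randn_like(mu)
    s = mu + torch.exp(log_sigma) * eps              # Gaussian reparameterization
    return torch.sign(s) * torch.relu(s.abs() - lam)  # soft-thresholding (shrinkage)
```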
Learning Identity-Preserving Transformations on Data Manifolds
- Computer Science, ArXiv
- 2021
This work introduces a learning strategy that does not require transformation labels and develops a method that learns the local regions where each operator is likely to be used while preserving the identity of inputs.
Topographic VAEs Learn Equivariant Capsules
- Computer Science
- 2022
This work introduces the Topographic VAE, a novel method for efficiently training deep generative models with topographically organized latent variables, and shows that such a model learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
Homomorphic Self-Supervised Learning
- Computer Science, ArXiv
- 2022
This work observes that many existing self-supervised learning algorithms can be both unified and generalized through the lens of equivariant representations, and introduces a general framework, called Homomorphic Self-Supervised Learning, showing theoretically how it may subsume the use of input augmentations given an augmentation-homomorphic feature extractor.
Robust Self-Supervised Learning with Lie Groups
- Computer Science, ArXiv
- 2022
This work proposes a framework, based on the formalism of Lie groups, for instilling a notion of how objects vary in more realistic settings, and demonstrates the promise of learning transformations to improve model robustness to distributional shifts.
Machine learning in bioprocess development: From promise to practice
- Biology, Engineering, Trends in Biotechnology
- 2022
This work demonstrates how ML methods have been applied so far in bioprocess development, especially in strain engineering and selection, bioprocess optimization, scale-up, monitoring, and control of bioprocesses.
A Geometric Perspective on Variational Autoencoders
- Computer Science, ArXiv
- 2022
The newly proposed sampling method consists in sampling from the uniform distribution derived intrinsically from the learned Riemannian latent space, and it is shown that this scheme can make a vanilla VAE competitive with, and even better than, more advanced variants on several benchmark datasets.
Pythae: Unifying Generative Autoencoders in Python - A Benchmarking Use Case
- Computer Science, ArXiv
- 2022
This work presents Pythae, a versatile open-source Python library providing both a unified implementation and a dedicated framework that allows straightforward, reproducible, and reliable use of generative autoencoder models.
Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics
- Computer Science, ArXiv
- 2022
A new decomposed dynamical system model is proposed that represents complex non-stationary and nonlinear dynamics of time-series data as a sparse combination of simpler, more interpretable components.
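Schematically, such a decomposition can be written as follows (the notation here is illustrative, not the paper's exact formulation):

$$\dot{x}(t) \;\approx\; \sum_{j} c_j(t)\, A_j\, x(t), \qquad c(t) \text{ sparse},$$

where each $A_j$ is a simple linear dynamical component and the sparse, time-varying coefficients $c_j(t)$ select which components are active at each moment.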
References
Showing 1-10 of 27 references
Representing Closed Transformation Paths in Encoded Network Latent Space
- Computer Science, AAAI
- 2020
This work incorporates a generative manifold model into the latent space of an autoencoder in order to learn the low-dimensional manifold structure from the data and adapt the latent space to accommodate this structure.
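The transport-operator form of such a manifold model can be written schematically as (notation illustrative):

$$z_1 \;\approx\; \exp\!\Big(\sum_{m=1}^{M} \Psi_m c_m\Big)\, z_0,$$

where $\exp$ is the matrix exponential, the $\Psi_m$ are learned operators, and the sparse coefficients $c_m$ select which transformations relate nearby latent points $z_0$ and $z_1$.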
The Riemannian Geometry of Deep Generative Models
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2018
The Riemannian geometry of these generated manifolds is investigated and it is shown how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point.
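The central object in this kind of analysis is the Riemannian metric that a generator $g$ pulls back onto the latent space (a standard construction):

$$G(z) \;=\; J_g(z)^{\top} J_g(z), \qquad J_g(z) = \frac{\partial g(z)}{\partial z},$$

so the length of a latent curve $\gamma$, $\int_0^1 \sqrt{\dot{\gamma}(t)^{\top} G(\gamma(t))\, \dot{\gamma}(t)}\, dt$, measures distance along the generated manifold, and parallel translation is defined with respect to this metric.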
Metrics for Deep Generative Models
- Computer Science, AISTATS
- 2018
The method yields a principled distance measure, provides a tool for visual inspection of deep generative models and an alternative to linear interpolation in latent space, and can be applied to robot movement generalization using previously learned skills.
Latent Space Oddity: on the Curvature of Deep Generative Models
- Computer Science, ICLR
- 2018
This work shows that the nonlinearity of the generator implies that the latent space gives a distorted view of the input space, characterizes this distortion with a stochastic Riemannian metric, and demonstrates that distances and interpolants are significantly improved under this metric.
Variational Diffusion Autoencoders with Random Walk Sampling
- Computer Science, ECCV
- 2020
This work proposes a principled measure for recognizing the mismatch between data and latent distributions, along with a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model.
Hyperspherical Variational Auto-Encoders
- Computer Science, UAI
- 2018
This work proposes using a von Mises-Fisher distribution instead of a Gaussian distribution for both the prior and posterior of the Variational Auto-Encoder, leading to a hyperspherical latent space.
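For reference, the von Mises-Fisher density on the unit hypersphere $\mathcal{S}^{d-1}$ has the standard form

$$q(z \mid \mu, \kappa) \;=\; \mathcal{C}_d(\kappa)\, \exp\!\big(\kappa\, \mu^{\top} z\big), \qquad \|\mu\| = \|z\| = 1, \;\; \kappa \ge 0,$$

where $\kappa$ acts as a concentration (inverse-variance-like) parameter and $\mathcal{C}_d(\kappa)$ is a normalizing constant involving modified Bessel functions.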
Variational Autoencoders with Riemannian Brownian Motion Priors
- Computer Science, ICML
- 2020
This work assumes a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, replaces the standard Gaussian prior with a Riemannian Brownian motion prior, and demonstrates that this prior significantly increases model capacity using only one additional scalar parameter.
Mixed-curvature Variational Autoencoders
- Computer Science, ICLR
- 2020
This work develops the Mixed-curvature Variational Autoencoder, an efficient way to train a VAE whose latent space is a product of constant-curvature Riemannian manifolds, where the per-component curvature can be fixed or learned.
Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders
- Computer Science, NeurIPS
- 2019
This work endows VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derives the necessary methods to work with two main Gaussian generalisations on that space.
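For reference, distances on the Poincaré ball (curvature $-1$) take the standard form

$$d_{\mathbb{B}}(x, y) \;=\; \operatorname{arcosh}\!\left(1 + 2\, \frac{\|x - y\|^2}{(1 - \|x\|^2)\,(1 - \|y\|^2)}\right),$$

which grows rapidly near the boundary of the ball and is what makes the space well suited to embedding tree-like, hierarchical structure.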
Importance Weighted Autoencoders
- Computer Science, ICLR
- 2016
This work introduces the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, and shows empirically that IWAEs learn richer latent-space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
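For reference, the $k$-sample importance-weighted bound has the standard form, reducing to the ELBO at $k = 1$ and tightening toward the log-likelihood as $k$ grows:

$$\mathcal{L}_k(x) \;=\; \mathbb{E}_{z_1, \dots, z_k \sim q_\phi(z \mid x)}\!\left[\log \frac{1}{k} \sum_{i=1}^{k} \frac{p_\theta(x, z_i)}{q_\phi(z_i \mid x)}\right], \qquad \mathcal{L}_1 \le \mathcal{L}_k \le \log p_\theta(x).$$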