Corpus ID: 220936528

Geometrically Enriched Latent Spaces

@inproceedings{Arvanitidis2021GeometricallyEL,
  title={Geometrically Enriched Latent Spaces},
  author={Georgios Arvanitidis and S{\o}ren Hauberg and Bernhard Sch{\"o}lkopf},
  booktitle={AISTATS},
  year={2021}
}
A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space. Instead, we consider the ambient space to be a Riemannian manifold, which allows for encoding domain knowledge through the associated Riemannian metric. Shortest paths can then be defined accordingly in the latent space to both follow the learned manifold and respect the ambient geometry. Through careful design of the ambient metric we can ensure that shortest paths are well-behaved.
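To make the abstract's construction concrete, here is a minimal sketch, assuming a toy generator and an illustrative conformal ambient metric (the weights, names, and the metric's particular form are assumptions, not the paper's implementation): the ambient metric M is pulled back through the generator g to the latent metric G(z) = J_g(z)^T M(g(z)) J_g(z), and shortest paths minimize the resulting discretized curve energy.

```python
import jax
import jax.numpy as jnp

# Toy stand-in generator g: latent z in R^2 -> ambient x in R^3.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
W1 = 0.5 * jax.random.normal(k1, (2, 32))
W2 = 0.5 * jax.random.normal(k2, (32, 3))

def g(z):
    return jnp.tanh(z @ W1) @ W2

def ambient_metric(x, x0, alpha=10.0):
    # Illustrative conformal ambient metric: distances inflate away from
    # a region of interest around x0 (an assumption, not the paper's
    # particular design).
    return (1.0 + alpha * jnp.sum((x - x0) ** 2)) * jnp.eye(x.shape[0])

def pullback_metric(z, x0):
    # Latent metric induced by the ambient one:
    # G(z) = J_g(z)^T M(g(z)) J_g(z).
    J = jax.jacfwd(g)(z)  # (3, 2) Jacobian of the generator
    return J.T @ ambient_metric(g(z), x0) @ J

def curve_energy(zs, x0):
    # Discretized energy of a latent curve; geodesics (shortest paths)
    # are minimizers of this quantity over the curve's interior points.
    deltas = zs[1:] - zs[:-1]
    Gs = jax.vmap(lambda z: pullback_metric(z, x0))(zs[:-1])
    return jnp.sum(jnp.einsum("ni,nij,nj->n", deltas, Gs, deltas))
```

Minimizing `curve_energy` over the interior points of a discretized curve `zs` (e.g. with any gradient-based optimizer) yields an approximate shortest path that follows the learned manifold while respecting the ambient geometry.

Citations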
A prior-based approximate latent Riemannian metric
TLDR
This work proposes a surrogate conformal Riemannian metric in the latent space of a generative model that is simple, efficient and robust, and shows the applicability of the proposed methodology for data analysis in the life sciences.
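A conformal metric scales the identity by a single positive function, G(z) = λ(z) I_d. The sketch below is one hedged way to realize that structure, tying λ to a standard-normal prior so that shortest paths avoid low-density latent regions; the functional form of λ is a placeholder, not the paper's actual metric.

```python
import jax.numpy as jnp
from jax.scipy.stats import multivariate_normal

def conformal_factor(z, eps=1e-4):
    # Illustrative scale: grows where the standard-normal prior has low
    # density, so shortest paths stay in well-supported latent regions.
    logp = multivariate_normal.logpdf(z, jnp.zeros_like(z),
                                      jnp.eye(z.shape[0]))
    return 1.0 / (jnp.exp(logp) + eps)

def conformal_metric(z):
    # Conformal latent metric G(z) = lambda(z) * I_d.
    return conformal_factor(z) * jnp.eye(z.shape[0])
```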
Pulling back information geometry
TLDR
This work proposes to use the Fisher-Rao metric associated with the space of decoder distributions as a reference metric, which is pulled back to the latent space, and shows that it can achieve meaningful latent geometries for a wide range of decoding distributions for which the previous theory was not applicable.
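For intuition, consider the special case of a Gaussian decoder with fixed variance: the Fisher-Rao metric of N(m, σ²I) with respect to m is I/σ², so its pullback through the mean network has a closed form. A minimal sketch, where the mean network and σ are toy assumptions:

```python
import jax
import jax.numpy as jnp

A = jax.random.normal(jax.random.PRNGKey(1), (2, 5))  # toy weights
SIGMA = 0.1  # fixed decoder standard deviation (an assumption)

def mu(z):
    # Stand-in mean of a Gaussian decoder N(mu(z), SIGMA^2 * I).
    return jnp.sin(z @ A)

def fisher_rao_pullback(z):
    # The Fisher-Rao metric of N(m, SIGMA^2 I) with respect to m is
    # I / SIGMA^2, so pulling it back through the decoder gives
    # G(z) = J_mu(z)^T J_mu(z) / SIGMA^2.
    J = jax.jacfwd(mu)(z)
    return (J.T @ J) / SIGMA ** 2
```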
GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning
TLDR
This work imposes a latent representation of states and actions and leverages its intrinsic Riemannian geometry to measure the distance of latent samples to the data, integrating these metrics into a model-based offline optimization framework in which proximity and uncertainty can be carefully controlled.
Bayesian Quadrature on Riemannian Data Manifolds
TLDR
This work focuses on Bayesian quadrature (BQ) to numerically compute integrals over normal laws on Riemannian manifolds learned from data, and shows that by leveraging both prior knowledge and an active exploration scheme, BQ outperforms Monte Carlo methods on a wide range of integration problems.
How to train your conditional GAN: An approach using geometrically structured latent manifolds
TLDR
This paper argues that the limited diversity of vanilla cGANs is not due to a lack of capacity but a result of non-optimal training schemes, and proposes a novel training mechanism that increases both the diversity and the visual quality of the vanilla cGAN.
Discriminating Against Unrealistic Interpolations in Generative Adversarial Networks
TLDR
It is established that the discriminator can be used effectively to avoid regions of low sample quality along shortest paths and proposed a lightweight solution for improved interpolations in pre-trained GANs.
Rethinking conditional GAN training: An approach using geometrically structured latent manifolds
TLDR
This work proposes a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN, by systematically encouraging a bi-Lipschitz mapping between the latent and the output manifolds.
Geometry and Generalization: Eigenvalues as predictors of where a network will fail to generalize
TLDR
It is shown that the trace and the product of the eigenvalues of the Jacobian matrices are good predictors of the mean squared error on test points, providing a dataset-independent means of testing an autoencoder's ability to generalize to new input.
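A small sketch of computing the two statistics in question from the Jacobian of an arbitrary differentiable map at a test point; `f` stands in for a trained autoencoder and is an assumption here:

```python
import jax
import jax.numpy as jnp

def jacobian_eigen_stats(f, z):
    # Eigenvalues of J^T J, with J the Jacobian of f at the test point z;
    # their sum (trace) and product are the proposed error predictors.
    J = jax.jacfwd(f)(z)
    eigs = jnp.linalg.eigvalsh(J.T @ J)
    return eigs.sum(), jnp.prod(eigs)
```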
Learning Riemannian Manifolds for Geodesic Motion Skills
TLDR
This work proposes to learn a Riemannian manifold from human demonstrations on which geodesics are natural motion skills, and realizes this with a variational autoencoder (VAE) over the space of positions and orientations of the robot end-effector.
GeomCA: Geometric Evaluation of Data Representations
TLDR
This work presents Geometric Component Analysis (GeomCA), an algorithm that evaluates representation spaces based on their geometric and topological properties and can be applied to representations of any dimension, independently of the model that generated them.

References

Showing 1–10 of 55 references
The Riemannian Geometry of Deep Generative Models
TLDR
The Riemannian geometry of these generated manifolds is investigated and it is shown how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point.
Latent Space Oddity: on the Curvature of Deep Generative Models
TLDR
This work shows that the nonlinearity of the generator implies that the latent space gives a distorted view of the input space, characterizes this distortion by a stochastic Riemannian metric, and demonstrates that distances and interpolants are significantly improved under this metric.
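For a stochastic generator g(z) = μ(z) + σ(z) ⊙ ε with ε ~ N(0, I), the expected pullback metric decomposes as E[G(z)] = J_μᵀJ_μ + J_σᵀJ_σ; a minimal sketch with toy stand-in networks:

```python
import jax
import jax.numpy as jnp

B = jax.random.normal(jax.random.PRNGKey(2), (2, 4))  # toy weights

def mu(z):
    return jnp.tanh(z @ B)          # stand-in mean network

def sigma(z):
    return jnp.exp(-(z @ B) ** 2)   # stand-in positive std-dev network

def expected_metric(z):
    # Expected pullback metric of the stochastic generator
    # g(z) = mu(z) + sigma(z) * eps, eps ~ N(0, I):
    # E[G(z)] = J_mu^T J_mu + J_sigma^T J_sigma.
    Jm, Js = jax.jacfwd(mu)(z), jax.jacfwd(sigma)(z)
    return Jm.T @ Jm + Js.T @ Js
```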
Metrics for Probabilistic Geometries
TLDR
The geometric structure of probabilistic generative dimensionality-reduction models is investigated using the tools of Riemannian geometry, and it is shown that distances respecting the expected metric lead to more appropriate generation of new data.
Expected path length on random manifolds
TLDR
This work endows the latent space of a large class of generative models with a random Riemannian metric, which provides them with elementary geometric operators, and studies deterministic approximations and tight error bounds on expected distances.
Fast Approximate Geodesics for Deep Generative Models
TLDR
This work proposes finding shortest paths in a finite graph of samples from the aggregate approximate posterior, which can be solved exactly, at greatly reduced runtime, and without a notable loss in quality.
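A hedged sketch of the graph-based idea: connect latent samples in a k-nearest-neighbour graph and run Dijkstra, which is exact on the graph. The Euclidean edge weights and helper names are simplifying assumptions; the paper's edge weighting may differ.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def graph_geodesic(latents, src, dst, k=10):
    # Pairwise Euclidean distances between latent samples (n, d).
    diff = latents[:, None, :] - latents[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    # k-nearest-neighbour graph (self excluded); sparse edge weights.
    nbrs = np.argsort(dists, axis=1)[:, 1:k + 1]
    rows = np.repeat(np.arange(len(latents)), k)
    graph = csr_matrix((dists[rows, nbrs.ravel()], (rows, nbrs.ravel())),
                       shape=dists.shape)
    # Exact shortest path on the graph approximates the geodesic.
    d, pred = dijkstra(graph, directed=False, indices=src,
                       return_predecessors=True)
    path, node = [dst], dst
    while node != src:
        node = int(pred[node])
        path.append(node)
    return d[dst], path[::-1]
```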
Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders
TLDR
This work endows VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derives the necessary methods to work with two main Gaussian generalisations on that space.
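The Poincaré ball comes with a closed-form geodesic distance, d(x, y) = arccosh(1 + 2‖x − y‖² / ((1 − ‖x‖²)(1 − ‖y‖²))), valid for points of norm less than one; a self-contained sketch:

```python
import jax.numpy as jnp

def poincare_distance(x, y):
    # Geodesic distance in the Poincare ball model (requires |x|, |y| < 1).
    sq = lambda v: jnp.sum(v ** 2)
    return jnp.arccosh(1.0 + 2.0 * sq(x - y)
                       / ((1.0 - sq(x)) * (1.0 - sq(y))))
```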
Geodesic Clustering in Deep Generative Models
TLDR
It is demonstrated that taking the geometry of the generative model into account is sufficient to make simple clustering algorithms work well over latent representations, and an efficient algorithm is proposed for computing geodesics (shortest paths) and distances in the latent space while taking its distortion into account.
Only Bayes should learn a manifold
TLDR
This document investigates learning of the differential geometric structure of a data manifold embedded in a high-dimensional Euclidean space, and partly extends the analysis to models based on neural networks, thereby highlighting geometric and probabilistic shortcomings of current deep generative models.
Metrics for Deep Generative Models
TLDR
The method yields a principled distance measure, provides a tool for visual inspection of deep generative models, and an alternative to linear interpolation in latent space and can be applied for robot movement generalization using previously learned skills.
Learning disconnected manifolds: a no GANs land
TLDR
A no-free-lunch theorem for disconnected manifold learning is established, stating an upper bound on the precision of the targeted distribution, and a rejection sampling method based on the norm of the generator's Jacobian is derived and shown to be efficient on several generators, including BigGAN.
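A schematic of the rejection idea, not the paper's exact criterion: discard latent draws where the generator's Jacobian norm is large, since a connected generator must "jump" between the components of a disconnected target in such regions; the threshold `tau` and the choice of Frobenius norm are assumptions here.

```python
import jax
import jax.numpy as jnp

def rejection_sample(g, key, n, d, tau):
    # Keep latent draws whose generator Jacobian has a small Frobenius
    # norm; large norms flag regions between components of the target.
    kept = []
    while len(kept) < n:
        key, sub = jax.random.split(key)
        z = jax.random.normal(sub, (d,))
        if jnp.linalg.norm(jax.jacfwd(g)(z)) < tau:
            kept.append(z)
    return jnp.stack(kept)
```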