# Geometrically Enriched Latent Spaces

```bibtex
@inproceedings{Arvanitidis2021GeometricallyEL,
  title     = {Geometrically Enriched Latent Spaces},
  author    = {Georgios Arvanitidis and S{\o}ren Hauberg and Bernhard Sch{\"o}lkopf},
  booktitle = {AISTATS},
  year      = {2021}
}
```

A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space. Instead, we consider the ambient space to be a Riemannian manifold, which allows for encoding domain knowledge through the associated Riemannian metric. Shortest paths can then be defined accordingly in the latent space to both follow the learned manifold and respect the ambient geometry. Through careful design of the ambient metric we can ensure that shortest paths are well…
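The construction described in the abstract can be sketched concretely: lengths of latent curves are measured with the pull-back of the ambient Riemannian metric through the generator, G(z) = Jᵀ M(g(z)) J. The sketch below is a minimal illustration, not the paper's implementation — the toy generator, the ambient metric (which simply inflates cost away from the origin as stand-in "domain knowledge"), and all function names are assumptions.

```python
import numpy as np

def generator(z):
    # Toy generator g: R^2 -> R^3 (hypothetical; stands in for a trained decoder).
    return np.array([z[0], z[1], np.sin(z[0]) * np.cos(z[1])])

def ambient_metric(x):
    # Hypothetical ambient metric M(x): cost grows away from the origin,
    # a simple example of encoding domain knowledge in the ambient space.
    return (1.0 + np.dot(x, x)) * np.eye(3)

def jacobian(f, z, eps=1e-6):
    # Finite-difference Jacobian of f at z.
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

def pullback_metric(z):
    # Latent metric G(z) = J^T M(g(z)) J: curves measured with G both
    # follow the learned manifold and respect the ambient geometry.
    J = jacobian(generator, z)
    return J.T @ ambient_metric(generator(z)) @ J

def curve_length(z_path):
    # Discrete length of a latent path under the pulled-back metric;
    # a shortest path would minimize this quantity over curves.
    total = 0.0
    for a, b in zip(z_path[:-1], z_path[1:]):
        d = b - a
        G = pullback_metric((a + b) / 2.0)
        total += np.sqrt(d @ G @ d)
    return total

line = np.linspace([0.0, 0.0], [1.0, 1.0], 20)
print(curve_length(line))
```

Minimizing `curve_length` over discretized curves (rather than evaluating it on a straight line, as here) would yield the latent-space geodesics the paper discusses.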

## 15 Citations

GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning

- Computer Science, ArXiv
- 2021

This work imposes a latent representation of states and actions and leverages its intrinsic Riemannian geometry to measure the distance of latent samples to the data, integrating these metrics into a model-based offline optimization framework in which proximity and uncertainty can be carefully controlled.

How to train your conditional GAN: An approach using geometrically structured latent manifolds

- Computer Science, ArXiv
- 2020

This paper argues that the limited diversity of vanilla cGANs is not due to a lack of capacity but a result of non-optimal training schemes, and proposes a novel training mechanism that increases both the diversity and the visual quality of the vanilla cGAN.

Discriminating Against Unrealistic Interpolations in Generative Adversarial Networks

- Computer Science, ArXiv
- 2022

It is established that the discriminator can be used effectively to avoid regions of low sample quality along shortest paths, and a lightweight solution is proposed for improved interpolations in pre-trained GANs.

Geometry and Generalization: Eigenvalues as predictors of where a network will fail to generalize

- Computer Science, Mathematics, Foundations of Data Science
- 2022

It is shown that the trace and the product of the eigenvalues of the Jacobian matrices are good predictors of the mean squared errors on test points, providing a dataset-independent means of testing an autoencoder's ability to generalize to new inputs.

Learning Riemannian Manifolds for Geodesic Motion Skills

- Computer Science, Robotics: Science and Systems
- 2021

This work proposes to learn a Riemannian manifold from human demonstrations on which geodesics are natural motion skills, realized with a variational autoencoder (VAE) over the space of positions and orientations of the robot end-effector.

GeomCA: Geometric Evaluation of Data Representations

- Computer Science, ICML
- 2021

This work presents Geometric Component Analysis (GeomCA), an algorithm that evaluates representation spaces based on their geometric and topological properties and can be applied to representations of any dimension, independently of the model that generated them.

Reactive Motion Generation on Learned Riemannian Manifolds

- Computer Science, ArXiv
- 2022

It is argued that Riemannian manifolds may be learned via human demonstrations in which geodesics are natural motion skills, and a technique is proposed for facilitating on-the-fly end-effector/multiple-limb obstacle avoidance by reshaping the learned manifold using an obstacle-aware ambient metric.

Geometric instability of out of distribution data across autoencoder architecture

- Computer Science, ArXiv
- 2022

This paper considers an autoencoder as a function from input to reconstruction space, so that its Jacobian matrix can be examined at any input point, and studies the geometry at points far out of distribution.

A survey of algorithmic recourse: contrastive explanations and consequential recommendations

- Computer Science, ACM Computing Surveys
- 2022

This work focuses on algorithmic recourse, which is concerned with providing explanations and recommendations to individuals who are unfavorably treated by automated decision-making systems, and performs an extensive literature review.

A prior-based approximate latent Riemannian metric

- Computer Science, ArXiv
- 2021

This work proposes a surrogate conformal Riemannian metric in the latent space of a generative model that is simple, efficient and robust, and shows the applicability of the proposed methodology for data analysis in the life sciences.

## References

Showing 1–10 of 55 references

The Riemannian Geometry of Deep Generative Models

- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- 2018

The Riemannian geometry of these generated manifolds is investigated and it is shown how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point.

Latent Space Oddity: on the Curvature of Deep Generative Models

- Computer Science, ICLR
- 2018

This work shows that the nonlinearity of the generator implies that the latent space gives a distorted view of the input space, characterizes this distortion by a stochastic Riemannian metric, and demonstrates that distances and interpolants are significantly improved under this metric.

Metrics for Probabilistic Geometries

- Computer Science, Mathematics, UAI
- 2014

The geometrical structure of probabilistic generative dimensionality reduction models is investigated using the tools of Riemannian geometry, and it is shown that distances respecting the expected metric lead to more appropriate generation of new data.

Expected path length on random manifolds

- Computer Science, ArXiv
- 2019

This work endows the latent space of a large class of generative models with a random Riemannian metric, which provides them with elementary operators, and derives deterministic approximations and tight error bounds on expected distances.

Fast Approximate Geodesics for Deep Generative Models

- Computer Science, ICANN
- 2019

This work proposes finding shortest paths in a finite graph of samples from the aggregate approximate posterior, which can be solved exactly at greatly reduced runtime and without a notable loss in quality.

Geodesic Clustering in Deep Generative Models

- Computer Science, ArXiv
- 2018

It is demonstrated that taking the geometry of the generative model into account is sufficient to make simple clustering algorithms work well over latent representations, and an efficient algorithm is proposed for computing geodesics (shortest paths) and distances in the latent space while accounting for its distortion.

Only Bayes should learn a manifold

- Computer Science
- 2019

This document investigates learning of the differential geometric structure of a data manifold embedded in a high-dimensional Euclidean space, and partly extends the analysis to models based on neural networks, thereby highlighting geometric and probabilistic shortcomings of current deep generative models.

Metrics for Deep Generative Models

- Computer Science, AISTATS
- 2018

The method yields a principled distance measure, a tool for visual inspection of deep generative models, and an alternative to linear interpolation in latent space, and can be applied to robot movement generalization using previously learned skills.

Adversarial Feature Learning

- Computer Science, ICLR
- 2017

Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.

Fast and Robust Shortest Paths on Manifolds Learned from Data

- Computer Science, AISTATS
- 2019

A fast, simple and robust algorithm is presented for computing shortest paths and distances on Riemannian manifolds learned from data, which enhances the stability of the solver while reducing the computational cost.