Corpus ID: 235485195

Riemannian Convex Potential Maps

Samuel Cohen, Brandon Amos, Yaron Lipman
Modeling distributions on Riemannian manifolds is a crucial component in understanding non-Euclidean data that arises, e.g., in physics and geology. The budding approaches in this space are limited by representational and computational tradeoffs. We propose and study a class of flows that uses convex potentials from Riemannian optimal transport. These are universal and can model distributions on any compact Riemannian manifold without requiring domain knowledge of the manifold to be integrated…
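The core construction, transporting a point by the exponential map of the Riemannian gradient of a convex potential, can be sketched on the unit sphere. The linear toy potential below is an illustrative assumption, not the paper's learned discrete c-concave potentials:

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move from x along tangent vector v."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def potential_map_step(x, a):
    """Transport x on the sphere using the toy potential phi(x) = <a, x>.

    The Euclidean gradient of phi is the constant vector a; projecting it
    onto the tangent space at x gives the Riemannian gradient, and the
    point is moved by the exponential map."""
    g = a - np.dot(a, x) * x  # tangent-space projection of the gradient
    return sphere_exp(x, g)

x = np.array([0.0, 0.0, 1.0])   # a point on the sphere
a = np.array([0.3, -0.2, 0.1])  # parameters of the toy potential
y = potential_map_step(x, a)
print(np.linalg.norm(y))        # the image stays on the sphere (norm ≈ 1)
```

Since cos² + sin² = 1, the exponential map keeps every transported point on the sphere, which is what lets such maps push one spherical density onto another.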


Implicit Riemannian Concave Potential Maps

This work combines ideas from implicit neural layers and optimal transport theory to propose Implicit Riemannian Concave Potential Maps (IRCPMs), a generalisation of existing work on exponential-map flows; IRCPMs make it simple to incorporate symmetries and are less expensive than ODE flows.

Riemannian Score-Based Generative Modeling

Riemannian score-based generative models (RSGMs) are introduced, a class of generative models extending SGMs to compact Riemannian manifolds; the approach is demonstrated on a variety of manifolds, in particular on spherical data from earth and climate science.

Spherical Sliced-Wasserstein

The construction is notably based on closed-form solutions of the Wasserstein distance on the circle, together with a new spherical Radon transform, and is illustrated in several machine learning use cases where spherical representations of data arise: density estimation on the sphere, variational inference, and hyperspherical auto-encoders.
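For empirical measures on the real line with equally many samples, the Wasserstein distance has the well-known closed form of matching sorted samples; the sketch below shows this 1D case only (on the circle, the closed form additionally requires optimizing over a rotation, which is omitted here):

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two empirical measures on the line
    with equally many samples: sort both and match order statistics."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.normal(loc=1.0, size=500)
# For these Gaussians the population W2 equals the mean shift, 1.
print(wasserstein_1d(x, y))
```

Sliced variants average such 1D distances over many projection directions, which is what makes them cheap relative to solving the full transport problem.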

Conformal Mirror Descent with Logarithmic Divergences

This work introduces a generalization of continuous-time mirror descent that is a time change of a corresponding Hessian gradient flow, and proves convergence results in continuous time.


In recent years, neural stochastic differential equations (NSDEs) have gained attention for modeling stochastic representations, and NSDEs have achieved great success in various types of applications.

Transport away your problems: Calibrating stochastic simulations with optimal transport

  • Chris Pollard, P. Windischhofer
  • Computer Science
    Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
  • 2021

Meta Optimal Transport

The use of amortized optimization to predict optimal transport maps from the input measures is studied to improve the computational time of standard OT solvers by multiple orders of magnitude in discrete and continuous transport settings between images, spherical data, and color palettes.
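For context, a standard (non-amortized) discrete solver that such amortization aims to accelerate is the Sinkhorn iteration for entropic optimal transport; a minimal sketch, not the paper's amortized predictor:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropic OT between discrete measures a, b with cost matrix C.

    Alternately rescales the Gibbs kernel K = exp(-C/eps) so that the
    resulting plan's marginals match a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # approximate transport plan

n = 4
a = np.full(n, 1.0 / n)                 # uniform source measure
b = np.full(n, 1.0 / n)                 # uniform target measure
pts = np.arange(n, dtype=float)
C = (pts[:, None] - pts[None, :]) ** 2  # squared-distance cost
P = sinkhorn(a, b, C)
print(P.sum())                          # a valid coupling has total mass 1
```

Each iteration costs a pair of matrix-vector products; amortization replaces many such iterations with a learned prediction of the solution (or a warm start for it).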

Symmetry-Based Representations for Artificial and Biological General Intelligence

It is argued that symmetry transformations are a fundamental principle that can guide the search for what makes a good representation, and may be an important general framework that determines the structure of the universe, constrains the nature of natural tasks and consequently shapes both biological and artificial intelligence.

Tutorial on amortized optimization for learning to optimize over continuous domains

This tutorial discusses the key design choices behind amortized optimization, roughly categorizing models into fully-amortized and semi-amortized approaches, and learning methods into regression-based and objective-based approaches.



Riemannian Continuous Normalizing Flows

Riemannian continuous normalizing flows are introduced, a model which admits the parametrization of flexible probability measures on smooth manifolds by defining flows as solutions to ordinary differential equations.

Continuity of optimal transport maps and convexity of injectivity domains on small deformations of 𝕊2

Given a compact Riemannian manifold, we study the regularity of the optimal transport map between two probability measures with cost given by the squared Riemannian distance. Our strategy is to…

A Jacobian Inequality for Gradient Maps on the Sphere and Its Application to Directional Statistics

In the field of optimal transport theory, an optimal map is known to be a gradient map of a potential function satisfying cost-convexity. In this article, the Jacobian determinant of a gradient map…

Towards the smoothness of optimal maps on Riemannian submersions and Riemannian products (of round spheres in particular)

The variant A3w of Ma, Trudinger and Wang's condition for regularity of optimal transportation maps is implied by the non-negativity of a pseudo-Riemannian curvature, which we call…

Regularity of optimal transport maps on multiple products of spheres

This article addresses the regularity of optimal transport maps for the squared-distance cost on Riemannian manifolds that are products of arbitrarily many round spheres with arbitrary sizes and…

Neural Manifold Ordinary Differential Equations

This paper introduces Neural Manifolds Ordinary Differential Equations, a manifold generalization of Neural ODEs, which enables the construction of Manifold Continuous Normalizing Flows (MCNFs), and finds that leveraging continuous manifold dynamics produces a marked improvement for both density estimation and downstream tasks.

Normalizing Flows on Tori and Spheres

This paper proposes and compares expressive and numerically stable flows on spaces with more complex geometries, such as tori or spheres, and builds recursively on the dimension of the space, starting from flows on circles, closed intervals or spheres.

Equivariant Hamiltonian Flows

This paper introduces equivariant Hamiltonian flows, a method for learning expressive densities that are invariant with respect to a known Lie algebra of local symmetry transformations while…

On the regularity of solutions of optimal transportation problems

We give a necessary and sufficient condition on the cost function so that the map solution of Monge's optimal transportation problem is continuous for arbitrary smooth positive data. This condition…

Optimal transport mapping via input convex neural networks

This approach ensures that the transport map the authors find is optimal regardless of how the neural networks are initialized, as the gradient of a convex function naturally models a discontinuous transport mapping.
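An input convex neural network keeps its scalar output convex in the input by restricting hidden-to-hidden weights to be nonnegative and using convex, nondecreasing activations; a minimal numpy sketch (layer sizes and initialization are illustrative, not the paper's architecture):

```python
import numpy as np

def softplus(t):
    """Convex, nondecreasing activation (numerically stable log(1 + e^t))."""
    return np.logaddexp(0.0, t)

class ICNN:
    """Minimal input-convex network: z_{k+1} = softplus(Wz_k z_k + Wx_k x + b_k)
    with Wz_k >= 0 elementwise, so the scalar output is convex in x."""
    def __init__(self, dim, hidden, depth, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = [rng.normal(scale=0.5, size=(hidden, dim)) for _ in range(depth)]
        self.Wz = [np.abs(rng.normal(scale=0.5, size=(hidden, hidden)))
                   for _ in range(depth - 1)]  # nonnegative: preserves convexity
        self.b = [rng.normal(scale=0.1, size=hidden) for _ in range(depth)]
        self.w_out = np.abs(rng.normal(scale=0.5, size=hidden))

    def __call__(self, x):
        z = softplus(self.Wx[0] @ x + self.b[0])
        for Wz, Wx, b in zip(self.Wz, self.Wx[1:], self.b[1:]):
            z = softplus(Wz @ z + Wx @ x + b)
        return float(self.w_out @ z)

f = ICNN(dim=2, hidden=8, depth=3)
x0, x1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
mid = 0.5 * (x0 + x1)
# Convexity check: the value at the midpoint lies below the chord.
print(f(mid) <= 0.5 * (f(x0) + f(x1)))
```

Because the network is convex in its input by construction, its gradient map is a valid Brenier-style transport map candidate, which is the property the cited approach exploits.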