Corpus ID: 235351958

Equivariant Manifold Flows

@article{Katsman2021EquivariantMF,
  title={Equivariant Manifold Flows},
  author={Isay Katsman and Aaron Lou and D. Lim and Qingxuan Jiang and Ser-Nam Lim and Christopher De Sa},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.08596}
}
Tractably modelling distributions over manifolds has long been an important goal in the natural sciences. Recent work has focused on developing general machine learning models to learn such distributions. However, for many applications these distributions must respect manifold symmetries, a trait which most previous models disregard. In this paper, we lay the theoretical foundations for learning symmetry-invariant distributions on arbitrary manifolds via equivariant manifold flows. We…
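A quick way to see the mechanism behind such flows: for a group acting by isometries, a vector field that is equivariant under the group generates a flow that maps group-invariant densities to group-invariant densities. The sketch below is not the construction from this paper, only a generic group-averaging illustration for a finite group acting on the plane; the cyclic rotation group and the toy field base_field are assumptions made for the example.

    import numpy as np

    def rotation(theta):
        # 2x2 rotation matrix; the cyclic group C_4 = {rotation(j * pi / 2)} acts on R^2.
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    def symmetrize(field, group):
        # Group-average a vector field: F(x) = (1/|G|) * sum_g g^{-1} f(g x)
        # satisfies F(h x) = h F(x) for every h in G, i.e. F is G-equivariant.
        def equivariant_field(x):
            return sum(np.linalg.inv(g) @ field(g @ x) for g in group) / len(group)
        return equivariant_field

    base_field = lambda x: np.array([x[0] ** 2, x[1]])    # toy stand-in for a network
    group = [rotation(j * np.pi / 2) for j in range(4)]   # C_4 acting on the plane
    f_eq = symmetrize(base_field, group)

    x, h = np.array([0.3, -0.7]), rotation(np.pi / 2)
    print(np.allclose(f_eq(h @ x), h @ f_eq(x)))          # equivariance check: True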
2 Citations

Implicit Riemannian Concave Potential Maps
This work combines ideas from implicit neural layers and optimal transport theory to propose Implicit Riemannian Concave Potential Maps (IRCPMs), a generalisation of existing work on exponential map flows that makes it simple to incorporate symmetries and is less expensive than ODE-flows.
Equivariant Discrete Normalizing Flows
At its core, generative modeling seeks to uncover the underlying factors that give rise to observed data that can often be modelled as the natural symmetries that manifest themselves through…

References

Showing 1-10 of 32 references
Neural Ordinary Differential Equations on Manifolds
It is shown how vector fields provide a general framework for parameterizing a flexible class of invertible mappings on these spaces, and it is illustrated how gradient-based learning can be performed.
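A rough picture of how a vector field yields an invertible map: integrate the field forward in time to get the flow, and integrate the time-reversed field to get its inverse. The Euler-discretized sketch below is only illustrative (a manifold version would additionally project or retract each step back onto the manifold); the toy field f is an assumption made for the example.

    import numpy as np

    def flow(f, x0, t1=1.0, n_steps=1000):
        # Explicit Euler integration of dx/dt = f(x, t) from t = 0 to t = t1.
        x, dt = np.array(x0, dtype=float), t1 / n_steps
        for i in range(n_steps):
            x = x + dt * f(x, i * dt)
        return x

    def inverse_flow(f, x1, t1=1.0, n_steps=1000):
        # Integrating the negated, time-reversed field (approximately) undoes the flow.
        return flow(lambda x, s: -f(x, t1 - s), x1, t1, n_steps)

    f = lambda x, t: np.array([-x[1], x[0]])   # toy field: rotation about the origin
    z = np.array([1.0, 0.0])
    x = flow(f, z)
    print(np.allclose(inverse_flow(f, x), z, atol=1e-2))   # approximate invertibility: True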
Neural Manifold Ordinary Differential Equations
This paper introduces Neural Manifold Ordinary Differential Equations, a manifold generalization of Neural ODEs, which enables the construction of Manifold Continuous Normalizing Flows (MCNFs), and finds that leveraging continuous manifold dynamics produces a marked improvement for both density estimation and downstream tasks.
On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups
It is proved that (given some natural constraints) convolutional structure is not just a sufficient, but also a necessary, condition for equivariance to the action of a compact group.
A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning
A novel hyperbolic distribution called the pseudo-hyperbolic Gaussian, a Gaussian-like distribution on hyperbolic space whose density can be evaluated analytically and differentiated with respect to the parameters, enables gradient-based learning of probabilistic models on hyperbolic space that could never have been considered before.
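A sketch of the sampling scheme such a wrapped construction rests on, assuming the Lorentz (hyperboloid) model of hyperbolic space: draw a Euclidean Gaussian in the tangent space at the origin, parallel-transport it to the mean, and push it through the exponential map. The dimension and parameter values below are only for illustration.

    import numpy as np

    def lorentz_inner(x, y):
        # Minkowski inner product <x, y>_L = -x0*y0 + x1*y1 + ... + xn*yn.
        return -x[0] * y[0] + np.dot(x[1:], y[1:])

    def sample_wrapped_normal(mu, sigma, rng=np.random.default_rng()):
        n = mu.shape[0] - 1
        origin = np.zeros(n + 1); origin[0] = 1.0                    # hyperboloid origin
        v = np.concatenate([[0.0], rng.normal(0.0, sigma, size=n)])  # tangent at origin
        # Parallel transport of v from the origin to mu.
        alpha = -lorentz_inner(origin, mu)
        u = v + lorentz_inner(mu, v) / (alpha + 1.0) * (origin + mu)
        # Exponential map at mu.
        norm_u = np.sqrt(max(lorentz_inner(u, u), 1e-12))
        return np.cosh(norm_u) * mu + np.sinh(norm_u) * u / norm_u

    mu = np.array([np.cosh(0.5), np.sinh(0.5), 0.0])   # a point on the hyperboloid H^2
    z = sample_wrapped_normal(mu, sigma=0.3)
    print(lorentz_inner(z, z))                         # stays on the manifold: approx -1

Both maps have tractable Jacobians, which is what makes the resulting density analytic and convenient for gradient-based learning.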
Scalable Reversible Generative Models with Free-form Continuous Dynamics
A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires…
How to generate random matrices from the classical compact groups
We discuss how to generate random unitary matrices from the classical compact groups U(N), O(N) and USp(N) with probability distributions given by the respective invariant measures. The algorithm is…
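The standard recipe is short enough to sketch: QR-factorize a matrix of i.i.d. complex Gaussians and fix the phase ambiguity of the factorization so that Q is Haar-distributed on U(N). A minimal NumPy version (the O(N) case is analogous, with real Gaussians and signs instead of phases):

    import numpy as np

    def haar_unitary(n, rng=np.random.default_rng()):
        # Complex Ginibre matrix: i.i.d. standard complex Gaussian entries.
        z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        # Rescale columns by the phases of diag(R) to remove the QR phase ambiguity.
        d = np.diagonal(r)
        return q * (d / np.abs(d))

    u = haar_unitary(4)
    print(np.allclose(u.conj().T @ u, np.eye(4)))   # unitarity check: True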
Introduction to Smooth Manifolds
Preface. 1. Smooth Manifolds. 2. Smooth Maps. 3. Tangent Vectors. 4. Submersions, Immersions, and Embeddings. 5. Submanifolds. 6. Sard's Theorem. 7. Lie Groups. 8. Vector Fields. 9. Integral Curves…
Group Equivariant Convolutional Networks
This work introduces Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries and achieves state-of-the-art results on CIFAR10 and rotated MNIST.
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Deep Sets
The main theorem characterizes the permutation-invariant objective functions and provides a family of functions to which any permutation-invariant objective function must belong; this enables the design of a deep network architecture that can operate on sets and can be deployed in a variety of scenarios, including both unsupervised and supervised learning tasks.