Corpus ID: 59842895

A Differentiable Gaussian-like Distribution on Hyperbolic Space for Gradient-Based Learning

@article{Nagano2019ADG,
  title={A Differentiable Gaussian-like Distribution on Hyperbolic Space for Gradient-Based Learning},
  author={Yoshihiro Nagano and Shoichiro Yamaguchi and Yasuhiro Fujita and Masanori Koyama},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.02992}
}
Hyperbolic space is a geometry that is known to be well-suited for representation learning of data with an underlying hierarchical structure. [...] Key Method: we can also sample from this hyperbolic probability distribution without resorting to auxiliary means like rejection sampling. As applications of our distribution, we develop a hyperbolic analog of the variational autoencoder and a method for probabilistic word embedding on hyperbolic space. We demonstrate the efficacy of our distribution on various datasets…
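The construction behind the highlighted method (the "pseudo-hyperbolic Gaussian", or wrapped normal, in the conference version below) samples a Euclidean Gaussian in the tangent space at the hyperboloid's origin, parallel-transports it to the target point, and pushes it onto the manifold with the exponential map; every step is a differentiable reparameterization, which is why no rejection sampling is needed. A minimal NumPy sketch under those assumptions; the function names are mine, not the authors' reference code:

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product <x, y>_L = -x0*y0 + x1*y1 + ... + xn*yn.
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(mu, u):
    # Exponential map at mu on the hyperboloid H^n:
    # exp_mu(u) = cosh(||u||_L) * mu + sinh(||u||_L) * u / ||u||_L.
    r = np.sqrt(max(lorentz_inner(u, u), 1e-15))
    return np.cosh(r) * mu + np.sinh(r) * u / r

def parallel_transport(v, src, dst):
    # Transport tangent vector v from T_src H^n to T_dst H^n along the geodesic.
    alpha = -lorentz_inner(src, dst)
    return v + lorentz_inner(dst, v) / (alpha + 1.0) * (src + dst)

def sample_wrapped_normal(mu, sigma, rng):
    # mu: point on H^n embedded in R^(n+1); sigma: (n,) standard deviations.
    n = mu.shape[0] - 1
    origin = np.zeros(n + 1)
    origin[0] = 1.0                      # hyperboloid origin mu_0 = (1, 0, ..., 0)
    eps = rng.normal(0.0, sigma)         # Gaussian sample in R^n
    v = np.concatenate(([0.0], eps))     # lift into the tangent space at mu_0
    u = parallel_transport(v, origin, mu)
    return exp_map(mu, u)

rng = np.random.default_rng(0)
mu = exp_map(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.5, -0.3]))
z = sample_wrapped_normal(mu, np.array([0.2, 0.2]), rng)
print(z, lorentz_inner(z, z))            # <z, z>_L should be close to -1
```

The analytic density the abstract advertises then follows by a change of variables: the Gaussian density is corrected by the log-determinant of these maps, which for the wrapped normal works out to (n − 1) · log(sinh r / r) with r = ‖u‖_L.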
Citations

Hierarchical Representations with Poincaré Variational Auto-Encoders
This work endows VAEs with a Poincaré ball model of hyperbolic geometry and derives the necessary methods to work with two main Gaussian generalisations on that space.
Mixed-curvature Variational Autoencoders
A Mixed-curvature Variational Autoencoder is developed: an efficient way to train a VAE whose latent space is a product of constant-curvature Riemannian manifolds, where the per-component curvature is fixed or learnable.
Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders
This work endows VAEs with a Poincaré ball model of hyperbolic geometry as a latent space and rigorously derives the necessary methods to work with two main Gaussian generalisations on that space.
Increasing Expressivity of a Hyperspherical VAE
This work proposes to extend the usability of hyperspherical parameterizations to higher dimensions using a product space instead, showing improved results on a selection of image datasets.
Poincaré Wasserstein Autoencoder
This work presents a reformulation of the recently proposed Wasserstein autoencoder framework on a non-Euclidean manifold, the Poincaré ball model of hyperbolic space, and uses the space's intrinsic hierarchy to impose structure on the learned latent representations.
Understanding in Artificial Intelligence
This work shows how progress has been made in benchmark development to measure the understanding capabilities of AI methods and reviews how current methods develop understanding capabilities.
MOOC-Based Mixed Teaching Research on Microcomputer Principle Courses in Colleges and Universities
This paper systematically compares the advantages and disadvantages of the MOOC teaching mode with the traditional teaching mode and constructs a MOOC-based teaching platform that addresses the shortcomings of traditional teaching.
Riemannian Continuous Normalizing Flows
Riemannian continuous normalizing flows are introduced: a model which admits the parametrization of flexible probability measures on smooth manifolds by defining flows as solutions to ordinary differential equations.
Variational Autoencoders with Riemannian Brownian Motion Priors
This work assumes a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replaces the standard Gaussian prior with a Riemannian Brownian motion prior, demonstrating that this prior significantly increases model capacity using only one additional scalar parameter.

References

Showing 1–10 of 25 references
A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning
A novel hyperbolic distribution called the pseudo-hyperbolic Gaussian, a Gaussian-like distribution on hyperbolic space whose density can be evaluated analytically and differentiated with respect to the parameters, enables gradient-based learning of probabilistic models on hyperbolic space that could never have been considered before.
Poincaré Embeddings for Learning Hierarchical Representations
This work introduces a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space, or more precisely into an n-dimensional Poincaré ball, and introduces an efficient algorithm to learn the embeddings based on Riemannian optimization, sketched below.
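For concreteness, the Riemannian optimization in question amounts to rescaling the Euclidean gradient by the inverse Poincaré-ball metric, (1 − ‖θ‖²)²/4, and projecting each update back inside the unit ball. A rough NumPy sketch under those assumptions (names and the projection epsilon are illustrative, not the paper's code):

```python
import numpy as np

def poincare_dist(u, v):
    # Geodesic distance in the Poincare ball:
    # d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2))).
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def rsgd_step(theta, euclid_grad, lr=0.1, eps=1e-5):
    # Riemannian SGD step: rescale the Euclidean gradient by the inverse
    # metric of the Poincare ball, then clip back inside the open unit ball.
    scale = ((1.0 - np.sum(theta ** 2)) ** 2) / 4.0
    theta = theta - lr * scale * euclid_grad
    norm = np.linalg.norm(theta)
    if norm >= 1.0:
        theta = theta / norm * (1.0 - eps)
    return theta
```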
Representation Tradeoffs for Hyperbolic Embeddings
A hyperbolic generalization of multidimensional scaling (h-MDS) is proposed, which offers consistently low distortion even with few dimensions across several datasets, together with a scalable PyTorch-based implementation that can handle incomplete information.
Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry
It is shown that an embedding in hyperbolic space can reveal important aspects of a company's organizational structure as well as historical relationships between language families.
Hyperbolic Attention Networks
This work introduces hyperbolic attention networks to endow neural networks with enough capacity to match the complexity of data with hierarchical and power-law structure, re-expressing the ubiquitous mechanism of soft attention in terms of operations defined for the hyperboloid and Klein models.
Hyperbolic Entailment Cones for Learning Hierarchical Embeddings
This work presents a novel method to embed directed acyclic graphs through hierarchical relations as partial orders defined using a family of nested geodesically convex cones, and proves that these entailment cones admit an optimal shape with a closed-form expression in both Euclidean and hyperbolic spaces.
Adversarial Autoencoders with Constant-Curvature Latent Manifolds
This work introduces the constant-curvature manifold adversarial autoencoder (CCM-AAE), a probabilistic generative model trained to represent a data distribution on a CCM, and the first unified framework to seamlessly deal with CCMs of different curvatures.
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
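The algorithm's key device is the reparameterization trick: a sample z ~ N(μ, σ²) is rewritten as a deterministic function of the parameters plus exogenous noise, so gradients flow through the sampling step. A one-function sketch (illustrative, not the paper's code):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); the randomness lives in eps,
    # so z is differentiable with respect to mu and log_var.
    eps = rng.normal(size=np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps
```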
Discrete Variational Autoencoders
J. Rolfe. ICLR, 2017.
A novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete hidden variables; it outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
Categorical Reparameterization with Gumbel-Softmax
It is shown that the Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
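The estimator itself draws a relaxed one-hot sample by perturbing class logits with Gumbel(0, 1) noise and applying a temperature-controlled softmax; as the temperature tau -> 0, samples approach exact one-hot vectors. A minimal sketch (the function name is mine):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau, rng):
    # g_i ~ Gumbel(0, 1) via inverse transform of uniform noise.
    u = rng.uniform(low=1e-12, high=1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))
    # Temperature-controlled softmax over the perturbed logits.
    y = (np.asarray(logits) + g) / tau
    y = y - y.max()                      # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()
```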