Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers

@article{Guo2021ClippedHC,
  title={Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers},
  author={Yunhui Guo and Xudong Wang and Yubei Chen and Stella X. Yu},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={1-10}
}
Hyperbolic space can naturally embed hierarchies, unlike Euclidean space. Hyperbolic Neural Networks (HNNs) exploit such representational power by lifting Euclidean features into hyperbolic space for classification, outperforming Euclidean neural networks (ENNs) on datasets with known semantic hierarchies. However, HNNs underperform ENNs on standard benchmarks without clear hierarchies, greatly restricting HNNs' applicability in practice. Our key insight is that HNNs' poorer general… 
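
Concretely, the clipping announced in the title can be sketched as a cap on the Euclidean feature norm applied before the exponential map lifts the feature onto the Poincaré ball. A minimal numpy sketch; the clipping radius `r` and curvature `c` are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def exp_map_origin(x, c=1.0):
    """Lift a Euclidean vector x onto the Poincare ball of curvature -c
    via the exponential map at the origin."""
    norm = np.linalg.norm(x)
    if norm == 0:
        return x
    return np.tanh(np.sqrt(c) * norm) * x / (np.sqrt(c) * norm)

def clipped_lift(x, r=1.0, c=1.0):
    """Clip the Euclidean magnitude to at most r before lifting, keeping
    the point away from the ball's boundary, where the metric blows up."""
    norm = np.linalg.norm(x)
    if norm > r:
        x = r * x / norm
    return exp_map_origin(x, c)

x = np.random.randn(64)         # backbone feature
z = clipped_lift(x, r=1.0)      # hyperbolic embedding fed to the classifier
print(np.linalg.norm(z) < 1.0)  # stays strictly inside the unit ball
```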

Hyperbolic Deep Reinforcement Learning

This work designs a new general method that counteracts such optimization challenges and enables stable end-to-end learning with deep hyperbolic representations, and empirically validates this framework by applying it to popular on-policy and off-policy RL algorithms on the Procgen and Atari 100K benchmarks, attaining near-universal performance and generalization benefits.

Modeling Semantic Correlation and Hierarchy for Real-world Wildlife Recognition

This work establishes a simple and efficient baseline consisting of a debiasing loss function and a hyperbolic network architecture, and proposes leveraging semantic correlation to train the model more effectively by adding a co-occurrence layer during training.

The Numerical Stability of Hyperbolic Representation Learning

This work carefully analyzes the limitations of two popular models of hyperbolic space, namely the Poincaré ball and the Lorentz model, and identifies a Euclidean parametrization of hyperbolic space that can alleviate these limitations.

References


ImageNet: A large-scale hierarchical image database

A new database called "ImageNet" is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than existing image datasets.

Energy-based Out-of-distribution Detection

This work proposes a unified framework for OOD detection that uses an energy score, and shows that energy scores better distinguish in- and out-of-distribution samples than the traditional approach using the softmax scores.
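
The energy score in question is a temperature-scaled negative log-sum-exp of the logits; a minimal PyTorch sketch, with an illustrative temperature and threshold:

```python
import torch

def energy_score(logits, T=1.0):
    """E(x) = -T * logsumexp(f(x) / T); lower energy = more in-distribution."""
    return -T * torch.logsumexp(logits / T, dim=-1)

logits = torch.randn(8, 10)  # a batch of 8 samples, 10 classes
scores = energy_score(logits)
is_ood = scores > 0.0        # the threshold is dataset-dependent (illustrative)
```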

Hyperbolic Image Embeddings

It is demonstrated that in many practical scenarios, hyperbolic embeddings provide a better alternative to linear hyperplanes, Euclidean distances, or spherical geodesic distances.

Hyperbolic Neural Networks

This work combines the formalism of Möbius gyrovector spaces with the Riemannian geometry of the Poincaré model of hyperbolic space to derive hyperbolic versions of important deep learning tools: multinomial logistic regression, feed-forward networks, and recurrent networks such as gated recurrent units.
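
The basic gyrovector operation these layers build on is Möbius addition, the hyperbolic analogue of vector addition; a minimal numpy sketch on the Poincaré ball:

```python
import numpy as np

def mobius_add(u, v, c=1.0):
    """Mobius addition on the Poincare ball of curvature -c; closed on the ball."""
    uv, u2, v2 = np.dot(u, v), np.dot(u, u), np.dot(v, v)
    num = (1 + 2 * c * uv + c * v2) * u + (1 - c * u2) * v
    den = 1 + 2 * c * uv + c**2 * u2 * v2
    return num / den

u, v = np.array([0.2, 0.1]), np.array([-0.3, 0.4])
print(mobius_add(u, v))  # remains inside the unit ball
```

Hyperbolic feed-forward layers in that framework replace `Wx + b` with a Möbius matrix-vector product followed by Möbius addition of a bias.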

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
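
The robust-optimization view referenced here is the paper's saddle-point objective, training against the worst-case perturbation inside an ℓ∞ ball of radius ε:

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} L(\theta, x + \delta, y) \Big]
```

A first-order adversary is one that approximately solves the inner maximization using gradient information alone, e.g. projected gradient descent.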

Hyperbolic Graph Neural Networks

A novel GNN architecture for learning representations on Riemannian manifolds with differentiable exponential and logarithmic maps is proposed, and a scalable algorithm for modeling the structural properties of graphs is developed, comparing Euclidean and hyperbolic geometry.

Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry

It is shown that an embedding in hyperbolic space can reveal important aspects of a company's organizational structure as well as reveal historical relationships between language families.

Poincaré Embeddings for Learning Hierarchical Representations

This work introduces a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space, or more precisely into an n-dimensional Poincaré ball, and introduces an efficient algorithm to learn the embeddings based on Riemannian optimization.
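
The geodesic distance these embeddings are trained under has a simple closed form on the unit ball; a minimal numpy sketch:

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points strictly inside the unit Poincare ball."""
    diff2 = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u**2)) * (1 - np.sum(v**2))
    return np.arccosh(1 + 2 * diff2 / denom)

u, v = np.array([0.1, 0.0]), np.array([0.0, 0.1])
print(poincare_distance(u, v))
# distances grow without bound near the boundary, which is what lets
# tree-like hierarchies embed with low distortion
```

The Riemannian optimization amounts to rescaling the Euclidean gradient by ((1 - ||θ||²)² / 4) before each update and projecting the result back into the ball.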

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
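
The linearity argument directly yields the paper's fast gradient sign method: a single step of size ε in the sign of the input gradient. A minimal PyTorch sketch; the model, data, and ε = 8/255 are illustrative:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One ascent step on the loss: move each input coordinate by +/- eps
    in the direction of the sign of its gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()  # assumes inputs in [0, 1]

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```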

Learning Hyperbolic Representations for Unsupervised 3D Segmentation

This work proposes learning effective representations of 3D patches for unsupervised segmentation through a variational autoencoder (VAE) with a hyperbolic latent space and a proposed gyroplane convolutional layer, which better models the underlying hierarchical structure within a 3D image.
...