Corpus ID: 61153622

Gauge Equivariant Convolutional Networks and the Icosahedral CNN

@inproceedings{Cohen2019GaugeEC,
  title={Gauge Equivariant Convolutional Networks and the Icosahedral CNN},
  author={Taco Cohen and Maurice Weiler and Berkay Kicanaoglu and Max Welling},
  booktitle={ICML},
  year={2019}
}
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only…
Gauge Equivariant Spherical CNNs
Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images. Over the last few years, a number of…
CNNs on surfaces using rotation-equivariant features
TLDR
A network architecture for surfaces that consists of vector-valued, rotation-equivariant features that makes it possible to locally align features, which were computed in arbitrary coordinate systems, when aggregating features in a convolution layer.
Generalization capabilities of translationally equivariant neural networks
TLDR
This work focuses on complex scalar field theory on a two-dimensional lattice and investigates the benefits of using group equivariant convolutional neural network architectures based on the translation group, demonstrating that on most of these tasks the best equivariant architectures perform and generalize significantly better than their non-equivariant counterparts.
General E(2)-Equivariant Steerable CNNs
TLDR
The theory of Steerable CNNs yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces, and it is shown that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations.
Scale-Equivariant Steerable Networks
TLDR
This work pays attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera, and introduces the general theory for building scale-equivariant convolutional networks with steerable filters.
Learning Equivariant Representations
TLDR
This thesis proposes equivariant models for different transformations defined by groups of symmetries, and extends equivariance to other kinds of transformations, such as rotation and scaling.
Geometric Deep Learning and Equivariant Neural Networks
TLDR
The mathematical foundations of geometric deep learning are surveyed, focusing on group equivariant and gauge equivariant neural networks and the use of Fourier analysis involving Wigner matrices, spherical harmonics and Clebsch–Gordan coefficients for G = SO(3), illustrating the power of representation theory for deep learning.
Enabling equivariance for arbitrary Lie groups
TLDR
This work introduces a rigorous mathematical framework to permit invariance to any Lie group of warps, exclusively using convolutions (over Lie groups), without the need for capsules, and enables the implementation of group convolutions over any finite-dimensional Lie group.
Frame Averaging for Invariant and Equivariant Network Design
Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network…
Theoretical Aspects of Group Equivariant Neural Networks
TLDR
This work begins with an exposition of group representation theory and the machinery necessary to define and evaluate integrals and convolutions on groups, and shows applications to recent SO(3) and SE(3) equivariant networks, namely the Spherical CNNs, Clebsch-Gordan Networks, and 3D Steerable CNNs.

References

SHOWING 1-10 OF 63 REFERENCES
Learning Steerable Filters for Rotation Equivariant CNNs
TLDR
Steerable Filter CNNs (SFCNNs) are developed which achieve joint equivariance under translations and rotations by design and generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters.
Spherical CNNs
TLDR
A definition for the spherical cross-correlation is proposed that is both expressive and rotation-equivariant and satisfies a generalized Fourier theorem, which allows it to be computed efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm.
DeepSphere: Efficient spherical Convolutional Neural Network with HEALPix sampling for cosmological applications
TLDR
This work presents a spherical CNN for the analysis of full and partial HEALPix maps; DeepSphere is shown to perform at least as well as the baselines, and learned filters can be visualized to introspect the neural network.
Rotation Equivariant Vector Field Networks
TLDR
Rotation Equivariant Vector Field Networks (RotEqNet), a convolutional neural network architecture encoding rotation equivariance, invariance and covariance, is proposed, along with a modified convolution operator on vector field representations that yields deep architectures.
Learning shape correspondence with anisotropic convolutional neural networks
TLDR
An intrinsic convolutional neural network architecture based on anisotropic diffusion kernels, termed Anisotropic Convolutional Neural Network (ACNN), is introduced and used to effectively learn intrinsic dense correspondences between deformable shapes in very challenging settings.
Harmonic Networks: Deep Translation and Rotation Equivariance
TLDR
H-Nets are presented, a CNN exhibiting equivariance to patch-wise translation and 360° rotation, and it is demonstrated that their layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization.
Multi-directional geodesic neural networks via equivariant convolution
TLDR
This work defines directional convolution in the continuous setting, proves its key properties and shows how it can be implemented in practice for shapes represented as triangle meshes, where it shows a significant improvement over several baselines.
3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data
TLDR
The experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.
Group Equivariant Convolutional Networks
TLDR
Group equivariant Convolutional Neural Networks (G-CNNs) are proposed as a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries, achieving state-of-the-art results on CIFAR10 and rotated MNIST.
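The core idea behind G-CNNs can be illustrated with a minimal NumPy sketch (an illustrative toy reimplementation, not the authors' code): a lifting convolution for the four-fold rotation group C4 correlates the input with all rotated copies of a single filter, so rotating the input permutes the set of responses instead of producing new, unseen ones.

```python
import numpy as np

def c4_lifting_conv(x, w):
    """Lifting correlation for the rotation group C4: correlate a square
    image with all four 90-degree rotations of one square filter,
    producing one response map per group element (shape: 4 x H' x H')."""
    kh, kw = w.shape
    # squares are required so that 90-degree rotations map patches to patches
    assert x.shape[0] == x.shape[1] and kh == kw
    H = x.shape[0] - kh + 1
    out = np.zeros((4, H, H))
    for k in range(4):
        wk = np.rot90(w, k)  # rotating the filter, not the input
        for i in range(H):
            for j in range(H):
                out[k, i, j] = np.sum(x[i:i+kh, j:j+kw] * wk)
    return out

# Rotating the input rotates the response maps and cyclically permutes the
# group channels, so statistics pooled over group and space are invariant:
x = np.random.default_rng(0).normal(size=(6, 6))
w = np.random.default_rng(1).normal(size=(3, 3))
a = c4_lifting_conv(x, w)
b = c4_lifting_conv(np.rot90(x), w)
print(np.isclose(a.max(), b.max()))  # True
```

This weight-sharing over transformed filter copies, rather than learning each orientation separately, is the source of the sample-complexity reduction the summary above refers to.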
Intertwiners between Induced Representations (with Applications to the Theory of Equivariant Neural Networks)
TLDR
G-CNNs are established as a universal class of equivariant network architectures on homogeneous spaces like Euclidean space or the sphere if and only if the input and output feature spaces transform according to an induced representation.