Corpus ID: 202889315

B-Spline CNNs on Lie Groups

@article{Bekkers2020BSplineCO,
  title={B-Spline CNNs on Lie Groups},
  author={E. Bekkers},
  journal={ArXiv},
  year={2020},
  volume={abs/1909.12057}
}
  • E. Bekkers
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups. Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations, in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory. Currently, however, the practical…
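
To make the lifting idea concrete, the sketch below implements a lifting group convolution for the discrete four-fold rotation group p4 in PyTorch: the planar input is correlated with rotated copies of the filters, producing a feature map with an extra rotation axis. This is a minimal illustration of the general G-CNN mechanism described in the abstract, not the B-spline kernel parametrization over Lie groups proposed in the paper; the function name lift_p4 and the tensor shapes are illustrative choices.

# Minimal sketch (PyTorch) of a lifting group convolution for the discrete
# four-fold rotation group p4. Not the paper's B-spline parametrization.
import torch
import torch.nn.functional as F

def lift_p4(x, weight):
    # x:      (batch, in_channels, H, W) planar feature map
    # weight: (out_channels, in_channels, k, k) trainable filters
    responses = []
    for r in range(4):
        # Act on the kernel with a rotation by r * 90 degrees.
        w_r = torch.rot90(weight, r, dims=(2, 3))
        responses.append(F.conv2d(x, w_r))
    # New "group" axis: rotating the input by 90 degrees rotates the output
    # planes and cyclically shifts this axis (the equivariance property).
    return torch.stack(responses, dim=2)  # (batch, out_channels, 4, H', W')

x = torch.randn(1, 3, 28, 28)
w = torch.randn(8, 3, 5, 5)
y = lift_p4(x, w)  # shape: (1, 8, 4, 24, 24)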

Citations

Spin-Weighted Spherical CNNs
TLDR
A new type of spherical CNN is presented that allows anisotropic filters in an efficient way without ever leaving the spherical domain; the key idea is to consider spin-weighted spherical functions, which were introduced in physics in the study of gravitational waves.
Learning Equivariant Representations
TLDR
This thesis proposes equivariant models for different transformations defined by groups of symmetries, and extends equivariance to other kinds of transformations, such as rotation and scaling.
A Dynamic Group Equivariant Convolutional Networks for Medical Image Analysis
  • Y. Li, Guitao Cao, Wenming Cao
  • Computer Science
  • 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
  • 2020
TLDR
This paper proposes a generalization of dynamic convolution, named dynamic group equivariant convolution, which strengthens relationships and increases model capability by aggregating multiple group convolutional kernels via attention, and demonstrates substantial improvements in breast tumor classification over a recent baseline architecture.
Local Rotation Invariance in 3D CNNs
TLDR
Several methods to obtain LRI CNNs with directional sensitivity are proposed and compared; they demonstrate the importance of LRI image analysis, drastically reduce the number of trainable parameters, and outperform standard 3D CNNs trained with rotational data augmentation.
Roto-Translation Equivariant Convolutional Networks: Application to Histopathology Image Analysis
TLDR
A framework is proposed for encoding the geometric structure of the special Euclidean motion group SE(2) in convolutional networks, yielding translation and rotation equivariance via the introduction of SE(2)-group convolution layers.
Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey
TLDR
This survey gives a concise overview of different approaches to incorporating geometric prior knowledge into DNNs and connects those methods to the field of 3D object detection for autonomous driving, where promising results are expected from applying them.
3D Solid Spherical Bispectrum CNNs for Biomedical Texture Analysis
TLDR
The results indicate that the bispectrum CNN allows for a significantly better characterization of 3D textures than both the spectral and the standard CNN, and can learn efficiently with fewer training examples and trainable parameters than a standard convolutional layer.
Group Equivariant Conditional Neural Processes
TLDR
A decomposition theorem for permutation-invariant and group-equivariant maps is given, which leads to the construction of EquivCNPs with an infinite-dimensional latent space to handle group symmetries; EquivCNPs with translation equivariance are shown to achieve performance comparable to conventional CNPs on a 1D regression task.
A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups
TLDR
This work provides a completely general algorithm for solving for the equivariant layers of matrix groups and constructs multilayer perceptrons equivariant to multiple groups that have never been tackled before, including the Rubik's cube group.
A Principal Component Analysis Approach for Embedding Local Symmetries into Deep Learning Algorithms
TLDR
This paper introduces a convenient methodology for embedding local Lie group symmetries into deep learning algorithms by performing a principal component analysis on the corresponding tangent covariance matrix.

References

SHOWING 1-10 OF 65 REFERENCES
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels
TLDR
This work presents Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregularly structured and geometric input, e.g., graphs or meshes, that generalizes the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights.
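
As a rough illustration of the continuous B-spline kernel idea summarized above (which the B-Spline CNN paper lifts from the plane to Lie groups), the sketch below evaluates a kernel at arbitrary relative coordinates as a weighted sum of shifted cardinal B-spline basis functions. The quadratic spline degree, the separable product basis, and all names are assumptions made for illustration, not the exact SplineCNN implementation.

# Hedged NumPy sketch: a continuous filter k(x) = sum_i w_i * B((x - c_i) / s),
# with B a cardinal quadratic B-spline and w_i the trainable weights.
import numpy as np

def bspline2(x):
    # Cardinal quadratic B-spline, supported on [-1.5, 1.5].
    x = np.abs(x)
    return np.where(x < 0.5, 0.75 - x**2,
           np.where(x < 1.5, 0.5 * (1.5 - x)**2, 0.0))

def spline_kernel(coords, centers, weights, scale=1.0):
    # coords:  (N, d) query offsets (e.g., relative positions of graph neighbors)
    # centers: (M, d) fixed control-point locations of the basis functions
    # weights: (M,)   trainable coefficients
    diff = (coords[:, None, :] - centers[None, :, :]) / scale
    basis = np.prod(bspline2(diff), axis=-1)  # separable d-dimensional basis, (N, M)
    return basis @ weights                    # kernel values at the N offsets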
Deformable Convolutional Networks
TLDR
This work introduces two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling, based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision.
Gauge Equivariant Convolutional Networks and the Icosahedral CNN
TLDR
Gauge-equivariant convolution is implemented using a single conv2d call, making it a highly scalable and practical alternative to spherical CNNs, and substantial improvements over previous methods are demonstrated on the tasks of segmenting omnidirectional images and global climate patterns.
Deep Symmetry Networks
TLDR
Deep symmetry networks (symnets) are introduced: a generalization of convnets that forms feature maps over arbitrary symmetry groups and uses kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension.
Group Equivariant Convolutional Networks
TLDR
Group equivariant Convolutional Neural Networks (G-CNNs) are introduced: a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries, achieving state-of-the-art results on CIFAR10 and rotated MNIST.
Spherical CNNs
TLDR
A definition for the spherical cross-correlation is proposed that is both expressive and rotation-equivariant and satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm.
Learning Steerable Filters for Rotation Equivariant CNNs
TLDR
Steerable Filter CNNs (SFCNNs) are developed which achieve joint equivariance under translations and rotations by design and generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters.
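
A toy NumPy sketch of the "linear combination of a system of atomic filters" idea mentioned in this summary: a small fixed bank of Gaussian-derivative filters is combined with learned coefficients to form the kernel that a convolution layer would use. The Gaussian-derivative basis and the names are assumptions for illustration, not the SFCNN construction or its initialization scheme.

# Illustrative only: a learned filter as a weighted sum of fixed atomic filters.
import numpy as np

def gaussian_derivative_basis(size=7, sigma=1.5):
    # Atomic filters: a Gaussian and its first derivatives in x and y.
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    G = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    G /= G.sum()
    return np.stack([G, (-X / sigma**2) * G, (-Y / sigma**2) * G])  # (3, size, size)

basis = gaussian_derivative_basis()
coeffs = np.random.randn(basis.shape[0])    # trainable coefficients in a real network
filt = np.tensordot(coeffs, basis, axes=1)  # (size, size) kernel for a conv layer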
Learning SO(3) Equivariant Representations with Spherical CNNs
TLDR
It is shown that networks with much lower capacity and without requiring data augmentation can exhibit performance comparable to the state of the art in standard 3D shape retrieval and classification benchmarks.
3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data
TLDR
The experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.
3D object classification and retrieval with Spherical CNNs
TLDR
A model is presented that aims to be efficient in both the number of learnable parameters and the input size, using group convolution equivariance and spherical convolution properties to build a network that learns feature maps equivariant to SO(3) actions by design.