Corpus ID: 222125298

Group Equivariant Stand-Alone Self-Attention For Vision

@article{Romero2021GroupES,
  title={Group Equivariant Stand-Alone Self-Attention For Vision},
  author={David W. Romero and Jean-Baptiste Cordonnier},
  journal={ArXiv},
  year={2021},
  volume={abs/2010.00977}
}
We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups. This is achieved by defining positional encodings that are invariant to the action of the group considered. Since the group acts on the positional encoding directly, group equivariant self-attention networks (GSA-Nets) are steerable by nature. Our experiments on vision benchmarks demonstrate consistent improvements of GSA-Nets over non-equivariant self-attention networks. 
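To make the mechanism concrete, here is a minimal numerical sketch for the cyclic group C4 (90-degree rotations). It is an illustrative reconstruction of the idea, not the authors' implementation: global attention over a small square grid replaces local windows to keep it short, and all names (lifting_self_attention, rot_delta) and sizes are made up for the example. The group acts on the relative positional encodings, one attention response is computed per group element, and equivariance is checked numerically.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, C = 6, 8                                       # grid size and channel width
Wq, Wk, Wv = (0.2 * torch.randn(C, C) for _ in range(3))
P = 0.2 * torch.randn(2 * N - 1, 2 * N - 1, C)    # relative positional encodings P(delta)

ys, xs = torch.meshgrid(torch.arange(N), torch.arange(N), indexing="ij")
pos = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
delta = pos[None, :, :] - pos[:, None, :]         # delta[i, j] = pos_j - pos_i

def rot_delta(d, k):
    # The inverse 90-degree rotation acts on offsets as (dy, dx) -> (dx, -dy).
    for _ in range(k % 4):
        d = torch.stack([d[..., 1], -d[..., 0]], dim=-1)
    return d

def lifting_self_attention(f):
    # One attention response per group element r^m: the positional encoding
    # is evaluated at the rotated offsets, which is what makes the layer
    # equivariant (the group acts on the encoding, not on the weights).
    x = f.reshape(-1, C)
    q, k, v = x @ Wq.T, x @ Wk.T, x @ Wv.T
    outs = []
    for m in range(4):
        idx = rot_delta(delta, m) + (N - 1)       # shift offsets to table indices
        p = P[idx[..., 0], idx[..., 1]]           # (N^2, N^2, C)
        scores = q @ k.T + torch.einsum("ic,ijc->ij", q, p)
        outs.append(F.softmax(scores, dim=-1) @ v)
    return torch.stack(outs).reshape(4, N, N, C)  # (group, H, W, C)

f = torch.randn(N, N, C)
out = lifting_self_attention(f)
out_rot = lifting_self_attention(torch.rot90(f, 1, dims=(0, 1)))
# Rotating the input rotates the output spatially and cyclically shifts
# the group axis (the regular representation):
expected = torch.rot90(out.roll(1, dims=0), 1, dims=(1, 2))
print(torch.allclose(out_rot, expected, atol=1e-5))   # True
```

In this sketch, rotating the input yields an output that is spatially rotated with a cyclic shift along the new group axis, which is the steerability property the abstract refers to.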

Citations

CKConv: Continuous Kernel Convolution For Sequential Data
TLDR
Conventional neural architectures for sequential data have important limitations that can be overcome by formulating the convolutional kernels of CNNs as continuous functions, which allows modeling arbitrarily long sequences in a parallel manner, within a single operation, and without relying on any form of recurrence.
6DCNN with roto-translational convolution filters for volumetric data processing
In this work, we introduce the 6D Convolutional Neural Network (6DCNN), designed to tackle the problem of detecting relative positions and orientations of local patterns when processing three-dimensional …
Beyond permutation equivariance in graph networks
We introduce a novel architecture for graph networks which is equivariant to the Euclidean group in n dimensions, and is additionally able to deal with affine transformations. Our model is designed …
DISCO: accurate Discrete Scale Convolutions
TLDR
This work aims for accurate scale-equivariant convolutional neural networks (SE-CNNs) applicable to problems where high granularity of scale and small filter sizes are required, and derives general constraints under which scale convolution remains equivariant to discrete rescaling.
E(n) Equivariant Graph Neural Networks
TLDR
A new model for learning graph neural networks equivariant to rotations, translations, reflections, and permutations, called E(n) Equivariant Graph Neural Networks (EGNNs), is introduced; it does not require computationally expensive higher-order representations in intermediate layers while still achieving competitive or better performance.
Equivariant Networks for Pixelized Spheres
TLDR
This paper shows how to model the interplay between the two levels of symmetry transformations using ideas from group theory, identifies the equivariant linear maps, and introduces equivariant padding that respects these symmetries.
LieTransformer: Equivariant self-attention for Lie Groups
TLDR
The LieTransformer is proposed, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups, and it is competitive with baseline methods on a wide range of tasks.
Symmetry-driven graph neural networks
TLDR
Two graph network architectures are introduced that are equivariant to several types of transformations affecting the node coordinates; they can be vastly more data-efficient than classical graph architectures, are intrinsically equipped with a better inductive bias, and are better at generalising.
Universal Approximation of Functions on Sets
TLDR
A theoretical analysis of Deep Sets is provided, showing that this universal approximation property is only guaranteed if the model's latent space is sufficiently high-dimensional; the analysis indicates that Deep Sets may be viewed as the most efficient incarnation of the Janossy pooling paradigm.

References

Showing 1–10 of 56 references
Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring In Data
TLDR
This work modifies conventional equivariant feature mappings so that they are able to attend to the set of co-occurring transformations in data, and generalizes this notion to act on groups consisting of multiple symmetries.
Equivariance Through Parameter-Sharing
TLDR
This work shows that ϕ_W is equivariant with respect to the G-action iff G explains the symmetries of the network parameters W, and proposes two parameter-sharing schemes to induce the desirable symmetry on W.
General E(2)-Equivariant Steerable CNNs
TLDR
The theory of Steerable CNNs yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces, and it is shown that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations.
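For context, the kernel constraint this TLDR refers to has a standard compact form in the steerable-CNN literature (reproduced here as a reference statement, not a quotation from the paper; ρ_in and ρ_out denote the representations acting on input and output feature fields):

```latex
% Steerable-kernel constraint: a kernel \kappa is admissible iff, for all
% group elements g and positions x, it intertwines the two representations.
\kappa(g x) = \rho_{\mathrm{out}}(g)\,\kappa(x)\,\rho_{\mathrm{in}}(g)^{-1},
\qquad \text{for all } g \in G,\ x \in \mathbb{R}^{2}.
```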
Group Equivariant Capsule Networks
TLDR
Group equivariant capsule networks are presented, a framework that introduces guaranteed equivariance and invariance properties to the capsule network idea and is able to combine the strengths of both approaches in one deep neural network architecture.
Group Equivariant Convolutional Networks
TLDR
Group equivariant Convolutional Neural Networks (G-CNNs) are introduced, a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries and achieves state-of-the-art results on CIFAR10 and rotated MNIST.
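As a concrete illustration of the group convolution idea in this TLDR, here is a minimal, assumed sketch of a p4 "lifting" layer (not code from the cited paper; the function name and sizes are invented): the input is correlated with all four 90-degree rotations of a single filter bank, so the output gains a group axis, and a rotated input produces a rotated, group-shifted output.

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight):
    # Illustrative sketch, not the cited paper's implementation.
    # x: (B, C_in, H, W); weight: (C_out, C_in, k, k) with k odd.
    pad = weight.shape[-1] // 2
    stack = [F.conv2d(x, torch.rot90(weight, m, dims=(2, 3)), padding=pad)
             for m in range(4)]
    return torch.stack(stack, dim=2)   # (B, C_out, 4, H, W)

x = torch.randn(1, 3, 8, 8)
w = torch.randn(5, 3, 3, 3)
y = p4_lifting_conv(x, w)
y_rot = p4_lifting_conv(torch.rot90(x, 1, dims=(2, 3)), w)
# Equivariance check: the response to a rotated input equals the
# spatially rotated, group-axis-shifted response to the original input.
print(torch.allclose(y_rot, torch.rot90(y.roll(1, dims=2), 1, dims=(3, 4)),
                     atol=1e-4))      # True
```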
Scale-Equivariant Steerable Networks
TLDR
This work pays attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera, and introduces the general theory for building scale-equivariant convolutional networks with steerable filters.
Stand-Alone Self-Attention in Vision Models
TLDR
The results establish that stand-alone self-attention is an important addition to the vision practitioner's toolbox and is especially impactful when used in later layers.
On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups
TLDR
It is proved that (given some natural constraints) convolutional structure is not just a sufficient, but also a necessary condition for equivariance to the action of a compact group.
SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
TLDR
The SE(3)-Transformer is introduced, a variant of the self-attention module for 3D point clouds which is equivariant under continuous 3D roto-translations and achieves competitive performance on two real-world datasets, ScanObjectNN and QM9.
RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks
TLDR
The RotDCF framework can be extended to groups other than rotations, providing a general approach which achieves both group equivariance and representation stability at a reduced model size.