Learning Steerable Filters for Rotation Equivariant CNNs

@inproceedings{Weiler2017LearningSF,
  title={Learning Steerable Filters for Rotation Equivariant CNNs},
  author={Maurice Weiler and Fred A. Hamprecht and Martin Storath},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={849-858}
}

In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable… 
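
To make the steering idea concrete, here is a minimal NumPy sketch of the general circular-harmonics construction (an illustration with an assumed Gaussian radial profile, not the authors' code): a filter is learned as a linear combination of atoms tau(r)·e^{ik·phi}, and rotating it reduces to multiplying each expansion coefficient by the phase e^{-ik·theta}.

```python
import numpy as np

def harmonic_atom(size, k, sigma=2.0, theta=0.0):
    """Circular harmonic tau(r) * exp(i k (phi - theta)) on a square grid."""
    c = (size - 1) / 2
    y, x = np.mgrid[:size, :size] - c
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    tau = np.exp(-r**2 / (2 * sigma**2))          # assumed Gaussian radial profile
    return tau * np.exp(1j * k * (phi - theta))

theta = np.pi / 5
weights = {1: 0.7 + 0.2j, 2: -0.4j}               # stand-ins for learned coefficients
rotated = sum(w * harmonic_atom(9, k, theta=theta) for k, w in weights.items())
steered = sum(w * np.exp(-1j * k * theta) * harmonic_atom(9, k) for k, w in weights.items())
print(np.allclose(rotated, steered))              # True: rotation is a pure phase shift
```

Because the phase factor is exact, filter responses at arbitrary orientations come without interpolation error, which is the property steerable-filter approaches exploit.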

Citations

RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks

The RotDCF framework can be extended to groups other than rotations, providing a general approach which achieves both group equivariance and representation stability at a reduced model size.

Implicit Equivariance in Convolutional Networks

The proposed Implicitly Equivariant Networks (IEN) induce equivariance in the layers of a standard CNN model by optimizing a multi-objective loss function that combines the primary loss with an equivariance loss term.
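
A minimal PyTorch sketch of such an equivariance loss term (assuming 90-degree rotations as the transformation of interest; the function name and the exact form of IEN's objective are illustrative):

```python
import torch

def equivariance_penalty(layer, x, k=1):
    """|| layer(rot(x)) - rot(layer(x)) ||^2 for a rotation by k*90 degrees."""
    rot = lambda t: torch.rot90(t, k, dims=(-2, -1))
    return ((layer(rot(x)) - rot(layer(x))) ** 2).mean()

# Training would then minimize, e.g.,
#   loss = task_loss + lam * equivariance_penalty(some_layer, images)
```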

Nonlinearities in Steerable SO(2)-Equivariant CNNs

This paper develops a novel FFT-based algorithm for computing representations of non-linearly transformed activations while maintaining band-limitation and obtains results that compare favorably to the state-of-the-art in terms of accuracy while permitting continuous symmetry and exact equivariance.
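
The core mechanism can be sketched in NumPy as follows (illustrative: the function name, the sample count, and the plain projection back onto the band are assumptions; a pointwise nonlinearity generically creates higher frequencies, and controlling that aliasing carefully is the paper's contribution):

```python
import numpy as np

def fourier_relu(coeffs, n_samples=64):
    """Apply ReLU to a band-limited function on SO(2) given by Fourier
    coefficients c_{-K}, ..., c_{K}, then project back onto the same band."""
    K = (len(coeffs) - 1) // 2
    freqs = np.arange(-K, K + 1)
    angles = 2 * np.pi * np.arange(n_samples) / n_samples
    samples = np.exp(1j * np.outer(angles, freqs)) @ coeffs   # evaluate on the circle
    samples = np.maximum(samples.real, 0.0)                   # pointwise ReLU
    # discrete projection back onto frequencies -K..K (approximate band-limitation)
    return np.exp(-1j * np.outer(freqs, angles)) @ samples / n_samples
```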

Scale Steerable Filters for Locally Scale-Invariant Convolutional Neural Networks

A scale-steerable filter basis for locally scale-invariant CNNs, denoted log-radial harmonics, is proposed; under test-time data distortions it generalizes on par with global affine transformation estimation methods such as Spatial Transformers.
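
The scale-steering property underlying such a basis can be checked in a few lines (an illustrative sketch; the radial envelope of the actual log-radial harmonics is omitted):

```python
import numpy as np

r = np.linspace(0.5, 8.0, 200)        # radial samples, avoiding r = 0

def log_radial(m, scale=1.0):
    """exp(i m log r), dilated by `scale` (radial envelope omitted)."""
    return np.exp(1j * m * np.log(r / scale))

m, s = 3, 1.7
# Dilating the atom only multiplies it by the phase exp(-i m log s):
print(np.allclose(log_radial(m, s),
                  np.exp(-1j * m * np.log(s)) * log_radial(m)))   # True
```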

Scale-Equivariant Steerable Networks

This work pays attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera, and introduces the general theory for building scale-equivariant convolutional networks with steerable filters.

Scale Equivariant CNNs with Scale Steerable Filters

A scale-equivariant network is built using scale-steerable filters; it improves performance by about 2% over other comparable scale-equivariant and scale-invariant methods on the FMNIST-scale dataset.

Deformation Robust Roto-Scale-Translation Equivariant CNNs

A roto-scale-translation equivariant CNN (RST-CNN), which is guaranteed to achieve equivariance jointly over these three groups via coupled group convolutions, is presented.

General E(2)-Equivariant Steerable CNNs

The theory of Steerable CNNs yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces, and it is shown that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations.
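
In standard notation (a restatement of the well-known steerable-kernel constraint, not a quotation from the paper), the constraint on a kernel kappa reads

```latex
\kappa(g\,x) \;=\; \rho_{\mathrm{out}}(g)\,\kappa(x)\,\rho_{\mathrm{in}}(g)^{-1}
\qquad \text{for all } g \in O(2),\; x \in \mathbb{R}^2 ,
```

and decomposing rho_in and rho_out into irreducible representations splits this into independent constraints on the corresponding blocks of kappa.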

Dense Steerable Filter CNNs for Exploiting Rotational Symmetry in Histology Images

Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework are proposed that achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in the area of computational pathology.

Efficient Equivariant Network

This work proposes a general framework for previous equivariant models, which includes G-CNNs and equivariant self-attention layers as special cases; it explicitly decomposes the feature aggregation operation into a kernel generator and an encoder, and decouples the spatial and extra geometric dimensions in the computation.
...

References

Showing 1-10 of 33 references

Harmonic Networks: Deep Translation and Rotation Equivariance

H-Nets are presented, a CNN exhibiting equivariance to patch-wise translation and 360-rotation, and it is demonstrated that their layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization.

Rotation Equivariant Vector Field Networks

The Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network architecture encoding rotation equivariance, invariance and covariance, is proposed and a modified convolution operator relying on this representation to obtain deep architectures is developed.
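
A hedged sketch of the vector-field idea (illustrative names and SciPy-based rotation; RotEqNet's implementation differs in detail): apply several rotated copies of a filter, keep the strongest response per pixel, and encode its orientation as a 2D vector.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def orientation_pool(image, filt, n_rot=8):
    """Max response over n_rot rotated filter copies, returned as a
    per-pixel 2D vector field (magnitude + winning orientation)."""
    angles = 360.0 * np.arange(n_rot) / n_rot
    resp = np.stack([correlate2d(image, rotate(filt, a, reshape=False),
                                 mode='same') for a in angles])
    mag = resp.max(axis=0)
    theta = 2 * np.pi * resp.argmax(axis=0) / n_rot
    return np.stack([mag * np.cos(theta), mag * np.sin(theta)])
```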

Dynamic Steerable Blocks in Deep Residual Networks

This work investigates the generalized notion of frames designed with image properties in mind, as alternatives to this parametrization, and shows that frame-based ResNets and DenseNets can consistently improve performance on CIFAR-10+ while having additional pleasant properties like steerability.

Warped Convolutions: Efficient Invariance to Spatial Transformations

This work presents a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy, consisting of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes.
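
A minimal sketch of the warp for the rotation/scale case (illustrative; the paper treats general two-parameter spatial transformations): a fixed log-polar resampling turns rotations about the center into circular shifts along the angular axis, and scalings into shifts along the log-radial axis, after which an ordinary convolution is approximately equivariant to both.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_warp(image, n_r=32, n_phi=64):
    """Resample `image` on a log-polar grid about its center."""
    c = (np.array(image.shape, dtype=float) - 1) / 2
    rs = np.exp(np.linspace(0.0, np.log(c.min()), n_r))   # log-spaced radii
    phis = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    R, P = np.meshgrid(rs, phis, indexing='ij')
    rows = c[0] + R * np.sin(P)
    cols = c[1] + R * np.cos(P)
    return map_coordinates(image, [rows, cols], order=1, mode='nearest')
```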

Steerable CNNs

This paper presents Steerable Convolutional Neural Networks, an efficient and flexible class of equivariant convolutional networks, and shows how the parameter cost of a steerable filter bank depends on the types of the input and output features.

Deep Symmetry Networks

Deep symmetry networks (symnets) are introduced: a generalization of convnets that forms feature maps over arbitrary symmetry groups and uses kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension.

Learning rotation-aware features: From invariant priors to equivariant descriptors

Uwe Schmidt, S. Roth · 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012
This paper describes a general framework for incorporating invariance to linear image transformations into product models for feature learning and shows the advantages of this approach in learning rotation-invariant image priors and in building rotation-equivariant and invariant descriptors of learned features.

Exploiting Cyclic Symmetry in Convolutional Neural Networks

This work introduces four operations which can be inserted into neural network models as layers and combined to make these models partially equivariant to rotations, while enabling parameter sharing across different orientations.
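
A sketch of two of those operations (the names follow the paper's terminology, the implementations are illustrative): "slice" stacks the four 90-degree rotations of a batch so that one set of filters sees every orientation, and "pool" merges the four orientation branches again, e.g. just before the output.

```python
import numpy as np

def cyclic_slice(x):
    """(B, H, W) -> (4B, H, W): stack all four 90-degree rotations of the
    batch so a single set of filters processes every orientation."""
    return np.concatenate([np.rot90(x, k, axes=(1, 2)) for k in range(4)])

def cyclic_pool(feats):
    """(4B, D) -> (B, D): merge the four orientation branches near the
    output by averaging (an illustrative choice of pooling function)."""
    return np.stack(np.split(feats, 4)).mean(axis=0)
```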

TI-POOLING: Transformation-Invariant Pooling for Feature Learning in Convolutional Neural Networks

A deep neural network topology is presented that incorporates a simple-to-implement transformation-invariant pooling operator (TI-POOLING), able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes.
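
A minimal sketch of the idea (hypothetical `net`, with 90-degree rotations standing in for the general transformation sets the paper supports): run weight-shared branches on transformed copies of the input and take an element-wise maximum over the branch outputs.

```python
import torch

def ti_pool(net, x, n_rot=4):
    """Run weight-shared branches of `net` on rotated copies of `x` and
    take an element-wise max over the branch outputs."""
    outs = [net(torch.rot90(x, k, dims=(-2, -1))) for k in range(n_rot)]
    return torch.stack(outs).max(dim=0).values
```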

Group Equivariant Convolutional Networks

Group equivariant Convolutional Neural Networks (G-CNNs) are introduced: a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries and achieves state-of-the-art results on CIFAR-10 and rotated MNIST.
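
For the rotation case, the first-layer ("lifting") group correlation can be written in a few lines of NumPy (an illustrative C4 example, not the authors' implementation); note how rotating the input permutes the orientation channels, which is exactly the equivariance being exploited.

```python
import numpy as np
from scipy.signal import correlate2d

def c4_lift(image, filt):
    """First-layer group correlation on C4: one output channel per
    90-degree rotation of the same filter."""
    return np.stack([correlate2d(image, np.rot90(filt, k), mode='valid')
                     for k in range(4)])

img, filt = np.random.rand(16, 16), np.random.rand(3, 3)
out, out_rot = c4_lift(img, filt), c4_lift(np.rot90(img), filt)
# Rotating the input rotates every map and cyclically shifts the
# orientation channels:
print(all(np.allclose(out_rot[k], np.rot90(out[(k - 1) % 4]))
          for k in range(4)))   # True
```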