Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration

@article{Rath2022ImprovingTS,
  title={Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration},
  author={Matthias Rath and Alexandru Condurache},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.03967}
}
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks. This makes them applicable to practically important use-cases where training data is scarce. Rather than being learned, this knowledge can be embedded by enforcing invariance to those transformations. Invariance can be imposed using group-equivariant convolutions followed by a pooling operation. For rotation-invariance, previous work… 
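The truncated abstract sketches the basic recipe of imposing invariance with group-equivariant convolutions followed by a pooling step. Below is a minimal, hedged illustration of that recipe for the cyclic rotation group C4 in PyTorch; it is not the paper's invariant-integration layer, and all names are illustrative.

import torch
import torch.nn.functional as F

def c4_lift_conv(x, weight):
    """Lifting convolution: correlate x (B, C_in, H, W) with the four 90-degree
    rotations of weight (C_out, C_in, k, k) -> (B, C_out, 4, H', W')."""
    responses = [F.conv2d(x, torch.rot90(weight, k, dims=(-2, -1))) for k in range(4)]
    return torch.stack(responses, dim=2)

def rotation_invariant_features(x, weight):
    """Pool the group dimension and space away: the result is unchanged when the
    input is rotated by any multiple of 90 degrees (exact here, since no padding is used)."""
    y = c4_lift_conv(x, weight)
    return y.amax(dim=2).mean(dim=(-2, -1))

x = torch.randn(2, 3, 32, 32)
w = torch.randn(16, 3, 3, 3)
print(rotation_invariant_features(x, w).shape)  # torch.Size([2, 16])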

Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs

Interspace pruning (IP) is introduced as a general tool for improving existing pruning methods; it substantially outperforms standard pruning (SP) at equal runtime and parameter cost, and its gains are shown to stem from improved trainability and superior generalization ability.
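One way to read the "adaptive filter representations" above: express each convolution filter as a linear combination of a small trainable basis and apply the sparsity mask to the combination coefficients rather than to the spatial weights. The sketch below follows only that reading; all class and parameter names are hypothetical, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InterspaceConv2d(nn.Module):
    """Conv layer whose k x k filters are linear combinations of a trainable basis;
    a pruning mask acts on the coefficients instead of the spatial weights."""
    def __init__(self, in_ch, out_ch, k=3, basis_size=9):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(basis_size, k, k))           # shared, trainable filter basis
        self.coeff = nn.Parameter(torch.randn(out_ch, in_ch, basis_size))  # per-filter coefficients
        self.register_buffer("mask", torch.ones_like(self.coeff))          # pruning mask over coefficients

    def forward(self, x):
        weight = torch.einsum("oib,bkl->oikl", self.coeff * self.mask, self.basis)
        return F.conv2d(x, weight, padding=self.basis.shape[-1] // 2)

layer = InterspaceConv2d(3, 16)
print(layer(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])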

Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey

This work surveys methods that reduce the number of trained weights in deep learning models throughout training, and proposes dimensionality-reduced training as an underlying mathematical model that covers both pruning and freezing during training.

Equivariant 3D-Conditional Diffusion Models for Molecular Linker Design

DiffLinker, an E(3)-equivariant 3D-conditional diffusion model for molecular linker design, is proposed and shown to outperform other methods on standard datasets, generating more diverse and synthetically accessible molecules.

References

Showing 1-10 of 59 references

TI-POOLING: Transformation-Invariant Pooling for Feature Learning in Convolutional Neural Networks

A deep neural network topology is presented that incorporates a simple-to-implement transformation-invariant pooling operator (TI-POOLING), which efficiently handles prior knowledge on nuisance variations in the data, such as rotation or scale changes.
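A hedged sketch of this idea (not the authors' code): run a shared network on several transformed copies of the input and max-pool the resulting features over the transformation set, here the four exact 90-degree rotations.

import torch
import torch.nn as nn

def ti_pool(net, x):
    """Apply the same network to the four 90-degree rotations of x and take the
    element-wise maximum over rotations, so the pooled features are invariant
    to those rotations of the input."""
    feats = torch.stack([net(torch.rot90(x, k, dims=(-2, -1))) for k in range(4)], dim=0)
    return feats.max(dim=0).values

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
x = torch.randn(8, 1, 28, 28)
print(ti_pool(net, x).shape)  # torch.Size([8, 64])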

Improved Regularization of Convolutional Neural Networks with Cutout

This paper shows that the simple regularization technique of randomly masking out square regions of the input during training, called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
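A minimal sketch of that augmentation, assuming image batches of shape (B, C, H, W); the patch size and zero fill value are illustrative choices.

import torch

def cutout(images, size=16):
    """Zero out a randomly centered size x size square in each image of the batch."""
    b, _, h, w = images.shape
    out = images.clone()
    for i in range(b):
        cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
        x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
        out[i, :, y0:y1, x0:x1] = 0.0
    return out

augmented = cutout(torch.rand(4, 3, 32, 32))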

Locally Scale-Invariant Convolutional Neural Networks

A simple model is presented that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters; on a modified MNIST dataset it is shown that, when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.
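A rough sketch of the locally scale-invariant convolution described here (an illustration, not the authors' implementation): apply the same filters to rescaled copies of the input, resize the responses back to a common resolution, and take the per-pixel maximum over scales.

import torch
import torch.nn.functional as F

def scale_invariant_conv(x, weight, scales=(0.75, 1.0, 1.25)):
    """x: (B, C_in, H, W), weight: (C_out, C_in, k, k). Convolve at several scales,
    resize responses back to the input resolution, and max-pool over scales."""
    h, w = x.shape[-2:]
    responses = []
    for s in scales:
        xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
        ys = F.conv2d(xs, weight, padding=weight.shape[-1] // 2)
        responses.append(F.interpolate(ys, size=(h, w), mode="bilinear", align_corners=False))
    return torch.stack(responses, dim=0).amax(dim=0)

x = torch.randn(2, 3, 32, 32)
w = torch.randn(16, 3, 3, 3)
print(scale_invariant_conv(x, w).shape)  # torch.Size([2, 16, 32, 32])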

SNIP: Single-shot Network Pruning based on Connection Sensitivity

This work presents a new approach that prunes a given network once at initialization prior to training, and introduces a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task.
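A compact sketch of the connection-sensitivity criterion as summarized above: score every weight by |weight x gradient of the loss| on one mini-batch at initialization and keep only the top-scoring fraction. Function names and the 90% sparsity level are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

def snip_masks(model, x, y, sparsity=0.9):
    """Return a {parameter name: binary mask} dict keeping the (1 - sparsity) fraction
    of weights with the largest |weight * gradient| on a single mini-batch."""
    loss = F.cross_entropy(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    names = [n for n, p in model.named_parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    scores = {n: (p * g).abs() for n, p, g in zip(names, params, grads)}
    flat = torch.cat([s.flatten() for s in scores.values()])
    keep = int((1.0 - sparsity) * flat.numel())
    threshold = torch.topk(flat, keep).values.min()
    return {n: (s >= threshold).float() for n, s in scores.items()}

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 100), nn.ReLU(), nn.Linear(100, 10))
masks = snip_masks(model, torch.randn(32, 1, 28, 28), torch.randint(10, (32,)))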

Harmonic Networks: Deep Translation and Rotation Equivariance

H-Nets are presented, a CNN exhibiting equivariance to patch-wise translation and 360° rotation, and it is demonstrated that their layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization.

Learning Steerable Filters for Rotation Equivariant CNNs

Steerable Filter CNNs (SFCNNs) are developed which achieve joint equivariance under translations and rotations by design and generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters.
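As a hedged illustration of the "linear combination of a system of atomic filters" idea: the layer below learns only the combination coefficients while the filter basis stays fixed. The real SFCNN basis consists of steerable circular harmonics; a random orthogonal basis stands in for it here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisConv2d(nn.Module):
    """Convolution whose filters are learned linear combinations of a fixed filter basis."""
    def __init__(self, in_ch, out_ch, k=5, n_basis=10):
        super().__init__()
        basis = torch.linalg.qr(torch.randn(k * k, k * k))[0][:n_basis].reshape(n_basis, k, k)
        self.register_buffer("basis", basis)                                  # fixed atomic filters
        self.coeff = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, n_basis))  # learned coefficients

    def forward(self, x):
        weight = torch.einsum("oib,bkl->oikl", self.coeff, self.basis)
        return F.conv2d(x, weight, padding=self.basis.shape[-1] // 2)

layer = BasisConv2d(3, 8)
print(layer(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 8, 32, 32])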

Spatial Transformer Networks

This work introduces a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network, and can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps.
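A minimal sketch of that mechanism: a small localization network predicts a 2x3 affine matrix, which is turned into a sampling grid and applied to the input. The layer sizes are illustrative; the grid and sampling calls are standard PyTorch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Predict an affine transform from the input and resample the input with it."""
    def __init__(self, channels, h, w):
        super().__init__()
        self.loc = nn.Sequential(nn.Flatten(), nn.Linear(channels * h * w, 32),
                                 nn.ReLU(), nn.Linear(32, 6))
        # Start from the identity transform so the module initially passes data through unchanged.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

stn = SpatialTransformer(1, 28, 28)
print(stn(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 1, 28, 28])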

Scale Steerable Filters for Locally Scale-Invariant Convolutional Neural Networks

A scale-steerable filter basis for locally scale-invariant CNNs, denoted log-radial harmonics, is proposed; it shows generalization on par with global affine transformation estimation methods such as Spatial Transformers in response to test-time data distortions.

Rotation Equivariant Vector Field Networks

The Rotation Equivariant Vector Field Network (RotEqNet), a convolutional neural network architecture encoding rotation equivariance, invariance and covariance, is proposed, and a modified convolution operator relying on this representation is developed to obtain deep architectures.
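A rough sketch of that modified convolution operator (an approximation, not the authors' code): convolve with rotated copies of each filter and keep, per location, the strongest response together with the rotation that produced it, i.e., a simple vector-field output. Only the four exact 90-degree rotations are used below; RotEqNet rotates filters in finer steps via interpolation.

import torch
import torch.nn.functional as F

def rot_conv_vector_field(x, weight):
    """Return per-pixel (magnitude, rotation index) of the strongest response over
    the four 90-degree rotations of the filters."""
    responses = torch.stack(
        [F.conv2d(x, torch.rot90(weight, k, dims=(-2, -1)), padding=1) for k in range(4)],
        dim=0)
    return responses.max(dim=0)  # namedtuple: values = magnitude, indices = orientation

x = torch.randn(2, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
mag, ang = rot_conv_vector_field(x, w)
print(mag.shape, ang.shape)  # torch.Size([2, 8, 32, 32]) torch.Size([2, 8, 32, 32])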

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

The results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, they achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features.
...