Corpus ID: 4807207

Learned Deformation Stability in Convolutional Neural Networks

@article{Ruderman2018LearnedDS,
  title={Learned Deformation Stability in Convolutional Neural Networks},
  author={Avraham Ruderman and Neil C. Rabinowitz and Ari S. Morcos and Daniel Zoran},
  journal={ArXiv},
  year={2018},
  volume={abs/1804.04438}
}
Conventional wisdom holds that interleaved pooling layers in convolutional neural networks lead to stability to small translations and deformations. In this work, we investigate this claim empirically. We find that while pooling confers stability to deformation at initialization, the deformation stability at each layer changes significantly over the course of training and even decreases in some layers, suggesting that deformation stability is not unilaterally helpful. Surprisingly, after… 
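
To make the quantity under investigation concrete, here is a minimal sketch of how per-layer deformation sensitivity can be measured: compare a layer's response to an image against its response to a smoothly deformed copy. This assumes a PyTorch setup; the smooth random flow field, the `layer_output_fn` callable, and the normalized-distance measure are illustrative assumptions rather than the paper's exact protocol.

```python
# A minimal sketch (illustrative assumptions, not the paper's exact protocol)
# of measuring a layer's sensitivity to small smooth deformations in PyTorch.
import torch
import torch.nn.functional as F

def random_deformation_grid(n, h, w, magnitude=0.05):
    """Identity sampling grid plus a smooth random flow field."""
    ys = torch.linspace(-1.0, 1.0, h)
    xs = torch.linspace(-1.0, 1.0, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    base = torch.stack([gx, gy], dim=-1).expand(n, h, w, 2)
    # Upsampling low-resolution noise yields a smooth (low-frequency) flow.
    noise = torch.randn(n, 2, 4, 4) * magnitude
    flow = F.interpolate(noise, size=(h, w), mode="bilinear", align_corners=False)
    return base + flow.permute(0, 2, 3, 1)

def deformation_sensitivity(layer_output_fn, images, magnitude=0.05):
    """Median normalized distance between a layer's responses to the original
    and deformed images; layer_output_fn is a placeholder that returns a
    chosen layer's activations (e.g. captured via a forward hook)."""
    n, _, h, w = images.shape
    grid = random_deformation_grid(n, h, w, magnitude)
    deformed = F.grid_sample(images, grid, align_corners=False)
    a = layer_output_fn(images).flatten(1)
    b = layer_output_fn(deformed).flatten(1)
    dist = (a - b).norm(dim=1) / a.norm(dim=1).clamp_min(1e-8)
    return dist.median().item()
```

Tracking this quantity per layer, at initialization and across training checkpoints, is what reveals the trends described in the abstract.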

Citations

Structure-Aware Convolutional Neural Networks
TLDR
By replacing the classical convolution in CNNs with structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established, and there is strong evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering.
Adaloss: Adaptive Loss Function for Landmark Localization
TLDR
This paper introduces "Adaloss", an objective function that adapts itself during training by updating the target precision based on training statistics, and shows improved training stability and better localization accuracy at inference.
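
The adaptive mechanism named in this summary can be pictured with a hedged sketch: train against a loose (high-variance) Gaussian target heatmap and tighten it as the running loss stabilizes. The shrink rule, the `make_target` helper, and all constants below are hypothetical illustrations, not the paper's actual update.

```python
# A hedged sketch of an adaptive-target loss; the update rule and the
# make_target helper are hypothetical, not Adaloss's actual formulation.
import torch.nn.functional as F

class AdaptiveTargetLoss:
    def __init__(self, sigma=8.0, min_sigma=1.0, shrink=0.9):
        self.sigma, self.min_sigma, self.shrink = sigma, min_sigma, shrink
        self.prev_loss = None

    def step(self, pred_heatmap, coords, make_target):
        # make_target (hypothetical) renders a Gaussian heatmap of width sigma.
        target = make_target(coords, self.sigma)
        loss = F.mse_loss(pred_heatmap, target)
        # Tighten the target precision once the loss stops improving
        # (an illustrative criterion standing in for "training statistics").
        if self.prev_loss is not None and loss.item() >= 0.99 * self.prev_loss:
            self.sigma = max(self.min_sigma, self.sigma * self.shrink)
        self.prev_loss = loss.item()
        return loss
```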
Opening the black box of deep learning
TLDR
This dissertation proposes that a deep neural network is a physical system, examines deep learning from three perspectives (microscopic, macroscopic, and physical-world views), and uses physics principles to answer multiple theoretical puzzles in deep learning.
Aligned to the Object, Not to the Image: A Unified Pose-Aligned Representation for Fine-Grained Recognition
  • Pei Guo, Ryan Farrell
  • Computer Science
    2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
  • 2019
TLDR
An algorithm that performs pose estimation and forms a unified object representation by concatenating pose-aligned region features, which is then fed into a classification network, achieving state-of-the-art results on two fine-grained datasets.
Understanding Convolutional Neural Networks for Text Classification
TLDR
An analysis into the inner workings of convolutional neural networks for processing text shows that filters may capture several different semantic classes of n-grams by using different activation patterns, and that global max-pooling induces behavior which separates important n-grams from the rest.
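
The mechanism described here is easy to see in a toy example: 1-D convolutional filters score every n-gram window, and global max-pooling keeps only each filter's strongest match. All sizes below are arbitrary choices for illustration.

```python
# A toy illustration of conv filters as n-gram detectors with global
# max-pooling; all shapes and sizes are arbitrary.
import torch
import torch.nn as nn

emb = nn.Embedding(10_000, 128)            # 10k-token vocabulary
conv = nn.Conv1d(128, 64, kernel_size=3)   # each of 64 filters scores every trigram

tokens = torch.randint(0, 10_000, (8, 50))  # batch of 8 sentences, 50 tokens each
x = emb(tokens).transpose(1, 2)             # (batch, channels, length)
scores = conv(x)                            # (8, 64, 48): one score per trigram
pooled, positions = scores.max(dim=2)       # global max-pool: best n-gram per filter
```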
Embedding Physical Augmentation and Wavelet Scattering Transform to Generative Adversarial Networks for Audio Classification with Limited Training Resources
  • Teh Kah Kuan, Tran Huy Dat
  • Computer Science
    ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2019
TLDR
A novel GAN method is proposed that embeds physical augmentation and wavelet scattering transform in processing to improve classification accuracy when training with limited resources.
Deep insight: Convolutional neural network and its applications for COVID-19 prognosis
Application of Computer Vision and Deep Learning in Breast Cancer Assisted Diagnosis
TLDR
Using artificial intelligence techniques such as computer vision and deep learning, an automated method is established for diagnosing breast cancer from B-mode ultrasound images, which can quickly improve the diagnostic accuracy of front-line medical staff and reduce the skill gap between urban and rural doctors.

References

Showing 1-10 of 34 references
Understanding deep learning requires rethinking generalization
TLDR
These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth-two neural networks already have perfect finite-sample expressivity.
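
The central experiment is simple to state in code: replace the training labels with uniformly random ones and train unchanged; the cited work reports that standard networks still drive training error to zero. A minimal sketch with placeholder sizes:

```python
# A minimal sketch of the randomization test; sizes are placeholders.
import torch

n, num_classes = 50_000, 10                          # e.g. a CIFAR-10-sized set
random_labels = torch.randint(0, num_classes, (n,))  # no relation to the inputs
# Training on (inputs, random_labels) then proceeds exactly as with true labels.
```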
Geometric Robustness of Deep Networks: Analysis and Improvement
TLDR
This work proposes ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks, builds on it to propose a new adversarial training scheme, and shows its effectiveness in improving the invariance properties of deep neural networks.
On the importance of single directions for generalization
TLDR
It is found that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.
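
Analyses of this kind typically rely on single-unit ablation; here is a hedged sketch of one way to do it in PyTorch, where `model`, `evaluate`, and the chosen layer are placeholders:

```python
# A hedged sketch of single-unit ablation via a forward hook; the model and
# evaluation function are placeholders.
def ablate_unit(module, unit_index):
    """Zero channel `unit_index` of `module`'s output until the hook is removed."""
    def hook(_module, _inputs, output):
        output = output.clone()
        output[:, unit_index] = 0.0  # silence one unit / feature map
        return output                # returned value replaces the output
    return module.register_forward_hook(hook)

# Usage sketch:
# handle = ablate_unit(model.layer3, unit_index=7)
# ablated_accuracy = evaluate(model, test_loader)  # placeholder evaluation
# handle.remove()                                  # restore the unit
```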
Group Invariance and Stability to Deformations of Deep Convolutional Representations
TLDR
It is shown that the signal representation is stable, and that models from this functional space, such as a large class of convolutional neural networks with homogeneous activation functions, may enjoy the same stability.
Striving for Simplicity: The All Convolutional Net
TLDR
It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
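
The substitution this finding describes is direct; a minimal sketch contrasting the two downsampling choices (channel counts are arbitrary):

```python
# Fixed max-pooling versus a learned strided convolution, both downsampling 2x.
import torch.nn as nn

pooled = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),               # fixed 2x downsampling
)
all_conv = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # learned 2x
)
```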
Understanding image representations by measuring their equivariance and equivalence
  • Karel Lenc, A. Vedaldi
  • Computer Science
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
TLDR
Three key mathematical properties of representations (equivariance, invariance, and equivalence) are investigated and applied to popular representations to reveal insightful aspects of their structure, including clarifying at which layers in a CNN certain geometric invariances are achieved.
On the Emergence of Invariance and Disentangling in Deep Representations
TLDR
It is shown that invariance in a deep neural network is equivalent to minimality of the representation it computes, and can be achieved by stacking layers and injecting noise in the computation, under realistic and empirically validated assumptions.
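
The construction mentioned here, stacking layers and injecting noise, can be pictured with a generic sketch using dropout noise between layers; this is an illustration of the idea, not the cited paper's formal construction:

```python
# A generic illustration of stacked layers with injected noise (dropout);
# not the cited paper's formal construction.
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.2),  # noise after layer 1
    nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p=0.2),  # noise after layer 2
    nn.Linear(256, 10),
)
```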
Manitest: Are classifiers really invariant?
TLDR
The Manitest method, built on the efficient Fast Marching algorithm, is proposed to compute the invariance of classifiers; it quantifies in particular the importance of data augmentation for learning invariance from data, and the increased invariance of convolutional neural networks with depth.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
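
The design principle is concrete: stack small 3x3 filters so that depth, rather than filter size, supplies the receptive field. A minimal sketch of one such block (channel count arbitrary):

```python
# Two stacked 3x3 convolutions cover the same receptive field as one 5x5,
# with fewer parameters and an extra nonlinearity; channel count is arbitrary.
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
)
```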
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
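
The residual formulation can be shown in a few lines: the block learns a residual F(x) and adds it back to its input through an identity shortcut. A minimal sketch:

```python
# A minimal residual block: output = ReLU(x + F(x)), with F two 3x3 convs.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # identity shortcut plus learned residual
```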