Learned Deformation Stability in Convolutional Neural Networks
@article{Ruderman2018LearnedDS,
  title   = {Learned Deformation Stability in Convolutional Neural Networks},
  author  = {Avraham Ruderman and Neil C. Rabinowitz and Ari S. Morcos and Daniel Zoran},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1804.04438}
}
Conventional wisdom holds that interleaved pooling layers in convolutional neural networks lead to stability to small translations and deformations. In this work, we investigate this claim empirically. We find that while pooling confers stability to deformation at initialization, the deformation stability at each layer changes significantly over the course of training and even decreases in some layers, suggesting that deformation stability is not unilaterally helpful. Surprisingly, after…
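The claim being tested can be made concrete as a sensitivity measurement: compare a network's features for an image against its features for a slightly deformed copy. A rough numpy sketch of that idea follows; it is illustrative only, not the paper's protocol (the one-layer conv-plus-pool "network", the 3x3 kernel, and the 1-pixel shift used as the "deformation" are all assumptions for the sake of a runnable example).

```python
import numpy as np

def avg_pool(x, k=2):
    """k-by-k average pooling (stride k) on a 2-D feature map."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def features(img, kernel):
    """Toy one-layer 'CNN': valid cross-correlation, ReLU, then pooling."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return avg_pool(np.maximum(out, 0.0))

def deformation_sensitivity(img, deform, kernel):
    """Cosine distance between features of an image and its deformed copy.

    0 means the representation is perfectly stable to the deformation.
    """
    a = features(img, kernel).ravel()
    b = features(deform(img), kernel).ravel()
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
kernel = rng.standard_normal((3, 3))
shift = lambda x: np.roll(x, 1, axis=1)  # 1-pixel translation as the "deformation"
s = deformation_sensitivity(img, shift, kernel)
```

Tracking a quantity like `s` per layer, before and after training, is the kind of measurement that can reveal stability changing over the course of training.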
9 Citations
Structure-Aware Convolutional Neural Networks
- Computer Science · NeurIPS
- 2018
By replacing the classical convolution in CNNs with structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established, and there is strong evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering.
Adaloss: Adaptive Loss Function for Landmark Localization
- Computer Science · ArXiv
- 2019
This paper introduces "Adaloss", an objective function that adapts itself during the training by updating the target precision based on the training statistics and shows improved stability in training and better localization accuracy during inference.
Opening the black box of deep learning
- Computer Science, Physics · ArXiv
- 2018
This dissertation treats deep neural networks as physical systems, examines deep learning from three perspectives (microscopic, macroscopic, and physical-world views), and uses physics principles to answer several theoretical puzzles in deep learning.
Aligned to the Object, Not to the Image: A Unified Pose-Aligned Representation for Fine-Grained Recognition
- Computer Science · 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
- 2019
An algorithm that performs pose estimation and forms a unified object representation by concatenating pose-aligned region features, which is then fed into a classification network, achieving state-of-the-art results on two fine-grained datasets.
Understanding Convolutional Neural Networks for Text Classification
- Computer Science · BlackboxNLP@EMNLP
- 2018
An analysis of the inner workings of convolutional neural networks for processing text shows that filters may capture several different semantic classes of n-grams by using different activation patterns, and that global max-pooling induces behavior which separates important n-grams from the rest.
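The role of global max-pooling described above can be sketched in a few lines of numpy: a filter is slid over every n-gram position, and the pooling step keeps only the strongest activation, so one n-gram "speaks for" the whole sentence. The tiny vocabulary, embedding size, and bigram filter below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "movie": 1, "was": 2, "not": 3, "good": 4}
emb = rng.standard_normal((len(vocab), 4))   # toy word embeddings (dim 4)
filt = rng.standard_normal((2, 4))           # one bigram (2-gram) filter

def ngram_scores(tokens):
    """Slide the bigram filter over the sentence: one score per position."""
    x = emb[[vocab[t] for t in tokens]]
    return np.array([np.sum(x[i:i + 2] * filt) for i in range(len(tokens) - 1)])

scores = ngram_scores(["the", "movie", "was", "not", "good"])
best = int(np.argmax(scores))   # position of the strongest bigram
feature = scores[best]          # global max-pooling keeps only this value
```

Only `feature` reaches the classifier, which is why the pooled filter acts as a detector for its single most important n-gram.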
Embedding Physical Augmentation and Wavelet Scattering Transform to Generative Adversarial Networks for Audio Classification with Limited Training Resources
- Computer Science · ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2019
A novel GAN method is proposed that embeds physical augmentation and the wavelet scattering transform in its processing to improve classification accuracy when training with limited resources.
Taxonomy, state-of-the-art, challenges and applications of visual understanding: A review
- Computer Science, Art · Comput. Sci. Rev.
- 2021
Deep insight: Convolutional neural network and its applications for COVID-19 prognosis
- Medicine, Computer Science · Biomedical Signal Processing and Control
- 2021
Application of Computer Vision and Deep Learning in Breast Cancer Assisted Diagnosis
- Computer Science · ICMLSC 2019
- 2019
Using artificial intelligence techniques such as computer vision and deep learning, an automated method is established to diagnose breast cancer from B-mode ultrasound images, which can quickly improve the diagnostic accuracy of front-line medical staff and reduce the gap in operating skill between urban and rural doctors.
References
Showing 1-10 of 34 references
Understanding deep learning requires rethinking generalization
- Computer Science · ICLR
- 2017
These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth-two neural networks already have perfect finite-sample expressivity.
Geometric Robustness of Deep Networks: Analysis and Improvement
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes ManiFool, a simple yet scalable algorithm to measure the invariance of deep networks, builds on it to propose a new adversarial training scheme, and shows its effectiveness in improving the invariance properties of deep neural networks.
On the importance of single directions for generalization
- Computer Science · ICLR
- 2018
It is found that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.
Group Invariance and Stability to Deformations of Deep Convolutional Representations
- Mathematics, Computer Science · ArXiv
- 2017
It is shown that the signal representation is stable, and that models from this functional space, such as a large class of convolutional neural networks with homogeneous activation functions, may enjoy the same stability.
Striving for Simplicity: The All Convolutional Net
- Computer Science · ICLR
- 2015
It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
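The replacement described in that finding is purely architectural: a stride-1 convolution followed by 2x2 max-pooling is swapped for a single convolution with stride 2, which downsamples by the same factor. A minimal numpy sketch of the shape equivalence, assuming a single-channel input and a 2x2 kernel (both invented for illustration):

```python
import numpy as np

def conv2d(x, k, stride=1):
    """Valid 2-D cross-correlation with a given stride."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i * stride:i * stride + kh,
                                 j * stride:j * stride + kw] * k)
    return out

def max_pool(x, k=2):
    """k-by-k max pooling with stride k."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((9, 9))
k = rng.standard_normal((2, 2))

pooled = max_pool(conv2d(x, k))    # conv (stride 1) followed by 2x2 max-pool
strided = conv2d(x, k, stride=2)   # single conv with stride 2
```

Both paths reduce spatial resolution by a factor of two; the all-convolutional version simply lets the learned kernel, rather than a fixed max, decide how the downsampling aggregates.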
Understanding image representations by measuring their equivariance and equivalence
- Computer Science · 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015
Three key mathematical properties of representations (equivariance, invariance, and equivalence) are investigated and applied to popular representations, revealing insightful aspects of their structure, including at which layers in a CNN certain geometric invariances are achieved.
On the Emergence of Invariance and Disentangling in Deep Representations
- Computer Science · ArXiv
- 2017
It is shown that invariance in a deep neural network is equivalent to minimality of the representation it computes, and can be achieved by stacking layers and injecting noise in the computation, under realistic and empirically validated assumptions.
Manitest: Are classifiers really invariant?
- Computer Science, Mathematics · BMVC
- 2015
The Manitest method, built on the efficient Fast Marching algorithm, is proposed to compute the invariance of classifiers; it quantifies in particular the importance of data augmentation for learning invariance from data, and the increased invariance of convolutional neural networks with depth.
Very Deep Convolutional Networks for Large-Scale Image Recognition
- Computer Science · ICLR
- 2015
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Deep Residual Learning for Image Recognition
- Computer Science · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.