Do CNNs Encode Data Augmentations?

Eddie Q. Yan and Yan-Ping Huang
2021 International Joint Conference on Neural Networks (IJCNN)
Published 2021 · Computer Science, Engineering, Mathematics
Data augmentations are important ingredients in the recipe for training robust neural networks, especially in computer vision. A fundamental question is whether neural network features encode data augmentation transformations. To answer this question, we introduce a systematic approach to investigate which layers of neural networks are the most predictive of augmentation transformations. Our approach uses features in pre-trained vision models with minimal additional processing to predict common…
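The abstract is truncated, but the probing idea it describes — fitting a simple predictor on frozen intermediate features to see whether they encode which augmentation was applied — can be sketched. This is an illustrative reconstruction, not the authors' code; the `features` and `labels` arrays (and the least-squares probe) are assumptions standing in for real network activations and augmentation parameters:

```python
import numpy as np

def fit_linear_probe(features, labels, n_classes):
    """Fit a least-squares linear probe mapping frozen features
    to one-hot augmentation labels (e.g. rotation-angle bins)."""
    n = features.shape[0]
    X = np.hstack([features, np.ones((n, 1))])  # append a bias column
    Y = np.eye(n_classes)[labels]               # one-hot targets
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def probe_accuracy(W, features, labels):
    """Fraction of samples whose augmentation class the probe recovers."""
    n = features.shape[0]
    X = np.hstack([features, np.ones((n, 1))])
    preds = (X @ W).argmax(axis=1)
    return float((preds == labels).mean())

# Toy demo: synthetic features that linearly encode the augmentation label,
# so the probe should recover it with high accuracy.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=200)           # 4 hypothetical rotation classes
features = np.eye(4)[labels] + 0.1 * rng.standard_normal((200, 4))
W = fit_linear_probe(features, labels, n_classes=4)
print(probe_accuracy(W, features, labels))
```

Running such a probe layer by layer, and comparing accuracies across depths, is the kind of comparison the abstract's "which layers are the most predictive" question implies.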
2 Citations


On the Scale Invariance in State of the Art CNNs Trained on ImageNet
It is shown that the presence of scale information at intermediate layers legitimates transfer learning in applications that require scale covariance rather than invariance, and that the performance on these tasks can be improved by pruning off the layers where the invariance is learned.


Understanding Data Augmentation for Classification: When to Warp?
It is found that while it is possible to perform generic augmentation in feature-space, if plausible transforms for the data are known, then augmentation in data-space provides a greater benefit for improving performance and reducing overfitting.
Improved Regularization of Convolutional Neural Networks with Cutout
This paper shows that the simple regularization technique of randomly masking out square regions of input during training, which is called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
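The technique that entry describes is simple enough to sketch directly. The following is an illustrative NumPy version; the 16×16 patch size and zero fill are assumptions for the demo, not necessarily the paper's exact settings:

```python
import numpy as np

def cutout(image, size=16, rng=None):
    """Zero out a randomly centered square patch of an HxWxC image.

    The patch center may fall near the border, in which case the mask
    is clipped to the image, as in common implementations of cutout.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y0:y1, x0:x1] = 0
    return out
```

Applied per image at training time (e.g. `cutout(img)` inside the data-loading pipeline), this forces the network to rely on context rather than any single region.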
Learning Deep Features for Discriminative Localization
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability.
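The localization mechanism that work introduces (class activation mapping) reduces to a weighted sum of the final convolutional feature maps, using the classifier weights of the target class. A minimal sketch, where the feature maps and fully connected weights are assumed inputs from an already-trained network:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a class activation map from final conv features.

    feature_maps: (C, H, W) activations before global average pooling.
    fc_weights:   (n_classes, C) weights of the final linear layer.
    Returns an (H, W) map, rectified and normalized to [0, 1].
    """
    w = fc_weights[class_idx]                              # (C,)
    cam = np.tensordot(w, feature_maps, axes=([0], [0]))   # (H, W)
    cam = np.maximum(cam, 0)                               # keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam
```

Upsampling the resulting map to the input resolution highlights the image regions most responsible for the class score.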
Visualizing and Understanding Convolutional Networks
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models, used in a diagnostic role to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Visualizing Higher-Layer Features of a Deep Network
This paper contrasts and compares several techniques applied to Stacked Denoising Autoencoders and Deep Belief Networks, trained on several vision datasets, and shows that good qualitative interpretations of high-level features represented by such models are possible at the unit level.
Scale-Equivariant Neural Networks with Decomposed Convolutional Filters
Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.
How Much Position Information Do Convolutional Neural Networks Encode?
A comprehensive set of experiments shows the validity of the hypothesis that deep CNNs implicitly learn to encode absolute position information, and sheds light on how and where this information is represented, while offering clues to where positional information is derived from in deep CNNs.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
This work proposes a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
Scale-Equivariant Steerable Networks
This work pays attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera, and introduces the general theory for building scale-equivariant convolutional networks with steerable filters.
Group Equivariant Convolutional Networks
Group equivariant Convolutional Neural Networks (G-CNNs) are introduced, a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries, and achieve state-of-the-art results on CIFAR10 and rotated MNIST.