Corpus ID: 236428766

Augmentation Pathways Network for Visual Recognition

@article{Bai2021AugmentationPN,
  title={Augmentation Pathways Network for Visual Recognition},
  author={Yalong Bai and Mo Zhou and Yuxiang Chen and Wei Zhang and Bowen Zhou and Tao Mei},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11990}
}
Data augmentation is practically helpful for visual recognition, especially when data is scarce. However, this success is limited to a handful of light augmentations (e.g., random crop, flip). Heavy augmentations (e.g., gray, grid shuffle) are either unstable or show adverse effects during training, owing to the large gap between the original and augmented images. This paper introduces a novel network design, termed Augmentation Pathways (AP), to systematically stabilize training…
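To make the light/heavy distinction concrete, below is a minimal sketch in PyTorch/torchvision. It illustrates the augmentations the abstract names, not the AP architecture itself; the grid_shuffle helper is a hypothetical stand-in for grid shuffle, not code from the paper.

import torch
import torchvision.transforms as T

# Light augmentations: small, label-preserving perturbations.
light = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

def grid_shuffle(img: torch.Tensor, grid: int = 4) -> torch.Tensor:
    """Randomly permute the grid cells of a CxHxW tensor (a 'heavy' augmentation)."""
    c, h, w = img.shape
    gh, gw = h // grid, w // grid
    patches = img[:, :gh * grid, :gw * grid] \
        .unfold(1, gh, gh).unfold(2, gw, gw)      # C x grid x grid x gh x gw
    patches = patches.reshape(c, grid * grid, gh, gw)
    patches = patches[:, torch.randperm(grid * grid)]   # shuffle the cells
    patches = patches.reshape(c, grid, grid, gh, gw)
    return patches.permute(0, 1, 3, 2, 4).reshape(c, gh * grid, gw * grid)

# Heavy augmentations: large changes to appearance or structure.
heavy = T.Compose([
    T.Grayscale(num_output_channels=3),  # 'gray': discards color statistics
    T.ToTensor(),
    T.Lambda(grid_shuffle),              # 'grid shuffle': destroys spatial layout
])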

References

Showing 1-10 of 28 references
AutoAugment: Learning Augmentation Strategies From Data
This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
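The policies found by AutoAugment's search are available off the shelf; a minimal usage sketch, assuming a recent torchvision (0.11+):

from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

# Apply the learned CIFAR-10 policy before tensor conversion.
train_tf = transforms.Compose([
    AutoAugment(policy=AutoAugmentPolicy.CIFAR10),
    transforms.ToTensor(),
])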
Faster AutoAugment: Learning Augmentation Strategies using Backpropagation
This paper proposes a differentiable policy search pipeline for data augmentation that achieves significantly faster search than prior work without a performance drop, and introduces approximate gradients for several transformation operations with discrete parameters.
Aggregated Residual Transformations for Deep Neural Networks
On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintained complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
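In code, the added cardinality is typically realized through grouped convolutions. A minimal sketch of a ResNeXt-style bottleneck (channel widths here are illustrative, not prescribed):

import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """Bottleneck whose 3x3 convolution is split into `cardinality` parallel groups."""
    def __init__(self, channels=256, width=128, cardinality=32):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            # Grouped conv: `cardinality` parallel paths of width // cardinality channels
            nn.Conv2d(width, width, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))  # residual (identity) shortcut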
Learning robust visual representations using data augmentation invariance
The results show that the proposed data augmentation invariance approach is a simple yet effective and efficient (10% increase in training time) way of increasing the invariance of models while obtaining similar categorization performance.
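One plausible form of such an objective penalizes the distance between a backbone's representations of an image and its augmented copy; a hedged sketch (the paper's exact loss and choice of layers may differ):

import torch
import torch.nn.functional as F

def invariance_loss(feats_clean: torch.Tensor, feats_aug: torch.Tensor) -> torch.Tensor:
    # Pull augmented-view features toward the (detached) clean-view features.
    return F.mse_loss(feats_aug, feats_clean.detach())

# Typical usage: total = task_loss + lam * invariance_loss(f(x), f(augment(x)))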
CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features
Patches are cut and pasted among training images, with the ground-truth labels mixed proportionally to the area of the patches; CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task.
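A minimal CutMix sketch in PyTorch, following the mixing rule described above (the alpha hyperparameter and in-place batch mixing are standard choices, not prescribed here):

import numpy as np
import torch

def cutmix(x: torch.Tensor, y: torch.Tensor, alpha: float = 1.0):
    """Paste a random box from a shuffled batch into x (NCHW); returns the
    mixed batch, both label sets, and the area-based mixing weight lam."""
    lam = np.random.beta(alpha, alpha)
    idx = torch.randperm(x.size(0))
    h, w = x.shape[2], x.shape[3]
    # Sample a box whose area is roughly (1 - lam) of the image.
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    x[:, :, y1:y2, x1:x2] = x[idx, :, y1:y2, x1:x2]
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)   # exact area share after clipping
    return x, y, y[idx], lam

# Training: loss = lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)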
Improved Regularization of Convolutional Neural Networks with Cutout
This paper shows that the simple regularization technique of randomly masking out square regions of the input during training, called Cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
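A minimal Cutout sketch (the mask size is a tunable hyperparameter; 16 pixels is a common CIFAR setting):

import torch

def cutout(img: torch.Tensor, size: int = 16) -> torch.Tensor:
    """Zero out one randomly centered size x size square of a CxHxW image."""
    h, w = img.shape[1], img.shape[2]
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.clone()
    out[:, y1:y2, x1:x2] = 0.0
    return out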
Improved Residual Networks for Image and Video Recognition
The proposed improvements address all three main components of a ResNet: the flow of information through the network layers, the residual building block, and the projection shortcut, and show consistent improvements in accuracy and learning convergence over the baseline.
Randaugment: Practical automated data augmentation with a reduced search space
This work proposes a simplified search space that vastly reduces the computational expense of automated augmentation and permits the removal of a separate proxy task.
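The reduced search space leaves only two integers to tune, the number of operations N and a global magnitude M; torchvision ships an implementation (0.11+ assumed):

from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandAugment(num_ops=2, magnitude=9),  # N = 2 ops at strength M = 9
    transforms.ToTensor(),
])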
Fast AutoAugment
This paper proposes an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching, speeding up search time by orders of magnitude while achieving comparable performance on image recognition tasks with various models and datasets.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.