KeepAugment: A Simple Information-Preserving Data Augmentation Approach

@inproceedings{Gong2021KeepAugmentAS,
  title={KeepAugment: A Simple Information-Preserving Data Augmentation Approach},
  author={Chengyue Gong and Dilin Wang and Meng Li and Vikas Chandra and Qiang Liu},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={1055-1064}
}
  • Published 23 November 2020
Data augmentation (DA) is an essential technique for training state-of-the-art deep learning systems. In this paper, we empirically show that the standard data augmentation methods may introduce distribution shift and consequently hurt the performance on unaugmented data during inference. To alleviate this issue, we propose a simple yet effective approach, dubbed KeepAugment, to increase the fidelity of augmented images. The idea is to use the saliency map to detect important regions on the…
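The saliency idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the paper's selective-paste variant only, not the authors' implementation; the names `keep_augment` and `keep_quantile` are invented for the example, and the saliency map is assumed to be precomputed (e.g. from loss gradients):

```python
import numpy as np

def keep_augment(image, augment_fn, saliency_map, keep_quantile=0.75):
    """Augment the whole image, then paste the most salient region of the
    original back on top, so the augmentation cannot destroy it
    (a sketch of KeepAugment's selective-paste idea)."""
    augmented = augment_fn(image.copy())
    # Bounding box of pixels whose saliency is above the chosen quantile.
    thresh = np.quantile(saliency_map, keep_quantile)
    ys, xs = np.where(saliency_map >= thresh)
    y1, y2 = ys.min(), ys.max() + 1
    x1, x2 = xs.min(), xs.max() + 1
    augmented[y1:y2, x1:x2] = image[y1:y2, x1:x2]  # restore the important region
    return augmented
```

Any augmentation (Cutout, RandAugment, etc.) can be passed as `augment_fn`; only the low-saliency surroundings end up altered.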
Survey: Image Mixing and Deleting for Data Augmentation
TLDR
This paper empirically evaluates image-mixing and region-deleting augmentations for image classification, fine-grained image recognition, and object detection, and shows that this category of data augmentation improves the overall performance of deep neural networks.
ASNet: Auto-Augmented Siamese Neural Network for Action Recognition
TLDR
This framework backpropagates salient patches and randomly cropped samples in the same iteration to perform gradient compensation, alleviating the adverse gradient effects of non-informative samples.
CutDepth: Edge-aware Data Augmentation in Depth Estimation
TLDR
Experiments show, both objectively and subjectively, that the proposed method, called CutDepth, outperforms conventional data augmentation methods in monocular depth estimation, and that estimation accuracy improves even when there are few training data at long distances.
TransMix: Attend to Mix for Vision Transformers
TLDR
TransMix is proposed, which mixes labels based on the attention maps of Vision Transformers and consistently improves various ViT-based models at different scales on ImageNet classification.
Patch AutoAugment
TLDR
A patch-level automatic DA algorithm called Patch AutoAugment (PAA), which allows each patch's DA operation to be controlled by an agent and models the task as a Multi-Agent Reinforcement Learning (MARL) problem.
What augmentations are sensitive to hyper-parameters and why?
TLDR
This study evaluates the sensitivity of augmentations to a model's hyperparameters, along with their consistency and influence, by performing a Local Surrogate (LIME) interpretation of the impact of hyperparameters when different augmentations are applied to a machine learning model.
Local Patch AutoAugment with Multi-Agent Collaboration
  • Shiqi Lin, Tao Yu, Ruoyu Feng, Xin Li, Xin Jin, Zhibo Chen
  • Computer Science
  • 2021
TLDR
This paper proposes a more fine-grained automated DA approach, dubbed Patch AutoAugment, to divide an image into a grid of patches and search for the joint optimal augmentation policies for the patches, which outperforms the state-of-the-art DA methods while requiring fewer computational resources.

References

Showing 1–10 of 48 references
RandAugment: Practical data augmentation with no separate search
TLDR
RandAugment can be used uniformly across different tasks and datasets and works out of the box, matching or surpassing all previous learned augmentation approaches on CIFAR-10, CIFAR-100, SVHN, and ImageNet.
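RandAugment's key simplification, which the summary alludes to, is collapsing the policy search to two shared hyperparameters: N (how many transforms to apply) and M (one global magnitude). A toy sketch, with a made-up pool of NumPy ops standing in for the paper's real transform list:

```python
import random
import numpy as np

# Toy transform pool standing in for RandAugment's real image ops; each op
# takes the shared magnitude m (nominally in [0, 10]) as its second argument.
OPS = [
    lambda im, m: np.fliplr(im),                         # horizontal flip
    lambda im, m: np.roll(im, shift=int(m), axis=1),     # translate-x
    lambda im, m: 255 - im,                              # invert
    lambda im, m: np.clip(im * (1 + m / 10.0), 0, 255),  # contrast-like scaling
]

def rand_augment(image, n=2, m=9, rng=None):
    """Apply n transforms drawn uniformly at random, all at magnitude m;
    the pair (n, m) is the whole of RandAugment's search space."""
    rng = rng or random.Random()
    for op in rng.choices(OPS, k=n):
        image = op(image, m)
    return image
```

With only a 2-dimensional grid to sweep, the separate policy-search phase of AutoAugment disappears, which is the point of the method.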
AutoAugment: Learning Augmentation Policies from Data
TLDR
This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
GridMask Data Augmentation
TLDR
This paper proposes a novel data augmentation method, GridMask, based on the deletion of grid-arranged regions of the input image; it outperforms the latest AutoAugment, which is far more computationally expensive due to its use of reinforcement learning to find the best policies.
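The grid-deletion scheme described above is easy to reproduce. A simplified sketch in NumPy (the published method adds further randomization, e.g. of the grid period; the name `gridmask` and its parameters here are illustrative only):

```python
import numpy as np

def gridmask(image, d=8, ratio=0.5, rng=None):
    """Zero out square blocks laid out on a regular grid with period d;
    each deleted block has side int(d * ratio), so roughly ratio**2 of the
    pixels are removed (a simplified GridMask with a random grid phase)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    block = int(d * ratio)
    off_y, off_x = rng.integers(d), rng.integers(d)  # random grid offset
    yy = (np.arange(h) - off_y) % d                  # position within a grid cell
    xx = (np.arange(w) - off_x) % d
    keep = (yy[:, None] >= block) | (xx[None, :] >= block)
    mask = keep.astype(image.dtype)
    return image * (mask[..., None] if image.ndim == 3 else mask)
```

Unlike a single Cutout hole, the deleted area is spread evenly over the image, so no one object part can always hide inside the hole.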
Circumventing Outliers of AutoAugment with Knowledge Distillation
TLDR
It is revealed that AutoAugment may remove part of the discriminative information from the training image, so insisting on the ground-truth label is no longer the best option; knowledge distillation, which refers to the output of a teacher model, is used to guide network training.
Attentive Cutmix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification
TLDR
Attentive CutMix is proposed, a naturally enhanced augmentation strategy based on CutMix that consistently outperforms baseline CutMix and other methods by a significant margin.
Fast AutoAugment
TLDR
This paper proposes an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching, speeding up the search time by orders of magnitude while achieving comparable performance on image recognition tasks across various models and datasets.
Unsupervised Data Augmentation
TLDR
UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods, which leads to substantial improvements on six language tasks and three vision tasks even when the labeled set is extremely small.
Unsupervised Data Augmentation for Consistency Training
TLDR
A new perspective on how to effectively noise unlabeled examples is presented, and it is argued that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features
TLDR
Patches are cut and pasted among training images, with the ground-truth labels also mixed proportionally to the area of the patches; CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task.
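The proportional label mixing described in this TLDR can be written directly. A hedged sketch, not the authors' code: the function name is invented, and the Beta(α, α) sampling of the mixing ratio follows the commonly used CutMix recipe:

```python
import numpy as np

def cutmix(image_a, label_a, image_b, label_b, alpha=1.0, rng=None):
    """Cut a random box out of image_b, paste it into image_a, and mix the
    one-hot labels in proportion to the pasted area (the CutMix recipe)."""
    rng = rng or np.random.default_rng()
    h, w = image_a.shape[:2]
    lam = rng.beta(alpha, alpha)            # target fraction kept from image_a
    cut_h = int(h * np.sqrt(1 - lam))       # box sized so its area is ~(1-lam)*h*w
    cut_w = int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))  # box centre
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = image_a.copy()
    mixed[y1:y2, x1:x2] = image_b[y1:y2, x1:x2]
    # Recompute lambda from the area actually pasted (the box may be clipped).
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam * np.asarray(label_a) + (1 - lam) * np.asarray(label_b)
```

Because the label weight equals the pixel fraction each source image contributes, the loss still "sees" both classes in the right proportion.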
Adversarial AutoAugment
TLDR
An adversarial method to arrive at a computationally affordable solution, called Adversarial AutoAugment, which can simultaneously optimize the target-related objective and the augmentation policy search loss, and demonstrates significant performance improvements over the state-of-the-art.