Corpus ID: 141477988

Fast AutoAugment

@inproceedings{Lim2019FastA,
  title={Fast AutoAugment},
  author={Sungbin Lim and Ildoo Kim and Taesup Kim and Chiheon Kim and Sungwoong Kim},
  booktitle={NeurIPS},
  year={2019}
}
Data augmentation is an essential technique for improving the generalization ability of deep learning models. Recently, AutoAugment was proposed as an algorithm to automatically search for augmentation policies from a dataset, and it has significantly improved performance on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation… 
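
An AutoAugment-style policy is a set of sub-policies, each a short sequence of (operation, probability, magnitude) triples applied stochastically to an image. As a rough illustration only, here is a minimal Python sketch of how such a policy might be applied, assuming Pillow-based transforms; the operation table and magnitude scaling below are illustrative, not the paper's actual search space.

import random
from PIL import Image, ImageEnhance, ImageOps

# Illustrative operation table; the real search space uses ~16 ops
# (ShearX/Y, TranslateX/Y, Rotate, Color, Posterize, Solarize, ...).
OPS = {
    "rotate":   lambda img, m: img.rotate(30 * m),                     # m in [0, 1]
    "solarize": lambda img, m: ImageOps.solarize(img, int(256 * (1 - m))),
    "contrast": lambda img, m: ImageEnhance.Contrast(img).enhance(1 + m),
}

def apply_subpolicy(img: Image.Image, subpolicy):
    """Apply each (op, prob, magnitude) triple with its probability."""
    for op_name, prob, magnitude in subpolicy:
        if random.random() < prob:
            img = OPS[op_name](img, magnitude)
    return img

def apply_policy(img: Image.Image, policy):
    """A policy is a set of sub-policies; one is chosen per image."""
    return apply_subpolicy(img, random.choice(policy))

# Example: two sub-policies of two operations each.
policy = [
    [("rotate", 0.7, 0.3), ("contrast", 0.4, 0.5)],
    [("solarize", 0.6, 0.8), ("rotate", 0.2, 0.1)],
]
# augmented = apply_policy(Image.open("example.png"), policy)

The paper's speedup comes from evaluating candidate policies by density matching on held-out data splits rather than retraining child networks for every candidate.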

Citations

Faster AutoAugment: Learning Augmentation Strategies using Backpropagation
TLDR
This paper proposes a differentiable policy search pipeline for data augmentation, which achieves significantly faster searching than prior work without a performance drop and introduces approximate gradients for several transformation operations with discrete parameters.
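
For intuition, one common way to backpropagate through discrete parameters is the straight-through trick: use the discrete value in the forward pass but the identity gradient in the backward pass. A minimal PyTorch sketch of that generic trick, not this paper's exact relaxation:

import torch

def straight_through_round(x: torch.Tensor) -> torch.Tensor:
    # Forward: the discrete value round(x); backward: the gradient of the
    # identity, since the detached term contributes no gradient.
    return x + (torch.round(x) - x).detach()

magnitude = torch.tensor(2.7, requires_grad=True)
y = straight_through_round(magnitude)   # forward value: 3.0
y.backward()
print(magnitude.grad)                   # tensor(1.) -- gradient flows through
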
Hypernetwork-Based Augmentation
TLDR
This paper proposes an efficient gradient-based search algorithm, called Hypernetwork-Based Augmentation (HBA), which simultaneously learns model parameters and augmentation hyperparameters in a single training, and introduces a weight sharing strategy that simplifies the hypernetwork architecture and speeds up the search algorithm.
Direct Differentiable Augmentation Search
TLDR
This paper proposes an efficient differentiable search algorithm called Direct Differentiable Augmentation Search (DDAS), which exploits meta-learning with one-step gradient update and continuous relaxation to the expected training loss for efficient search.
Meta Approach to Data Augmentation Optimization
TLDR
This paper proposes to optimize image recognition models and data augmentation policies simultaneously with gradient descent to improve performance, and achieves efficient and scalable training by approximating the gradient of the policies with an implicit gradient computed via a Neumann series approximation.
A Survey of Automated Data Augmentation Algorithms for Deep Learning-based Image Classification Tasks
TLDR
This survey discusses the underlying reasons for the emergence of AutoDA technology from the perspective of image classification, and identifies three key components of a standard AutoDA model: a search space, a search algorithm, and optimal DA policies.
KeepAugment: A Simple Information-Preserving Data Augmentation Approach
TLDR
This paper empirically shows that the standard data augmentation methods may introduce distribution shift and consequently hurt the performance on unaugmented data during inference, and proposes a simple yet effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.
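
The information-preserving idea can be sketched as rejection sampling: discard candidate cutout regions that carry too much saliency. A minimal NumPy sketch assuming a precomputed per-pixel saliency map; the function name, threshold tau, and retry budget are illustrative, and the paper's gradient-based saliency computation is omitted here.

import numpy as np

def keep_cutout(image: np.ndarray, saliency: np.ndarray,
                size: int = 16, tau: float = 0.6, max_tries: int = 10):
    """Cutout that rejects patches carrying too much saliency mass,
    so the most informative regions are preserved."""
    h, w = image.shape[:2]
    total = saliency.sum()
    for _ in range(max_tries):
        cy, cx = np.random.randint(h), np.random.randint(w)
        y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
        x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
        # Accept the patch only if it covers a small share of total saliency.
        if saliency[y1:y2, x1:x2].sum() < tau * total:
            out = image.copy()
            out[y1:y2, x1:x2] = 0
            return out
    return image  # give up: keep the image unaugmented
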
UniformAugment: A Search-free Probabilistic Data Augmentation Approach
TLDR
This paper shows that, under the assumption that the augmentation space is approximately distribution-invariant, uniform sampling over the continuous space of augmentation transformations is sufficient to train highly effective models, and proposes UniformAugment, an automated data augmentation approach that completely avoids a search phase.
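
The search-free recipe is short enough to state directly: for each image, draw the operations, their application probabilities, and their magnitudes uniformly at random. A minimal sketch with an illustrative three-operation space; the actual method uses a fuller transformation set.

import random
from PIL import Image, ImageEnhance, ImageOps

# Illustrative subset of the augmentation space; magnitudes in [0, 1].
OPS = [
    lambda img, m: img.rotate(30 * m),
    lambda img, m: ImageOps.solarize(img, int(256 * (1 - m))),
    lambda img, m: ImageEnhance.Sharpness(img).enhance(1 + m),
]

def uniform_augment(img: Image.Image, num_ops: int = 2) -> Image.Image:
    """Apply num_ops randomly chosen operations, each with a uniformly
    sampled application probability and magnitude -- no search phase."""
    for op in random.sample(OPS, num_ops):
        prob, magnitude = random.uniform(0, 1), random.uniform(0, 1)
        if random.random() < prob:
            img = op(img, magnitude)
    return img
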
A Comprehensive Survey of Image Augmentation Techniques for Deep Learning
TLDR
A comprehensive survey of image augmentation for deep learning is performed, organized under a novel informative taxonomy, with the aim of providing a better understanding that helps in choosing suitable methods or designing novel algorithms for practical applications.
AutoDO: Robust AutoAugment for Biased Data with Label Noise via Scalable Probabilistic Implicit Differentiation
TLDR
This work reformulates AutoAugment as a generalized automated dataset optimization (AutoDO) task that minimizes the distribution shift between test data and distorted train dataset, and develops a theoretical probabilistic interpretation of this framework using Fisher information and shows that its complexity scales linearly with the dataset size.
Safe Augmentation: Learning Task-Specific Transformations from Data
TLDR
This work proposes a simple novel method that automatically learns task-specific data augmentation techniques, called safe augmentations, which do not break the data distribution and can be used to improve model performance.
...

References

Showing 1-10 of 43 references
AutoAugment: Learning Augmentation Strategies From Data
TLDR
This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
Improved Regularization of Convolutional Neural Networks with Cutout
TLDR
This paper shows that the simple regularization technique of randomly masking out square regions of the input during training, called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
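
Cutout itself is only a few lines: zero out a square patch centered at a random location, clipped at the image borders. A minimal NumPy sketch assuming HWC image arrays; the patch size is a hyperparameter, and the 16 px default shown here is illustrative.

import numpy as np

def cutout(image: np.ndarray, size: int = 16) -> np.ndarray:
    """Zero out a size x size square centered at a random location.
    The patch is clipped where it overhangs the image borders."""
    h, w = image.shape[:2]
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = 0
    return out
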
CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features
TLDR
Patches are cut and pasted among training images, with the ground-truth labels mixed proportionally to the area of the patches; CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task.
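
The CutMix combination for a single pair of examples can be sketched directly: paste a random patch from one image into another and mix the labels by the patch-area ratio. A minimal NumPy sketch assuming HWC images and one-hot labels; lambda is drawn from Beta(alpha, alpha) as in the paper.

import numpy as np

def cutmix(img_a, label_a, img_b, label_b, alpha: float = 1.0):
    """Paste a random patch of img_b into img_a and mix the labels in
    proportion to the patch area."""
    h, w = img_a.shape[:2]
    lam = np.random.beta(alpha, alpha)          # mixing ratio
    # Patch dimensions chosen so patch area ~ (1 - lam) * image area.
    ph, pw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - ph // 2, 0, h), np.clip(cy + ph // 2, 0, h)
    x1, x2 = np.clip(cx - pw // 2, 0, w), np.clip(cx + pw // 2, 0, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    # Adjust lambda to the actual (clipped) patch area.
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam * label_a + (1 - lam) * label_b
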
Smart Augmentation Learning an Optimal Data Augmentation Strategy
TLDR
Smart Augmentation works by creating a network that learns how to generate augmented data during the training of a target network in a way that reduces that network's loss, allowing augmentations that minimize the error of that network to be learned.
Shake-Shake regularization
The method introduced in this paper aims at helping deep learning practitioners faced with an overfit problem. The idea is to replace, in a multi-branch network, the standard summation of parallel branches with a stochastic affine combination.
A Bayesian Data Augmentation Approach for Learning Deep Models
TLDR
A novel Bayesian formulation of data augmentation is provided, in which new annotated training points are treated as missing variables and generated based on the distribution learned from the training set; this approach produces better classification results than similar GAN models.
Rethinking the Inception Architecture for Computer Vision
TLDR
This work explores ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
Learning Transferable Architectures for Scalable Image Recognition
TLDR
This paper proposes to search for an architectural building block on a small dataset and then transfer the block to a larger dataset and introduces a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models.
Data Augmentation with Manifold Exploring Geometric Transformations for Increased Performance and Robustness
TLDR
A novel augmentation technique is proposed that not only improves the performance of deep neural networks on clean test data, but also significantly increases their robustness to random transformations, both affine and projective.
Deep Pyramidal Residual Networks
TLDR
This research gradually increases the feature map dimension at all units to involve as many locations as possible, and proposes a novel residual unit capable of further improving the classification accuracy of the new architecture.
...