Corpus ID: 153312991

Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules

@article{Ho2019PopulationBA,
  title={Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules},
  author={Daniel Ho and Eric Liang and Ion Stoica and Pieter Abbeel and Xi Chen},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.05393}
}
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates…
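For context, PBA adapts Population Based Training (PBT) to augmentation hyperparameters: a small population of models trains in parallel, and poorly performing workers periodically clone a top performer's augmentation schedule and perturb it. Below is a minimal Python sketch of that exploit-and-explore step; the `Worker` class, operation list, and perturbation ranges are illustrative assumptions, not the paper's exact implementation (which also copies model weights on exploit).

```python
import copy
import random

# Sketch of a PBT-style exploit/explore step over augmentation
# hyperparameters, in the spirit of PBA. All names here are hypothetical
# placeholders, not the authors' API.

AUG_OPS = ["rotate", "shear_x", "translate_y", "autocontrast"]  # example ops

class Worker:
    def __init__(self):
        # Each worker holds a per-operation (probability, magnitude) pair.
        self.policy = {op: [random.random(), random.randint(0, 9)]
                       for op in AUG_OPS}
        self.score = 0.0  # validation accuracy, filled in by the trainer

def explore(policy):
    """Perturb each hyperparameter with small probability."""
    new_policy = copy.deepcopy(policy)
    for op, (prob, mag) in new_policy.items():
        if random.random() < 0.2:
            prob = min(1.0, max(0.0, prob + random.uniform(-0.1, 0.1)))
        if random.random() < 0.2:
            mag = min(9, max(0, mag + random.choice([-1, 1])))
        new_policy[op] = [prob, mag]
    return new_policy

def pbt_step(population):
    """Bottom quartile clones a top-quartile worker's policy, then perturbs it."""
    ranked = sorted(population, key=lambda w: w.score, reverse=True)
    quartile = max(1, len(ranked) // 4)
    for loser in ranked[-quartile:]:
        winner = random.choice(ranked[:quartile])
        loser.policy = explore(winner.policy)  # exploit, then explore
```

Because the policy changes across PBT rounds, each worker effectively experiences a schedule of augmentation policies over training rather than a single fixed policy.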
MetaAugment: Sample-Aware Data Augmentation Policy Learning
TLDR: This paper learns a sample-aware data augmentation policy efficiently by formulating policy learning as a sample reweighting problem, theoretically proves the convergence of the training procedure, and derives the exact convergence rate.
Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL
TLDR: A new (provably) efficient hierarchical approach for optimizing both continuous and categorical variables, using a new time-varying bandit algorithm specifically designed for the population based training regime.
Direct Differentiable Augmentation Search
TLDR: An efficient differentiable search algorithm, Direct Differentiable Augmentation Search (DDAS), which exploits meta-learning with a one-step gradient update and a continuous relaxation of the expected training loss for efficient search, and organizes the search space into a two-level hierarchy.
Adversarial AutoAugment
TLDR: An adversarial method to arrive at a computationally affordable solution, Adversarial AutoAugment, which can simultaneously optimize the target-related objective and the augmentation policy search loss, and demonstrates significant performance improvements over the state of the art.
Greedy AutoAugment
TLDR: This paper proposes Greedy AutoAugment as a highly efficient search algorithm to find the best augmentation policies, using a greedy approach to reduce the exponential growth of the number of possible trials to linear growth.
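To illustrate the complexity claim: scoring every length-k sequence of N operations requires O(N^k) trials, while extending the policy one best operation at a time requires only O(Nk). A hedged sketch of such a greedy loop, where `evaluate` is an assumed stand-in for training a child model and returning validation accuracy (not the paper's API):

```python
# Greedy augmentation-policy search: instead of scoring every length-k
# combination of operations (exponential), extend the current policy one
# operation at a time, keeping the single best extension at each step.

def greedy_search(candidate_ops, policy_length, evaluate):
    policy = []
    for _ in range(policy_length):
        best_op, best_score = None, float("-inf")
        for op in candidate_ops:
            score = evaluate(policy + [op])  # try one extension at a time
            if score > best_score:
                best_op, best_score = op, score
        policy.append(best_op)
    return policy
```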
DHA: End-to-End Joint Optimization of Data Augmentation Policy, Hyper-parameter and Architecture
  • Kaichen Zhou, Lanqing Hong, +4 authors, Zhenguo Li · Computer Science · arXiv · 2021
TLDR: DHA is the first method to efficiently and jointly optimize DA policy, NAS, and HPO in an end-to-end manner without retraining, and it achieves state-of-the-art (SOTA) results on various datasets.
DivAug: Plug-in Automated Data Augmentation with Explicit Diversity Maximization
TLDR: An unsupervised sampling-based framework, DivAug, is designed to directly maximize Variance Diversity and thereby strengthen the regularization effect of data augmentation; it can further improve the performance of semi-supervised learning algorithms compared to RandAugment, making it highly applicable to real-world problems where labeled data is scarce.
Faster and Smarter AutoAugment: Augmentation Policy Search Based on Dynamic Data-Clustering
  • 2020
Data augmentation tuned to datasets and tasks has had great success in various AI applications, such as computer vision, natural language processing, autonomous driving, and bioinformatics. However, …
Data augmentation as stochastic optimization
TLDR: A time-varying Robbins–Monro theorem with rates of convergence is proved, which gives conditions on the learning rate and augmentation schedule under which augmented gradient descent converges.
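For reference, the classical Robbins–Monro result that such theorems generalize requires the step sizes to be large enough in sum to reach the optimum, yet square-summable so the stochastic noise averages out (a contextual note, not the paper's exact statement):

```latex
% Classical Robbins–Monro step-size conditions on the learning rates \eta_t:
\sum_{t=1}^{\infty} \eta_t = \infty,
\qquad
\sum_{t=1}^{\infty} \eta_t^{2} < \infty
```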

References

Showing 1–10 of 50 references
Smart Augmentation Learning an Optimal Data Augmentation Strategy
TLDR: Smart Augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss, making it possible to learn augmentations that minimize the target network's error.
AutoAugment: Learning Augmentation Policies from Data
TLDR: This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
Proximal Policy Optimization Algorithms
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment and optimizing a "surrogate" objective…
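PPO's central idea is a clipped surrogate objective that keeps the updated policy close to the one that collected the data, without an explicit KL penalty or line search. A minimal NumPy sketch, assuming log-probabilities and advantage estimates have already been computed (the array names are illustrative, not a reference implementation):

```python
import numpy as np

# Clipped surrogate objective:
# L = E[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)],
# where r_t = pi_new(a_t | s_t) / pi_old(a_t | s_t).

def clipped_surrogate(new_logp, old_logp, advantages, eps=0.2):
    ratio = np.exp(new_logp - old_logp)            # r_t under current policy
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Pessimistic (lower) bound: element-wise minimum, then average.
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))
```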
Efficient Hyperparameter Optimization and Infinitely Many Armed Bandits
TLDR: This work introduces Hyperband, which casts hyperparameter optimization as a pure-exploration, non-stochastic, infinitely-many-armed bandit problem in which allocating additional resources to an arm corresponds to training a configuration on larger subsets of the data.
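Hyperband's inner loop is successive halving over resource budgets: train many configurations cheaply, keep the best fraction, and grow the budget for the survivors. A minimal sketch under assumed helper names (`sample_config` and `train_and_score` are hypothetical placeholders, not the paper's API):

```python
# One successive-halving bracket: start with n_configs candidates on a small
# budget, keep the top 1/eta fraction, multiply the budget by eta, repeat.

def successive_halving(sample_config, train_and_score,
                       n_configs=27, min_budget=1, eta=3):
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scores = [train_and_score(c, budget) for c in configs]
        keep = max(1, len(configs) // eta)  # survivors per round
        ranked = sorted(zip(scores, range(len(configs))), reverse=True)
        configs = [configs[i] for _, i in ranked[:keep]]
        budget *= eta  # survivors get eta times more resource
    return configs[0]
```

Hyperband itself runs several such brackets with different trade-offs between the number of configurations and the starting budget, hedging against not knowing how quickly bad configurations can be identified.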
A Bayesian Data Augmentation Approach for Learning Deep Models
TLDR: A novel Bayesian formulation of data augmentation is provided, in which new annotated training points are treated as missing variables and generated based on the distribution learned from the training set; this approach produces better classification results than similar GAN models.
Efficient Neural Architecture Search via Parameter Sharing
TLDR: Efficient Neural Architecture Search (ENAS) is a fast and inexpensive approach to automatic model design that establishes a new state of the art among all methods without post-training processing and delivers strong empirical performance using far fewer GPU-hours.
Neural Architecture Search with Reinforcement Learning
TLDR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
Data Augmentation Generative Adversarial Networks
TLDR: It is shown that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well and can enhance few-shot learning systems such as Matching Networks.
SGDR: Stochastic Gradient Descent with Restarts
TLDR: This paper proposes a simple restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks, and empirically studies its performance on the CIFAR-10 and CIFAR-100 datasets.
Learning to Compose Domain-Specific Transformations for Data Augmentation
TLDR: The proposed method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, is trained on unlabeled data, and can be used to perform data augmentation for any end discriminative model.