Regularized evolutionary population-based training

@inproceedings{Liang2021RegularizedEP,
  title={Regularized evolutionary population-based training},
  author={Jason Liang and Santiago Gonzalez and Hormoz Shahrzad and Risto Miikkulainen},
  booktitle={Proceedings of the Genetic and Evolutionary Computation Conference},
  year={2021}
}
Metalearning of deep neural network (DNN) architectures and hyperparameters has become an increasingly important area of research. At the same time, network regularization has been recognized as a crucial dimension of effective training of DNNs. However, the role of metalearning in establishing effective regularization has not yet been fully explored. There is recent evidence that loss-function optimization could play this role; however, it is computationally impractical as an outer loop to full…

Evolution of neural networks

Evaluating medical aesthetics treatments through evolved age-estimation models

The study shows how AI can be harnessed in a new role: to provide an objective quantitative measure of a subjective perception, in this case the proposed effectiveness of medical aesthetics treatments.

References

Population Based Training of Neural Networks

This paper presents Population Based Training, a simple asynchronous optimisation algorithm that effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance.
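
As a rough illustration of the exploit-and-explore idea behind Population Based Training, here is a minimal synchronous Python sketch (the algorithm itself is asynchronous). The population layout, truncation fraction, and perturbation factor are illustrative assumptions, not details from the paper.

import copy
import random

def pbt_step(population, perturb=0.2, truncate=0.25):
    # One synchronous PBT generation over a list of members, each a dict
    # with 'weights', 'hyperparams', and 'score' keys (an assumed layout).
    ranked = sorted(population, key=lambda m: m['score'], reverse=True)
    cutoff = max(1, int(len(ranked) * truncate))
    top, bottom = ranked[:cutoff], ranked[-cutoff:]
    for member in bottom:
        donor = random.choice(top)
        # Exploit: copy weights and hyperparameters from a top performer.
        member['weights'] = copy.deepcopy(donor['weights'])
        # Explore: multiplicatively jitter each inherited hyperparameter.
        member['hyperparams'] = {
            k: v * random.choice([1.0 - perturb, 1.0 + perturb])
            for k, v in donor['hyperparams'].items()
        }
    return population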

Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization

This paper shows that loss functions can also be optimized with metalearning, resulting in similar improvements, and thus constitutes an important step towards AutoML.

Designing neural networks through neuroevolution

This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search.

Regularized Evolution for Image Classifier Architecture Search

This work evolves an image classifier, AmoebaNet-A, that surpasses hand-designed models for the first time, and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search.
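
The "regularized" part of regularized evolution refers to aging: each step removes the oldest individual rather than the worst. A minimal sketch of that loop, assuming user-supplied mutate and evaluate callables (both hypothetical names):

import collections
import random

def aging_evolution(init_population, mutate, evaluate, cycles=1000, sample_size=25):
    # Regularized (aging) evolution: mutate the best of a random sample,
    # add the child, and discard the oldest member rather than the worst.
    # The initial population must contain at least `sample_size` individuals.
    population = collections.deque((arch, evaluate(arch)) for arch in init_population)
    history = list(population)
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda item: item[1])
        child = mutate(parent[0])
        entry = (child, evaluate(child))
        population.append(entry)
        population.popleft()          # aging: the oldest individual dies
        history.append(entry)
    return max(history, key=lambda item: item[1])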

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

This work proposes a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows preliminary configurations to be explored on small subsets and extrapolated to the full dataset.
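
The core trick can be approximated in a few lines: measure validation error on increasing training subsets, fit a learning-curve model, and extrapolate to the full dataset. The power-law form and the numbers below are simplified, illustrative stand-ins for the paper's Gaussian-process model.

import numpy as np
from scipy.optimize import curve_fit

def learning_curve(s, a, b, c):
    # Assumed power-law form: validation error decays with training subset size s.
    return a * np.power(s, -b) + c

# Validation errors of one configuration measured on small subsets (made-up numbers).
subset_sizes = np.array([500.0, 1000.0, 2000.0, 4000.0])
val_errors = np.array([0.32, 0.27, 0.23, 0.20])

params, _ = curve_fit(learning_curve, subset_sizes, val_errors, p0=[1.0, 0.5, 0.1], maxfev=10000)
estimated_full_error = learning_curve(50000.0, *params)   # extrapolate to the full dataset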

CMA-ES for Hyperparameter Optimization of Deep Neural Networks

This work proposes to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization, for tuning the hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
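
A minimal sketch of hyperparameter tuning with CMA-ES using the pycma package's ask/tell interface; the placeholder objective stands in for training and validating the network, and the three-dimensional encoding is an assumption.

import cma  # pycma package

def validation_error(x):
    # Placeholder objective: in practice, decode x into hyperparameters
    # (e.g. log learning rate, dropout rate, weight decay), train the CNN,
    # and return its validation error. A simple quadratic keeps the sketch runnable.
    return sum((xi - 0.3) ** 2 for xi in x)

es = cma.CMAEvolutionStrategy(x0=[0.0, 0.0, 0.0], sigma0=0.5)
while not es.stop():
    candidates = es.ask()                                   # sample one generation
    es.tell(candidates, [validation_error(c) for c in candidates])
best_hyperparameter_vector = es.result.xbest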

How Does Learning Rate Decay Help Modern Neural Networks?

This work provides another novel explanation of how lrDecay works: an initially large learning rate prevents the network from memorizing noisy data, while decaying the learning rate improves the learning of complex patterns.
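
For concreteness, the step-decay schedule this analysis refers to can be written in a couple of lines; the decay factor and interval below are illustrative choices, not values from the paper.

def step_decay_lr(initial_lr, epoch, decay_factor=0.1, decay_every=30):
    # Step decay: keep a large learning rate early, shrink it later.
    return initial_lr * decay_factor ** (epoch // decay_every)

# e.g. 0.1 for epochs 0-29, 0.01 for epochs 30-59, 0.001 afterwards
schedule = [step_decay_lr(0.1, epoch) for epoch in range(90)]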

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
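
A minimal NumPy sketch of the training-time batch-normalization transform (per-feature batch statistics, then a learned scale and shift); the running-statistics bookkeeping used at inference time is omitted.

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # Training-time batch normalization for a (batch, features) array:
    # normalize each feature by its batch statistics, then apply the
    # learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta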

Evolving Loss Functions with Multivariate Taylor Polynomial Parameterizations

Multivariate Taylor expansion-based genetic loss-function optimization (TaylorGLO) represents loss functions using a novel parameterization based on Taylor expansions, making the search more effective and demonstrating that loss-function optimization is a productive avenue for metalearning.
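
Roughly, TaylorGLO searches over the coefficients of a low-order Taylor polynomial that is evaluated on the label and the prediction to produce the per-sample loss. The second-order bivariate form, the expansion point, and the coefficient layout below are illustrative assumptions rather than the paper's exact parameterization.

import numpy as np

def taylor_loss(y_true, y_pred, theta, center=(0.5, 0.5)):
    # Illustrative second-order bivariate Taylor polynomial in the label and
    # the prediction, averaged over all entries; the coefficient vector
    # `theta` (six values here) is what the outer evolutionary search tunes.
    a, b = y_true - center[0], y_pred - center[1]
    terms = np.stack([np.ones_like(a), a, b, a * a, a * b, b * b])
    return float(np.mean(np.tensordot(theta, terms, axes=1)))

# Example: a candidate coefficient vector (e.g. proposed by CMA-ES).
theta = np.array([0.1, -0.5, 0.5, 0.0, -1.0, 0.0])
value = taylor_loss(np.array([1.0, 0.0]), np.array([0.8, 0.1]), theta)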

On Loss Functions for Deep Neural Networks in Classification

This paper investigates how particular choices of loss functions affect deep models and their learning dynamics, as well as the resulting classifiers' robustness to various effects, and shows that L1 and L2 losses are justified classification objectives for deep nets by providing a probabilistic interpretation in terms of expected misclassification.
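
A small NumPy sketch contrasting the losses discussed, computed from predicted class probabilities and a one-hot target; the example values are illustrative only.

import numpy as np

def l1_loss(p, y):
    return np.mean(np.abs(p - y))            # mean absolute error

def l2_loss(p, y):
    return np.mean((p - y) ** 2)             # mean squared error

def cross_entropy(p, y, eps=1e-12):
    return -np.mean(np.sum(y * np.log(p + eps), axis=-1))

y = np.array([[0.0, 1.0, 0.0]])              # one-hot target, 3 classes
p = np.array([[0.2, 0.7, 0.1]])              # predicted class probabilities
print(l1_loss(p, y), l2_loss(p, y), cross_entropy(p, y))
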
...