DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization

@inproceedings{Awad2021DEHBEH,
  title={DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization},
  author={Noor H. Awad and Neeratyoy Mallik and Frank Hutter},
  booktitle={IJCAI},
  year={2021}
}
Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks… 
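
The abstract is cut off above, but the two ingredients it names are standard. The sketch below is a minimal, illustrative combination of them: a population of configurations evolved with DE's rand/1 mutation and binomial crossover, evaluated along a Hyperband-style geometric budget schedule. The toy objective, function names, and schedule are assumptions for illustration only; this is not the authors' DEHB implementation.

```python
import numpy as np

# Toy objective: lower is better; `budget` plays the role of a fidelity
# (e.g. training epochs). Purely illustrative, not a benchmark from the paper.
def evaluate(config, budget):
    return float(np.sum((config - 0.3) ** 2) + 0.1 / budget)

def de_step(pop, fitness, budget, rng, F=0.5, CR=0.5):
    """One generation of DE rand/1 mutation + binomial crossover + greedy selection."""
    n, dim = pop.shape
    for i in range(n):
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])        # rand/1 mutation
        cross = rng.random(dim) < CR                      # binomial crossover mask
        cross[rng.integers(dim)] = True                   # keep at least one mutant gene
        trial = np.clip(np.where(cross, mutant, pop[i]), 0.0, 1.0)
        f = evaluate(trial, budget)
        if f < fitness[i]:                                # greedy replacement
            pop[i], fitness[i] = trial, f
    return pop, fitness

rng = np.random.default_rng(0)
dim, budgets = 4, [3, 9, 27]                              # geometric budget schedule (eta = 3)
pop = rng.random((10, dim))                               # population of configs in [0, 1]^dim
for b in budgets:                                         # evolve at increasing fidelities
    fitness = np.array([evaluate(c, b) for c in pop])     # re-score at the current fidelity
    pop, fitness = de_step(pop, fitness, b, rng)
print("best config:", pop[np.argmin(fitness)], "fitness:", round(float(fitness.min()), 4))
```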
Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
TLDR
This survey seeks to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization
TLDR
This paper proposes a set of methods that extend current approaches to jointly optimize neural architectures and hyperparameters with respect to multiple objectives and hopes that these methods will serve as simple baselines for future research on multi-objective joint NAS + HPO.
HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO
TLDR
HPOBench is proposed, which includes 7 existing and 5 new benchmark families, with in total more than 100 multi-fidelity benchmark problems, and provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations.
Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges
TLDR
This work gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with machine learning pipelines, runtime improvements, and parallelization.
NAS-Bench-x11 and the Power of Learning Curves
TLDR
To extend the benefits of tabular NAS benchmarks to larger, more realistic NAS search spaces which cannot be evaluated exhaustively, it was proposed to construct surrogate NAS benchmarks, including NAS-Bench-301, which was created by training 60,000 architectures from the DARTS search space and then fitting a surrogate model that can be used to estimate the performance of all 10^18 architectures in the DARTS search space.
SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization
TLDR
SMAC3 offers a robust and flexible framework for Bayesian Optimization, which can improve performance within a few evaluations, and offers several facades and pre-sets for typical use cases, such as optimizing hyperparameters.
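
The SMAC3 entry above, like the "Sequential Model-Based Optimization for General Algorithm Configuration" reference further down, builds on the same basic loop: fit a surrogate to the observations collected so far, maximize an acquisition function over candidate configurations, evaluate the chosen candidate, and repeat. The sketch below shows that generic loop with a Gaussian-process surrogate and expected improvement; it is a hedged illustration of the paradigm, not SMAC's actual API (SMAC itself ships its own facades and uses a random-forest surrogate).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                                   # toy 1-D objective, illustrative only
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(mu, sigma, best):
    """EI for minimization; larger values mark more promising candidates."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))                 # initial design
y = objective(X).ravel()

for _ in range(10):                                 # sequential model-based optimization loop
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)   # fit surrogate
    cand = rng.uniform(-2, 2, size=(256, 1))                    # random candidate configs
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])                                   # evaluate and update history
    y = np.append(y, objective(x_next)[0])

print("best x:", X[np.argmin(y)].item(), "best y:", y.min())
```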

References

Showing 1-10 of 39 references
Regularized Evolution for Image Classifier Architecture Search
TLDR
This work evolves an image classifier---AmoebaNet-A---that surpasses hand-designs for the first time and gives evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search. A minimal sketch of the underlying aging-evolution loop is shown after this reference list.
Algorithms for Hyper-Parameter Optimization
TLDR
This work contributes novel techniques for making response surface models P(y|x) in which many elements of hyper-parameter assignment (x) are known to be irrelevant given particular values of other elements.
Differential Evolution for Neural Architecture Search
TLDR
This paper comprehensively compares this search strategy to regularized evolution and Bayesian optimization and demonstrates that it yields improved and more robust results for 13 tabular NAS benchmarks based on NAS-Bench-101, NAS-Bench-1Shot1, NAS-Bench-201 and NAS-HPO-Bench.
Sequential Model-Based Optimization for General Algorithm Configuration
TLDR
This paper extends the explicit regression models paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances, and yields state-of-the-art performance.
NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search
TLDR
A general framework for one-shot NAS that can be instantiated to many recently-introduced variants and a general benchmarking framework that draws on the recent large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods are introduced.
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search
  • arXiv preprint arXiv:2001.00326
  • 2020
Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly
TLDR
This work presents Dragonfly, an open source Python library for scalable and robust BO, and develops new methodological improvements in BO for selecting the Bayesian model, selecting the acquisition function, and optimising over complex domains with different variable types and additional constraints.
DARTS: Differentiable Architecture Search
TLDR
The proposed algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
NAS-Bench-101: Towards Reproducible Neural Architecture Search
TLDR
This work introduces NAS-Bench-101, the first public architecture dataset for NAS research, which allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset.
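
The "Regularized Evolution for Image Classifier Architecture Search" entry above refers to aging evolution: repeatedly sample a small tournament from the population, mutate its best member, add the child, and discard the oldest member rather than the worst. The sketch below runs that loop on a toy bit-string "architecture"; the encoding and fitness function are assumptions for illustration, not the paper's AmoebaNet search code.

```python
import random

DIM = 16                                              # length of the toy architecture encoding

def fitness(arch):                                    # toy objective: count of 1-bits (illustrative)
    return sum(arch)

def mutate(arch):
    child = arch[:]
    i = random.randrange(DIM)
    child[i] = 1 - child[i]                           # flip one "operation" choice
    return child

random.seed(0)
POP_SIZE, SAMPLE_SIZE, CYCLES = 20, 5, 200
population = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(POP_SIZE)]
history = list(population)

for _ in range(CYCLES):
    tournament = random.sample(population, SAMPLE_SIZE)   # random tournament
    parent = max(tournament, key=fitness)                 # best member of the sample
    child = mutate(parent)
    population.append(child)                              # add the child...
    population.pop(0)                                     # ...and remove the OLDEST member (aging), not the worst
    history.append(child)

best = max(history, key=fitness)
print("best found:", best, "fitness:", fitness(best))
```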