On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning
This work demonstrates that this problem can be tackled effectively with automated HPO, and shows that tuning several MBRL hyperparameters dynamically, i.e. during training itself, further improves performance compared to using hyperparameters that are kept static for the whole training.
Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework
RL is a robust candidate for learning configuration policies, outperforming standard parameter optimization approaches, such as classical algorithm configuration; based on function approximation, RL agents can learn to generalize to new types of instances; and self-paced learning can substantially improve the performance by selecting a useful sequence of training instances automatically.
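The dynamic-configuration setting described above can be made concrete with a minimal sketch (all names here are invented for illustration, not the paper's implementation): a policy observes the running algorithm's state at every iteration and emits the hyperparameter value for the next step, with a hand-coded schedule standing in for the learned RL policy.

```python
def toy_algorithm_step(x, step_size):
    """One iteration of a stand-in target algorithm:
    a gradient step on f(x) = x**2."""
    return x - step_size * 2 * x

def run(policy, steps=20, x0=10.0):
    """Run the target algorithm while `policy(state)` picks the
    hyperparameter (step size) anew at every iteration -- the
    dynamic algorithm configuration setting."""
    x = x0
    for t in range(steps):
        x = toy_algorithm_step(x, policy({"t": t, "x": x}))
    return abs(x)

# Static configuration: one fixed step size for the whole run.
static = run(lambda state: 0.05)
# Dynamic configuration: large steps early, small steps late
# (a hand-coded stand-in for a learned state-to-hyperparameter policy).
dynamic = run(lambda state: 0.4 if state["t"] < 10 else 0.05)
# On this toy problem the dynamic schedule reaches a far smaller final error.
```

The point of the sketch is only the interface: the hyperparameter becomes a per-step decision conditioned on the algorithm's state, rather than a value fixed before the run.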
Towards White-box Benchmarks for Algorithm Control
This work formulates the problem of adjusting an algorithm's hyperparameters for a given instance on the fly as a contextual MDP, making reinforcement learning (RL) the prime candidate for solving the resulting algorithm control problem in a data-driven way.
TempoRL: Learning When to Act
This work proposes a proactive setting in which the agent not only selects an action in a state but also for how long to commit to that action, and introduces skip connections between states and learns a skip-policy for repeating the same action along these skips.
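The skip mechanism can be sketched in a few lines (the toy environment and names are invented here, not taken from the paper): the policy returns both an action and a commitment length, so a single decision can cover many environment steps.

```python
def run_with_skips(skip_policy, goal=10):
    """TempoRL-style rollout in a toy 1-D corridor (start 0, goal 10):
    at each decision point the agent picks an action AND how many
    steps to commit to it (the 'skip')."""
    pos, decisions, env_steps = 0, 0, 0
    while pos < goal:
        action, skip = skip_policy(pos)   # joint (action, skip) choice
        for _ in range(skip):             # repeat the action along the skip
            pos += action
            env_steps += 1
            if pos >= goal:
                break
        decisions += 1
    return decisions, env_steps

flat = run_with_skips(lambda pos: (1, 1))      # decide anew at every step
skipping = run_with_skips(lambda pos: (1, 5))  # commit for 5 steps at a time
# flat -> (10, 10): ten decisions; skipping -> (2, 10): two decisions
# for the same ten environment steps.
```

Committing to an action compresses the decision sequence, which is what makes learning a skip-policy attractive in long-horizon tasks.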
Efficient Parameter Importance Analysis via Ablation with Surrogates
- André Biedenkapp, M. Lindauer, Katharina Eggensperger, F. Hutter, C. Fawcett, H. Hoos
- Computer Science · AAAI
- 12 February 2017
It is shown how the running-time cost of ablation analysis, a well-known general-purpose approach for assessing parameter importance, can be reduced substantially by using regression models of algorithm performance constructed from data collected during the configuration process.
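The core idea can be illustrated with a toy greedy ablation loop (the surrogate function and configurations below are made up for illustration): instead of re-running the target algorithm for every candidate parameter flip, a cheap performance model predicts the cost of each flip.

```python
def ablation_path(default, optimized, surrogate):
    """Greedy ablation analysis: starting from the default configuration,
    repeatedly flip the single parameter (default -> optimized value)
    that the surrogate model predicts to reduce cost the most.
    The surrogate stands in for costly real algorithm runs."""
    current, path = dict(default), []
    while current != optimized:
        flips = [p for p in current if current[p] != optimized[p]]
        best = min(flips,
                   key=lambda p: surrogate({**current, p: optimized[p]}))
        current[best] = optimized[best]
        path.append((best, surrogate(current)))
    return path

# Hypothetical surrogate: predicted runtime, dominated by 'heuristic'.
def surrogate(cfg):
    return 100 - 80 * (cfg["heuristic"] == "h2") - 5 * (cfg["restarts"] == 10)

default = {"heuristic": "h1", "restarts": 1}
optimized = {"heuristic": "h2", "restarts": 10}
# 'heuristic' is flipped first, revealing it explains most of the gain.
path = ablation_path(default, optimized, surrogate)
```

Each surrogate query replaces a full algorithm run, which is where the substantial cost reduction comes from.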
CAVE: Configuration Assessment, Visualization and Evaluation
CAVE aims to help algorithm and configurator developers to better understand their experimental setup in an automated fashion by providing a tool that automatically generates comprehensive reports and insightful figures from all available empirical data.
BOAH: A Tool Suite for Multi-Fidelity Bayesian Optimization & Analysis of Hyperparameters
A comprehensive tool suite for effective multi-fidelity Bayesian optimization and the analysis of its runs is introduced, written in Python, that provides a simple way to specify complex design spaces, a robust and efficient combination of Bayesian optimization and HyperBand, and a comprehensive analysis of the optimization process and its outcomes.
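The HyperBand half of that combination rests on successive halving, which can be sketched in pure Python (the objective function below is made up for illustration; BOAH's actual optimizer additionally uses a Bayesian model to propose configurations).

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Multi-fidelity core of HyperBand-style methods: evaluate many
    configurations on a small budget, keep the best 1/eta fraction,
    and re-run the survivors on eta times the budget until one remains."""
    budget = min_budget
    while len(configs) > 1:
        scores = {c: evaluate(c, budget) for c in configs}
        keep = max(1, len(configs) // eta)
        configs = sorted(configs, key=scores.get)[:keep]  # lower loss wins
        budget *= eta
    return configs[0]

# Hypothetical objective: loss shrinks with budget, offset by |lr - 0.1|.
def evaluate(lr, budget):
    return abs(lr - 0.1) + 1.0 / budget

candidates = [0.001, 0.01, 0.1, 0.5, 1.0, 0.05, 0.2, 0.15, 0.3]
best = successive_halving(candidates, evaluate)  # converges to lr = 0.1
```

Cheap low-budget evaluations prune most candidates early, so the expensive full-budget runs are spent only on promising configurations.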
Learning Step-Size Adaptation in CMA-ES
Sample-Efficient Automated Deep Reinforcement Learning
A population-based automated RL (AutoRL) framework is proposed that meta-optimizes arbitrary off-policy RL algorithms, tuning the hyperparameters and also the neural architecture while simultaneously training the agent; sharing the collected experience across the population substantially increases the sample efficiency of the meta-optimization.
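The population-based exploit/explore mechanism can be sketched as follows (a toy illustration with invented names; the actual framework also shares the collected experience across the population, which this sketch omits):

```python
import random

def pbt_step(population, evaluate, explore_scale=1.5):
    """One exploit/explore round of population-based training on a
    population of hyperparameter settings (here: single floats).
    Bottom-half members copy a top-half member's value (exploit),
    then perturb it (explore); the top half survives unchanged."""
    ranked = sorted(population, key=evaluate, reverse=True)
    top = ranked[:len(ranked) // 2]
    bottom = ranked[len(ranked) // 2:]
    new_pop = list(top)
    for _ in bottom:
        parent = random.choice(top)                        # exploit
        factor = random.choice([1 / explore_scale, explore_scale])
        new_pop.append(parent * factor)                    # explore
    return new_pop

# Hypothetical score: closeness of a learning rate to 0.1 (higher is better).
score = lambda lr: -abs(lr - 0.1)
population = [0.5, 0.01, 0.2, 1.0]
population = pbt_step(population, score)  # winners kept, losers replaced
```

Because top performers are never discarded, the best score in the population is monotonically non-decreasing across rounds while the perturbations keep exploring nearby settings.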
Towards Assessing the Impact of Bayesian Optimization's Own Hyperparameters
- M. Lindauer, Matthias Feurer, Katharina Eggensperger, André Biedenkapp, F. Hutter
- Computer Science · ArXiv
- 19 August 2019
It is shown that tuning can improve the any-time performance of different BO approaches, that optimized BO settings also perform well on similar problems and partially even on problems from other problem families, and which BO hyperparameters are most important.