Corpus ID: 235368382

An Empirical Study of Assumptions in Bayesian Optimisation

@inproceedings{CowenRivers2020AnES,
  title={An Empirical Study of Assumptions in Bayesian Optimisation},
  author={Alexander Imani Cowen-Rivers and Wenlong Lyu and Rasul Tutunov and Zhi Wang and Antoine Grosnit and Ryan-Rhys Griffiths and Hao Jianye and Jun Wang and Jan Peters and Haitham Ammar},
  year={2020}
}
In this work we rigorously analyse assumptions inherent to black-box optimisation hyper-parameter tuning tasks. Our results on the Bayesmark benchmark indicate that heteroscedasticity and non-stationarity pose significant challenges for black-box optimisers. Based on these findings, we propose a Heteroscedastic and Evolutionary Bayesian Optimisation solver (HEBO). HEBO performs non-linear input and output warping, admits exact marginal log-likelihood optimisation and is robust to the values of… 
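The abstract names two concrete ingredients: non-linear input and output warping, and exact marginal log-likelihood optimisation of the surrogate. As an illustration only (a minimal sketch, not HEBO's actual implementation), the snippet below applies a Kumaraswamy input warp and a simple output standardisation, then fits RBF Gaussian-process hyperparameters by minimising the exact negative log marginal likelihood; the data, constants, and function names are assumptions made here.

```python
# Minimal illustrative sketch (not HEBO's implementation): a non-linear
# Kumaraswamy input warp plus exact GP marginal log-likelihood optimisation of
# the warp and kernel hyperparameters. Data, constants, and names are made up.
import numpy as np
from scipy.optimize import minimize

def kumaraswamy_warp(x, a, b):
    """Non-linear warp of inputs in [0, 1]: the Kumaraswamy CDF."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return 1.0 - (1.0 - x ** a) ** b

def rbf_kernel(X1, X2, lengthscale, variance):
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def neg_log_marginal_likelihood(theta, X, y):
    """Exact GP negative log marginal likelihood; theta stores log-parameters."""
    log_a, log_b, log_ls, log_var, log_noise = theta
    Xw = kumaraswamy_warp(X, np.exp(log_a), np.exp(log_b))
    K = rbf_kernel(Xw, Xw, np.exp(log_ls), np.exp(log_var))
    K += (np.exp(log_noise) + 1e-8) * np.eye(len(X))   # noise plus jitter
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))
            + 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 2))                          # inputs rescaled to [0, 1]
y = np.sin(5 * X[:, 0]) + 0.1 * rng.standard_normal(30)
y = (y - y.mean()) / y.std()                           # simple output transform

result = minimize(neg_log_marginal_likelihood, x0=np.zeros(5),
                  args=(X, y), method="L-BFGS-B")
print("optimised log-hyperparameters:", result.x)
```

The Cholesky expression above is the standard closed form of the GP marginal likelihood; because the warp parameters enter the kernel through the warped inputs, they are learned jointly with the lengthscale, signal variance, and noise, which is the sense in which the warping is fitted rather than fixed.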
Are we Forgetting about Compositional Optimisers in Bayesian Optimisation?
TLDR
This paper highlights the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments comprising synthetic optimisation tasks as well as tasks from the 2020 NeurIPS competition on Black-Box Optimisation for Machine Learning.
Efficient and Reactive Planning for High Speed Robot Air Hockey
TLDR
It is shown how to design a policy for general-purpose robotic manipulators for the air hockey game, that a real robot arm can perform fast-hitting movements, and that two robots can play against each other on a medium-size air hockey table in simulation.
B2EA: An Evolutionary Algorithm Assisted by Two Bayesian Optimization Modules for Neural Architecture Search
TLDR
This work carefully reviews the latest multi-trial NAS algorithms, identifies the key strategies including Evolutionary Algorithm (EA), Bayesian Optimization (BO), diversification, input and output transformations, and lower fidelity estimation, and develops B2EA, a surrogate-assisted EA with two BO surrogate models and a mutation step in between.
Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks
The most significant barrier to the advancement of Neural Architecture Search (NAS) is its demand for large computational resources, which hinders scientifically sound empirical evaluations of NAS
HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO
TLDR
HPOBench is proposed, which includes 7 existing and 5 new benchmark families, with in total more than 100 multi-fidelity benchmark problems, and provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations.
Reinforced Few-Shot Acquisition Function Learning for Bayesian Optimization
TLDR
This paper first connects the notion of AFs with Q-functions, views a deep Q-network (DQN) as a surrogate differentiable AF, and presents a Bayesian variant of DQN that, among other features, learns a distribution of Q-networks as AFs based on the Kullback-Leibler regularization framework and mitigates overfitting.
Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks
TLDR
It is shown that surrogate NAS benchmarks can model the true performance of architectures better than tabular benchmarks (at a small fraction of the cost), that they lead to faithful estimates of how well different NAS methods work on the original non-surrogate benchmark, and that they can generate new scientific insight.
AntBO: Towards Real-World Automated Antibody Design with Combinatorial Bayesian Optimisation
TLDR
The results across 188 antigens demonstrate the benefit of AntBO in designing CDRH3 regions with diverse biophysical properties, and in under 200 protein designs, AntBO can suggest antibody sequences that outperform the best binding sequence drawn from 6.9 million experimentally obtained CDRH3s and a commonly used genetic algorithm baseline.
High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning
TLDR
A method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces is introduced, using label guidance from the blackbox function to structure the VAE latent space, facilitating the Gaussian process fit and yielding improved BO performance.
BOiLS: Bayesian Optimisation for Logic Synthesis
TLDR
BOiLS, the first algorithm adapting modern Bayesian optimisation to navigate the space of synthesis operations, is proposed, and superior performance compared to the state of the art in terms of both sample efficiency and QoR values is demonstrated.

References

Showing 1-10 of 80 references
Bayesian Optimization with Robust Bayesian Neural Networks
TLDR
This work presents a general approach for using flexible parametric models (neural networks) for Bayesian optimization, staying as close to a truly Bayesian treatment as possible and obtaining scalability through stochastic gradient Hamiltonian Monte Carlo, whose robustness is improved via a scale adaptation.
Portfolio Allocation for Bayesian Optimization
TLDR
A portfolio of acquisition functions governed by an online multi-armed bandit strategy is proposed, the best of which is called GP-Hedge, and it is shown that this method outperforms the best individual acquisition function.
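The summary above describes selecting among acquisition functions with an online multi-armed bandit. Below is a minimal sketch of that idea, assuming a Hedge-style (softmax-of-gains) rule in which each acquisition function nominates a candidate and is credited with the surrogate posterior mean at its own nominee; the function names and the toy surrogate are hypothetical, not the paper's reference code.

```python
# Hedged sketch of the portfolio idea described above (not the paper's
# reference implementation). Each acquisition function nominates a point, one
# nominee is drawn with Hedge (softmax-of-gains) probabilities, and every arm
# is then credited with the surrogate posterior mean at its own nominee.
import numpy as np

def hedge_portfolio_step(nominees, posterior_mean, gains, eta=1.0, rng=None):
    """One round of a Hedge strategy over a portfolio of acquisition functions.

    nominees       : list of points, one proposed by each acquisition function
    posterior_mean : callable x -> surrogate posterior mean (stand-in for a GP)
    gains          : cumulative gain per acquisition function, updated in place
    """
    rng = rng if rng is not None else np.random.default_rng()
    logits = eta * (gains - gains.max())             # subtract max for stability
    probs = np.exp(logits) / np.exp(logits).sum()
    chosen = rng.choice(len(nominees), p=probs)      # index of the point evaluated
    gains += np.array([posterior_mean(x) for x in nominees])
    return nominees[chosen], chosen

# Toy usage: three "acquisition functions" nominating random points in 1-D.
rng = np.random.default_rng(0)
gains = np.zeros(3)
posterior_mean = lambda x: -abs(x - 0.3)             # pretend surrogate mean
for _ in range(5):
    nominees = list(rng.uniform(size=3))
    x_next, arm = hedge_portfolio_step(nominees, posterior_mean, gains, rng=rng)
```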
Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization
TLDR
A novel algorithm, Hyperband, is introduced that formulates hyperparameter optimization as a pure-exploration non-stochastic infinite-armed bandit problem in which a predefined resource such as iterations, data samples, or features is allocated to randomly sampled configurations.
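The allocation scheme this summary describes can be illustrated with a short sketch of Hyperband's bracket structure over successive halving, shown below under the usual convention that eta is the halving rate and R the maximum per-configuration resource; the sampling and evaluation callables are placeholders, not the paper's code.

```python
# Illustrative sketch of the bracket structure described above; the constants
# and the sampling/evaluation callables are placeholders, not the paper's code.
import math
import random

def hyperband(sample_config, evaluate, R=81, eta=3):
    """sample_config() -> a random configuration;
    evaluate(config, resource) -> loss (lower is better)."""
    s_max = int(math.log(R) / math.log(eta) + 1e-9)
    B = (s_max + 1) * R
    best_loss, best_config = float("inf"), None
    for s in range(s_max, -1, -1):                      # one bracket per value of s
        n = int(math.ceil(B / R * eta ** s / (s + 1)))  # configurations to sample
        r = R * eta ** (-s)                             # initial resource per config
        configs = [sample_config() for _ in range(n)]
        for i in range(s + 1):                          # successive halving rounds
            n_i = int(n * eta ** (-i))
            r_i = r * eta ** i
            losses = [evaluate(c, r_i) for c in configs]
            order = sorted(range(len(configs)), key=lambda j: losses[j])
            if losses[order[0]] < best_loss:
                best_loss, best_config = losses[order[0]], configs[order[0]]
            keep = max(1, n_i // eta)                   # survivors for the next round
            configs = [configs[j] for j in order[:keep]]
    return best_loss, best_config

# Toy usage: a "configuration" is a number and the loss ignores the budget.
print(hyperband(lambda: random.random(), lambda c, r: (c - 0.5) ** 2))
```

Each outer bracket trades off the number of sampled configurations against the starting budget per configuration, which is what makes the scheme a pure-exploration bandit over randomly sampled arms.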
Multi-fidelity Bayesian Optimisation with Continuous Approximations
TLDR
This work develops a Bayesian optimisation method, BOCA, that achieves better regret than strategies which ignore the approximations and outperforms several other baselines in synthetic and real experiments.
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
TLDR
NASBOT is developed, a Gaussian process based BO framework for neural architecture search which outperforms other alternatives for architecture search in several cross-validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.
An Entropy Search Portfolio for Bayesian Optimization
TLDR
This work introduces the Entropy Search Portfolio (ESP), a novel approach to portfolio construction which is motivated by information theoretic considerations and shows that ESP outperforms existing portfolio methods on several real and synthetic problems, including geostatistical datasets and simulated control tasks.
Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly
TLDR
This work presents Dragonfly, an open source Python library for scalable and robust BO, and develops new methodological improvements in BO for selecting the Bayesian model, selecting the acquisition function, and optimising over complex domains with different variable types and additional constraints.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR
This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
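Since this summary mentions accounting for the variable cost of experiments, a hedged sketch of expected improvement and a cost-weighted variant (in the spirit of expected improvement per unit cost) follows; the posterior mean, standard deviation, and predicted cost are assumed to come from surrogate models and appear only as placeholder arrays here.

```python
# Hedged sketch of expected improvement (EI) and a cost-weighted variant in the
# spirit of "EI per unit cost"; the posterior mean, standard deviation, and
# predicted cost arrays are placeholders for surrogate-model outputs.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y):
    """EI for minimisation, given the posterior mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)                 # guard against zero variance
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def ei_per_unit_cost(mu, sigma, best_y, predicted_cost):
    """Favour candidates that are both promising and cheap to evaluate."""
    return expected_improvement(mu, sigma, best_y) / np.maximum(predicted_cost, 1e-12)

# Toy usage with made-up posterior statistics at three candidates.
mu = np.array([0.2, 0.1, 0.4])
sigma = np.array([0.05, 0.2, 0.1])
print(ei_per_unit_cost(mu, sigma, best_y=0.15, predicted_cost=np.array([1.0, 10.0, 0.5])))
```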
BOHB: Robust and Efficient Hyperparameter Optimization at Scale
TLDR
This work proposes a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian Neural networks, deep reinforcement learning, and convolutional neural networks.
Multi-Fidelity Black-Box Optimization with Hierarchical Partitions
TLDR
This work develops tree-search based multi-fidelity algorithms with theoretical guarantees on simple regret and demonstrates the performance gains of the algorithms on both real and synthetic datasets.