Corpus ID: 229181122

Are we Forgetting about Compositional Optimisers in Bayesian Optimisation?

@article{Grosnit2021AreWF,
  title={Are we Forgetting about Compositional Optimisers in Bayesian Optimisation?},
  author={Antoine Grosnit and Alexander Imani Cowen-Rivers and Rasul Tutunov and Ryan-Rhys Griffiths and Jun Wang and Haitham Bou-Ammar},
  journal={ArXiv},
  year={2021},
  volume={abs/2012.08240}
}
Bayesian optimisation presents a sample-efficient methodology for global optimisation. Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function, a task complicated by the fact that acquisition functions tend to be non-convex and thus nontrivial to optimise. In this paper, we undertake a comprehensive empirical study of approaches to maximise the acquisition function. Additionally, by deriving novel, yet mathematically equivalent… 
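To ground the discussion, here is a minimal sketch of one common baseline the paper benchmarks against: gradient-based maximisation of the analytic expected improvement with random restarts to cope with non-convexity. The toy posterior, bounds, and restart count are illustrative assumptions, not the paper's experimental setup.

```python
# A minimal sketch (not the authors' code): maximising analytic
# Expected Improvement under a stand-in GP posterior, using
# multi-start L-BFGS-B to cope with the acquisition's non-convexity.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

def gp_posterior(x):
    """Stand-in posterior mean/std; a real GP would be fitted to data."""
    x = np.atleast_1d(x)
    mu = np.sin(3.0 * x) * x           # hypothetical posterior mean
    sigma = 0.3 + 0.2 * np.cos(x) ** 2 # hypothetical posterior std
    return mu, sigma

def neg_expected_improvement(x, best_f):
    """Negative analytic EI (we minimise), for a minimisation problem."""
    mu, sigma = gp_posterior(x)
    z = (best_f - mu) / sigma
    ei = sigma * (z * norm.cdf(z) + norm.pdf(z))
    return -float(ei[0])

best_f = 0.0                                # incumbent best observed value
starts = rng.uniform(-2.0, 2.0, size=16)    # random restart locations
results = [minimize(neg_expected_improvement, x0=[s], args=(best_f,),
                    bounds=[(-2.0, 2.0)], method="L-BFGS-B")
           for s in starts]
best = min(results, key=lambda r: r.fun)
print("next query point:", best.x, "EI:", -best.fun)
```

The multi-start step matters precisely because of the non-convexity the abstract highlights: a single local ascent can easily stall at a poor stationary point of the acquisition surface.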
HEBO: An Empirical Study of Assumptions in Bayesian Optimisation
TLDR
The findings indicate that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, multiobjective acquisition ensembles with Pareto front solutions improve queried configurations, and robust acquisition maximisers afford empirical advantages relative to their non-robust counterparts.
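The acquisition-ensemble idea can be illustrated compactly. Below is a hedged sketch, not HEBO's implementation: score a candidate pool under several acquisition functions and keep the Pareto-non-dominated candidates; the posterior values and the particular acquisitions chosen are assumptions.

```python
# A minimal sketch of a multi-objective acquisition ensemble: keep the
# candidates that are Pareto-optimal across several acquisition scores.
import numpy as np
from scipy.stats import norm

def pareto_front(scores):
    """Indices of rows not dominated by any other row (larger is better)."""
    keep = np.ones(scores.shape[0], dtype=bool)
    for i in range(scores.shape[0]):
        if keep[i]:
            dominated = np.all(scores[i] >= scores, axis=1) & \
                        np.any(scores[i] > scores, axis=1)
            keep[dominated] = False
    return np.flatnonzero(keep)

rng = np.random.default_rng(1)
mu = rng.normal(size=100)           # hypothetical posterior means for a pool
sigma = rng.uniform(0.1, 1.0, 100)  # hypothetical posterior stds
best_f = mu.min()                   # incumbent best (minimisation)

z = (best_f - mu) / sigma
ei = sigma * (z * norm.cdf(z) + norm.pdf(z))  # expected improvement
lcb = -(mu - 2.0 * sigma)                     # negated lower confidence bound
pi = norm.cdf(z)                              # probability of improvement

front = pareto_front(np.column_stack([ei, lcb, pi]))
print(f"{front.size} Pareto-optimal candidates out of {mu.size}")
```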
Achieving robustness to aleatoric uncertainty with heteroscedastic Bayesian optimisation
TLDR
This paper proposes a heteroscedastic Bayesian optimisation scheme capable of representing and minimising aleatoric noise across the input space and introduces the aleatoric noise-penalised expected improvement (ANPEI) heuristic.
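A hedged sketch of the noise-penalised idea follows: trade expected improvement off against a model's predicted aleatoric noise at each candidate. The additive scalarisation and the weight `alpha` are assumptions for illustration; the exact ANPEI form is given in the paper.

```python
# A hedged sketch of a noise-penalised acquisition in the spirit of
# ANPEI: penalise expected improvement by predicted aleatoric noise.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    z = (best_f - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def noise_penalised_ei(mu, sigma, noise_std, best_f, alpha=0.5):
    """Scalarised trade-off; `alpha` and the additive form are assumptions."""
    return alpha * expected_improvement(mu, sigma, best_f) \
        - (1.0 - alpha) * noise_std

rng = np.random.default_rng(2)
mu = rng.normal(size=50)               # hypothetical posterior means
sigma = rng.uniform(0.2, 1.0, 50)      # epistemic (model) uncertainty
noise_std = rng.uniform(0.0, 0.8, 50)  # predicted aleatoric noise level
score = noise_penalised_ei(mu, sigma, noise_std, best_f=mu.min())
print("next query index:", int(np.argmax(score)))
```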
Sample-Efficient Optimisation with Probabilistic Transformer Surrogates
TLDR
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation and introduces a BO-tailored training prior supporting non-uniformly distributed points, together with a novel approximate posterior regulariser trading off accuracy and input sensitivity to favourable stationary points for improved predictive performance.
High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning
TLDR
A method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces is introduced, using label guidance from the black-box function to structure the VAE latent space, facilitating the Gaussian process fit and yielding improved BO performance.
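The latent-space BO loop can be sketched briefly. This is not the paper's implementation: the encoder/decoder below are untrained linear stand-ins for a trained VAE, and the GP posterior is a placeholder.

```python
# A hedged sketch of latent-space BO: optimise an acquisition in a
# VAE's latent space, then decode the chosen latent point for querying.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
D, d = 100, 2                        # input dim, latent dim
A = rng.normal(size=(D, d)) / np.sqrt(D)

def encode(x):   # stand-in encoder; would map observed inputs to latents
    return x @ A

def decode(z):   # stand-in decoder; would map latents back to inputs
    return z @ A.T

def posterior(z):
    """Placeholder for a GP fitted to (latent, objective) pairs."""
    mu = np.sum(z ** 2, axis=-1)
    sigma = 0.2 + 0.1 * np.abs(z[..., 0])
    return mu, sigma

# Maximise EI over a latent candidate set, then decode the winner.
Z = rng.normal(size=(500, d))
mu, sigma = posterior(Z)
best_f = 0.5                          # incumbent best (assumed)
z = (best_f - mu) / sigma
ei = sigma * (z * norm.cdf(z) + norm.pdf(z))
x_next = decode(Z[np.argmax(ei)])
print("decoded query, first 5 dims:", np.round(x_next[:5], 2))
```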
AntBO: Towards Real-World Automated Antibody Design with Combinatorial Bayesian Optimisation
TLDR
The results across 188 antigens demonstrate the benefit of AntBO in designing CDRH3 regions with diverse biophysical properties, and in under 200 protein designs, AntBO can suggest antibody sequences that outperform the best binding sequence drawn from 6.9 million experimentally obtained CDRH3s and a commonly used genetic algorithm baseline.
Gryffin: An algorithm for Bayesian optimization of categorical variables informed by expert knowledge
TLDR
This work introduces Gryffin, a Bayesian optimization algorithm for categorical variables that can incorporate expert knowledge to guide the search.

References

Showing 1-10 of 159 references
HEBO: An Empirical Study of Assumptions in Bayesian Optimisation
TLDR
The findings indicate that the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, multiobjective acquisition ensembles with Pareto front solutions improve queried configurations, and robust acquisition maximisers afford empirical advantages relative to their non-robust counterparts.
Parallelised Bayesian Optimisation via Thompson Sampling
TLDR
This work designs and analyses variations of the classical Thompson sampling procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive but can be performed in parallel, and shows that asynchronous TS outperforms a suite of existing parallel BO algorithms in simulations and in an application involving tuning hyper-parameters of a convolutional neural network.
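The core sampling step is simple to illustrate. Below is a minimal numpy sketch: each parallel worker draws an independent sample from the GP posterior over a candidate grid and queries that sample's minimiser; asynchronous TS simply redraws whenever a worker frees up. The kernel, lengthscale, and noise values are illustrative assumptions.

```python
# A minimal sketch of (parallel) Thompson sampling for BO.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(3)
X_obs = rng.uniform(0, 1, 5)                       # points queried so far
y_obs = np.sin(6 * X_obs) + 0.05 * rng.normal(size=5)
grid = np.linspace(0, 1, 200)

# GP posterior over the grid (noise variance 1e-3 assumed).
K = rbf_kernel(X_obs, X_obs) + 1e-3 * np.eye(5)
K_s = rbf_kernel(grid, X_obs)
K_ss = rbf_kernel(grid, grid)
K_inv = np.linalg.inv(K)
mu = K_s @ K_inv @ y_obs
cov = K_ss - K_s @ K_inv @ K_s.T + 1e-8 * np.eye(grid.size)

# Each of 4 parallel workers gets an independent posterior sample.
samples = rng.multivariate_normal(mu, cov, size=4)
queries = grid[np.argmin(samples, axis=1)]         # minimisation
print("batch of query points:", queries)
```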
Achieving robustness to aleatoric uncertainty with heteroscedastic Bayesian optimisation
TLDR
This paper proposes a heteroscedastic Bayesian optimisation scheme capable of representing and minimising aleatoric noise across the input space and introduces the aleatoric noise-penalised expected improvement (ANPEI) heuristic.
A Tutorial on Bayesian Optimization
TLDR
This tutorial describes how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient, and provides a generalization of expected improvement to noisy evaluations beyond the noise-free setting where it is more commonly applied.
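For reference, the analytic expected-improvement formula the tutorial covers is a standard result; stated here for minimisation, with incumbent best f*, posterior mean mu(x) and standard deviation sigma(x), and Phi, phi the standard normal CDF and PDF:

```latex
\mathrm{EI}(x) \;=\; \sigma(x)\,\bigl(z\,\Phi(z) + \phi(z)\bigr),
\qquad z \;=\; \frac{f^{*} - \mu(x)}{\sigma(x)} .
```

The tutorial's generalisation to noisy evaluations replaces this closed form with an expectation that must typically be estimated, e.g. by Monte Carlo.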
Scalable Bayesian Optimization Using Deep Neural Networks
TLDR
This work shows that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically, which allows for a previously intractable degree of parallelism.
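The linear-in-data scaling comes from doing Bayesian inference only over the last layer. Below is a hedged sketch of that idea: treat a network's penultimate activations as basis functions and fit a Bayesian linear model on top. An untrained random MLP stands in for the trained network used in the paper, and the precisions `alpha`, `beta` are assumptions.

```python
# A hedged sketch of adaptive basis-function regression (DNGO-style):
# Bayesian linear regression on neural-network features.
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.normal(size=(1, 50)), rng.normal(size=50)

def phi(x):
    """Basis functions: one hidden tanh layer (weights would be learned)."""
    return np.tanh(x[:, None] @ W1 + b1)

X = rng.uniform(-1, 1, 30)
y = np.sin(4 * X) + 0.1 * rng.normal(size=30)

# Posterior over last-layer weights, prior precision alpha, noise
# precision beta (both assumed here).
alpha, beta = 1.0, 100.0
F = phi(X)                               # (n, 50) feature matrix
A = alpha * np.eye(50) + beta * F.T @ F  # posterior precision
m = beta * np.linalg.solve(A, F.T @ y)   # posterior mean weights

# Predictive mean/variance at test points: linear in the number of data.
Xs = np.linspace(-1, 1, 5)
Fs = phi(Xs)
mean = Fs @ m
var = 1.0 / beta + np.einsum("ij,ij->i", Fs @ np.linalg.inv(A), Fs)
print(np.round(mean, 2), np.round(var, 3))
```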
Maximizing acquisition functions for Bayesian optimization
TLDR
This work shows that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization and identifies a common family of acquisition functions, including EI and UCB, whose characteristics not only facilitate but justify use of greedy approaches for their maximization.
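The greedy batch construction that this paper's analysis justifies can be sketched in a few lines: grow the batch one point at a time, each time adding the candidate that most increases a Monte Carlo estimate of the batch expected improvement. The joint posterior samples below are synthetic placeholders a fitted GP would normally supply.

```python
# A minimal sketch of greedy maximisation of Monte Carlo batch EI.
import numpy as np

rng = np.random.default_rng(5)
n_cand, n_mc = 200, 512
# Hypothetical joint posterior samples over a candidate pool
# (rows: MC draws, columns: candidates).
samples = rng.normal(size=(n_mc, n_cand)) + np.linspace(-1, 1, n_cand)
best_f = 0.0

batch = []
batch_min = np.full(n_mc, np.inf)       # per-sample min over the batch
for _ in range(4):                      # batch size q = 4
    cand_min = np.minimum(batch_min[:, None], samples)  # (n_mc, n_cand)
    qei = np.maximum(best_f - cand_min, 0.0).mean(axis=0)
    qei[batch] = -np.inf                # don't re-pick chosen candidates
    j = int(np.argmax(qei))
    batch.append(j)
    batch_min = cand_min[:, j]
print("greedy batch indices:", batch)
```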
High-Dimensional Bayesian Optimisation with Variational Autoencoders and Deep Metric Learning
TLDR
A method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces is introduced, using label guidance from the black-box function to structure the VAE latent space, facilitating the Gaussian process fit and yielding improved BO performance.
A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes
TLDR
This paper proposes MF-MI-Greedy, a principled algorithmic framework for addressing multi-fidelity Bayesian optimization with complex structural dependencies among multiple outputs, and proposes a simple notion of regret which incorporates the cost of different fidelities.
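A heavily hedged sketch of the cost-sensitive selection principle behind such multi-fidelity methods: pick the (candidate, fidelity) pair with the best uncertainty-reduction proxy per unit cost. The proxy, costs, and correlations below are illustrative assumptions, not the paper's exact mutual-information criterion.

```python
# A hedged sketch of cost-weighted multi-fidelity candidate selection.
import numpy as np

rng = np.random.default_rng(6)
n_cand = 50
cost = np.array([1.0, 5.0, 25.0])      # cost per fidelity (assumed)
corr = np.array([0.5, 0.8, 1.0])       # fidelity-target correlation (assumed)
sigma = rng.uniform(0.1, 1.0, n_cand)  # posterior std per candidate

# Proxy for information about the target: variance reduction scaled by
# how informative the chosen fidelity is about the highest fidelity.
gain = corr[None, :] ** 2 * sigma[:, None] ** 2   # (n_cand, n_fidelities)
score = gain / cost[None, :]
i, m = np.unravel_index(np.argmax(score), score.shape)
print(f"query candidate {i} at fidelity {m} (cost {cost[m]})")
```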
Compositional ADAM: An Adaptive Compositional Solver
In this paper, we present C-ADAM, the first adaptive solver for compositional problems involving a non-linear functional nesting of expected values. We prove that C-ADAM converges to a stationary point.
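To make the compositional setting concrete, here is a hedged toy: minimise f(E[g(x; xi)]) where the inner expectation can only be sampled. The moving-average tracker for the inner value is a standard compositional-SGD ingredient; the step sizes are assumptions, and a plain SGD update stands in for C-ADAM's Adam-style adaptive moments.

```python
# A hedged toy of a compositional problem: minimise f(E[g(x)]) with
# f(u) = (u - 1)^2 and E[g(x)] = x^2, so the optimum is x = +/-1.
import numpy as np

rng = np.random.default_rng(7)

def g_sample(x):       # noisy inner function, E[g] = x**2
    return x ** 2 + 0.1 * rng.normal()

def g_grad_sample(x):  # noisy gradient of the inner function
    return 2.0 * x + 0.1 * rng.normal()

def f_grad(u):         # gradient of the (known) outer function
    return 2.0 * (u - 1.0)

x, u = 2.0, 0.0        # u tracks E[g(x)] via a moving average
lr, beta = 0.02, 0.9
for _ in range(2000):
    u = beta * u + (1 - beta) * g_sample(x)  # inner-value tracker
    grad = g_grad_sample(x) * f_grad(u)      # chain-rule estimate
    x -= lr * grad                           # SGD step; C-ADAM would use
                                             # adaptive moments instead
print("x ->", round(x, 3), "(minimisers of (x^2 - 1)^2 are +/-1)")
```

The tracker `u` is what distinguishes compositional solvers from ordinary stochastic gradient methods: a single sample of g plugged directly into f's gradient would give a biased estimate of the true compositional gradient.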
Bayesian Optimization of Composite Functions
TLDR
This work proposes a novel approach that exploits the composite structure of the objective function to substantially improve sampling efficiency and provides a novel stochastic gradient estimator that allows its efficient maximization.
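The composite-structure idea admits a compact sketch: model the inner vector-valued h(x) with a surrogate, then estimate the expected improvement of the known outer g by Monte Carlo over h's posterior. The posterior means and stds below are placeholders for a fitted multi-output GP, and the choice of g is an assumption.

```python
# A minimal sketch of composite-function BO: MC expected improvement
# for a known outer g applied to a surrogate-modelled inner h(x).
import numpy as np

rng = np.random.default_rng(8)

def g(h):                            # known outer function
    return np.sum(h ** 2, axis=-1)   # e.g. a squared residual norm

n_cand, n_mc, h_dim = 100, 256, 3
h_mean = rng.normal(size=(n_cand, h_dim))       # surrogate posterior mean
h_std = rng.uniform(0.1, 0.5, (n_cand, h_dim))  # surrogate posterior std
best_f = 1.0                                    # incumbent objective value

# MC estimate of EI for minimising g(h(x)), exploiting that g is known.
draws = h_mean[None] + h_std[None] * rng.normal(size=(n_mc, n_cand, h_dim))
ei_cf = np.maximum(best_f - g(draws), 0.0).mean(axis=0)
print("next query index:", int(np.argmax(ei_cf)))
```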