Corpus ID: 85459638

Sampling Acquisition Functions for Batch Bayesian Optimization

@article{Palma2019SamplingAF,
  title={Sampling Acquisition Functions for Batch Bayesian Optimization},
  author={Alessandro De Palma and Celestine Mendler-D{\"u}nner and Thomas Parnell and Andreea Anghel and Haralampos Pozidis},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.09434}
}
This paper presents Acquisition Thompson Sampling (ATS), a novel algorithm for batch Bayesian Optimization (BO) based on the idea of sampling multiple acquisition functions from a stochastic process. We define this process through the dependency of the acquisition functions on a set of model parameters. ATS is conceptually simple, straightforward to implement and, unlike other batch BO methods, it can be employed to parallelize any sequential acquisition function. In order to improve… 
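To make the idea concrete, here is a minimal sketch of how sampling acquisition functions could look in practice, assuming a toy 1-D objective, an RBF kernel, and a log-normal perturbation of the length-scale as the "sampled" model parameters; this illustrates the concept only and is not the authors' implementation.

```python
# Minimal sketch of the core ATS idea (illustrative, not the authors' code):
# each batch point comes from maximizing an acquisition function built under a
# different sampled set of GP hyperparameters.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(x):                         # toy 1-D black-box function (assumption)
    return np.sin(3 * x) + 0.5 * x

X = rng.uniform(0, 3, size=(5, 1))        # initial design
y = objective(X).ravel()
candidates = np.linspace(0, 3, 400).reshape(-1, 1)

def expected_improvement(gp, Xc, y_best):
    mu, sd = gp.predict(Xc, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (mu - y_best) / sd                # maximization convention
    return (mu - y_best) * norm.cdf(z) + sd * norm.pdf(z)

batch = []
for _ in range(4):                        # batch size 4
    # "Sample" an acquisition function by sampling model parameters: here a
    # log-normal perturbation of a reference length-scale (assumption).
    ls = float(np.exp(rng.normal(np.log(0.5), 0.4)))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=ls), optimizer=None,
                                  normalize_y=True).fit(X, y)
    ei = expected_improvement(gp, candidates, y.max())
    batch.append(candidates[np.argmax(ei)])

print(np.array(batch).ravel())            # one proposal per sampled acquisition
```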

Citations

ϵ-shotgun: ϵ-greedy batch Bayesian optimisation
TLDR
This work presents an ϵ-greedy procedure for Bayesian optimisation in batch settings in which the black-box function can be evaluated multiple times in parallel, and finds that it performs at least as well as state-of-the-art batch methods and in many cases exceeds their performance.
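As a rough illustration of the ϵ-greedy batch rule underlying this line of work (the toy objective and the jitter around the exploit point are assumptions, not the ϵ-shotgun procedure itself):

```python
# Generic epsilon-greedy batch selection (illustrative sketch): exploit the
# surrogate's best mean prediction most of the time, otherwise pick a
# uniformly random candidate for exploration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(8, 1))
y = (X ** 2).ravel()                                  # toy objective to minimize (assumption)
candidates = np.linspace(-2, 2, 300).reshape(-1, 1)

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
mu = gp.predict(candidates)
x_best = candidates[np.argmin(mu)]                    # exploit target: best predicted mean

eps, batch = 0.2, []
for _ in range(5):                                    # batch size 5
    if rng.random() < eps:                            # explore: random candidate
        batch.append(candidates[rng.integers(len(candidates))])
    else:                                             # exploit: jitter around the best mean (assumption)
        batch.append(x_best + rng.normal(0.0, 0.05, size=x_best.shape))
print(np.array(batch).ravel())
```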
Asynchronous ε-Greedy Bayesian Optimisation
TLDR
A novel asynchronous BO method, AEGiS (Asynchronous ε-Greedy Global Search), is developed that combines greedy search, exploiting the surrogate's mean prediction, with Thompson sampling and random selection from the approximate Pareto set describing the trade-off between exploitation and exploration.
A New Bayesian Optimization Algorithm for Complex High-Dimensional Disease Epidemic Systems
TLDR
The Improved Bayesian Optimization algorithm adds a series of Adam-based steps at the final stage of the algorithm to increase the solution's accuracy and has great potential for solving other complex, high-dimensional optimal control problems.
Recent Advances in Bayesian Optimization
TLDR
This paper attempts to provide a comprehensive and updated survey of recent advances in Bayesian optimization and identify interesting open problems and promising future research directions.
Attentive Neural Processes and Batch Bayesian Optimization for Scalable Calibration of Physics-Informed Digital Twins
TLDR
To handle large-scale calibration of digital twins without exorbitant simulations, this work proposes ANP-BBO: a scalable and parallelizable batch-wise Bayesian optimization (BBO) methodology that leverages attentive neural processes (ANPs).
Efficient Closed-loop Maximization of Carbon Nanotube Growth Rate using Bayesian Optimization
TLDR
A promising application of BO is demonstrated in CNT synthesis as an efficient and robust algorithm that can improve the CNT growth rate in the BO-planned experiments over the seed experiments by up to a factor of 8 while rapidly improving its predictive power.

References

SHOWING 1-10 OF 32 REFERENCES
Parallelised Bayesian Optimisation via Thompson Sampling
TLDR
This work designs and analyses variations of the classical Thompson sampling procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive but can be performed in parallel, and shows that asynchronous TS outperforms a suite of existing parallel BO algorithms in simulations and in an application involving tuning hyper-parameters of a convolutional neural network.
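A minimal sketch of batch Thompson sampling with a GP surrogate, assuming a toy objective and a discretized candidate grid: each posterior draw contributes one batch point.

```python
# Batch Thompson sampling sketch: each function drawn from the GP posterior
# contributes one batch point, namely its argmax over the candidate grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.uniform(0, 3, size=(6, 1))
y = np.sin(3 * X).ravel()                             # toy objective (assumption)
candidates = np.linspace(0, 3, 300).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              normalize_y=True).fit(X, y)
samples = gp.sample_y(candidates, n_samples=4, random_state=0)   # shape (300, 4)

batch = candidates[np.argmax(samples, axis=0)].ravel()
print(batch)                                          # one point per posterior draw
```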
Batched Gaussian Process Bandit Optimization via Determinantal Point Processes
TLDR
This paper proposes a new approach for parallelizing Bayesian optimization by modeling the diversity of a batch via Determinantal point processes (DPPs) whose kernels are learned automatically, and indicates that DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.
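One common way to realize the diversity idea is greedy log-determinant (MAP-style) selection under a DPP kernel; the quality-weighted RBF similarity below is an illustrative assumption, not the learned kernels of the paper.

```python
# Greedy (MAP-style) selection under a DPP-like kernel: repeatedly add the
# candidate that yields the largest log-determinant of the kernel submatrix,
# which favours points that are both high-quality and mutually diverse.
import numpy as np

cand = np.linspace(0, 1, 200).reshape(-1, 1)
quality = np.exp(-((cand - 0.3) ** 2) / 0.02).ravel() + 0.1     # stand-in acquisition values (assumption)

# Quality-weighted similarity kernel: L_ij = q_i * k(x_i, x_j) * q_j
sq = np.sum(cand ** 2, axis=1)
dists = sq[:, None] + sq[None, :] - 2.0 * cand @ cand.T
K = np.exp(-dists / (2 * 0.05 ** 2))
L = quality[:, None] * K * quality[None, :]

selected = []
for _ in range(5):                                              # batch size 5
    best_i, best_logdet = None, -np.inf
    for i in range(len(cand)):
        if i in selected:
            continue
        idx = selected + [i]
        _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx)))
        if logdet > best_logdet:
            best_i, best_logdet = i, logdet
    selected.append(best_i)

print(cand[selected].ravel())                                   # diverse, high-quality batch
```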
Exploiting Strategy-Space Diversity for Batch Bayesian Optimization
TLDR
This paper proposes a novel approach to batch Bayesian optimisation that uses a multi-objective optimisation framework with exploitation and exploration as its two objectives, and that can efficiently handle the optimisation of a variety of functions with a small to large number of local extrema.
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR
This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Portfolio Allocation for Bayesian Optimization
TLDR
A portfolio of acquisition functions governed by an online multi-armed bandit strategy is proposed; the best-performing variant, GP-Hedge, is shown to outperform the best individual acquisition function.
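A stripped-down sketch of the Hedge-style portfolio idea, assuming three standard acquisition functions and a posterior-mean reward; the learning rate and gain definition are illustrative choices rather than the paper's exact recipe.

```python
# Minimal Hedge-style portfolio over acquisition functions (illustrative sketch):
# each round every acquisition nominates a point, one nominee is chosen with
# probability proportional to exp(eta * cumulative gain), and each arm is then
# rewarded with the updated posterior mean at its own nominee.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)

def objective(x):                                  # toy function to maximize (assumption)
    return np.sin(5 * x).ravel()

X = rng.uniform(0, 1, size=(4, 1))
y = objective(X)
cand = np.linspace(0, 1, 200).reshape(-1, 1)

def ei(mu, sd, best):                              # Expected Improvement
    z = (mu - best) / sd
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

acquisitions = [ei,
                lambda mu, sd, best: mu + 2.0 * sd,   # GP-UCB-style
                lambda mu, sd, best: mu]              # pure exploitation
gains, eta = np.zeros(len(acquisitions)), 1.0

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    sd = np.maximum(sd, 1e-9)
    nominees = np.vstack([cand[np.argmax(a(mu, sd, y.max()))] for a in acquisitions])

    p = np.exp(eta * (gains - gains.max()))
    p /= p.sum()                                   # Hedge selection probabilities
    choice = rng.choice(len(acquisitions), p=p)
    x_next = nominees[choice].reshape(1, -1)

    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    gains += gp.predict(nominees)                  # reward each arm at its own nominee

print(gains)                                       # cumulative gain per acquisition function
```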
Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions
TLDR
PPES is the first non-greedy batch Bayesian optimization strategy, and the benefit of this approach is demonstrated through optimization performance on both synthetic and real-world applications, including problems in machine learning, rocket science and robotics.
Batch Bayesian Optimization via Local Penalization
TLDR
A simple heuristic based on an estimate of the Lipschitz constant is investigated; it captures the most important aspect of the interaction between points in a batch at negligible computational overhead and compares well, in running time, with much more elaborate alternatives.
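The sketch below illustrates only the penalization mechanism: a hand-set Gaussian penalty width stands in for the Lipschitz-based penalizer derived in the paper.

```python
# Sketch of batch selection by local penalization: after each point is chosen,
# the acquisition surface is damped in a neighbourhood around it so that the
# next maximizer lands elsewhere. The fixed Gaussian penalty width stands in
# for the Lipschitz-based penalizer of the paper (assumption).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 3, size=(6, 1))
y = np.cos(2 * X).ravel()                              # toy objective (assumption)
cand = np.linspace(0, 3, 400).reshape(-1, 1)

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
mu, sd = gp.predict(cand, return_std=True)
sd = np.maximum(sd, 1e-9)
z = (mu - y.max()) / sd
acq = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # Expected Improvement

batch, width = [], 0.15
for _ in range(4):                                     # batch size 4
    x_new = cand[np.argmax(acq)]
    batch.append(x_new)
    # penalty vanishes at the selected point and tends to 1 far away
    penalty = 1.0 - np.exp(-np.sum((cand - x_new) ** 2, axis=1) / (2 * width ** 2))
    acq = acq * penalty
print(np.array(batch).ravel())
```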
Batch Bayesian Optimization via Multi-objective Acquisition Ensemble for Automated Analog Circuit Design
TLDR
The experimental results show that the proposed batch Bayesian optimization approach is competitive compared with the state-of-the-art algorithms using analytical benchmark functions and real-world analog integrated circuits.
No-regret Bayesian Optimization with Unknown Hyperparameters
TLDR
This paper presents the first BO algorithm that is provably no-regret and converges to the optimum without knowledge of the hyperparameters, and proposes several practical algorithms that achieve the empirical sample efficiency of BO with online hyperparameter estimation, but retain theoretical convergence guarantees.
Distributed Batch Gaussian Process Optimization
TLDR
Empirical evaluation on synthetic benchmark objective functions and a real-world optimization problem shows that DB-GP-UCB outperforms the state-of-the-art batch BO algorithms.
...