Corpus ID: 235420715

Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization

@article{Nguyen2021TrustedMaximizersES,
  title={Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization},
  author={Quoc Phong Nguyen and Zhaoxuan Wu and Kian Hsiang Low and Patrick Jaillet},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.14465}
}
Information-based Bayesian optimization (BO) algorithms have achieved state-of-the-art performance in optimizing a black-box objective function. However, they usually require several approximations or simplifying assumptions (without clearly understanding their effects on the BO performance) and/or their generalization to batch BO is computationally unwieldy, especially with an increasing batch size. To alleviate these issues, this paper presents a novel trusted-maximizers entropy search (TES… 
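
Below is a minimal, illustrative sketch of the entropy-search idea over a finite set of trusted maximizers, using a toy RBF Gaussian process and Monte Carlo "fantasy" observations. The function names, the trusted set Z, and all modeling choices are assumptions for illustration, not the paper's actual TES algorithm.

import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=0.3, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and covariance at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, rbf_kernel(Xs, Xs) - v.T @ v

def maximizer_entropy(mean, cov, n_samples=1000):
    # Entropy of the argmax distribution over the trusted set, estimated by
    # sampling joint GP function values at the trusted maximizers.
    f = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(mean)),
                                n_samples, check_valid="ignore")
    p = np.bincount(f.argmax(axis=1), minlength=len(mean)) / n_samples
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def tes_like_acquisition(X, y, x_cand, Z, n_fantasy=8, noise=1e-4):
    # Expected reduction in argmax entropy over Z from querying x_cand.
    mu_z, cov_z = gp_posterior(X, y, Z, noise)
    h_now = maximizer_entropy(mu_z, cov_z)
    mu_c, cov_c = gp_posterior(X, y, x_cand, noise)
    h_after = 0.0
    for _ in range(n_fantasy):
        y_f = rng.normal(mu_c[0], np.sqrt(max(cov_c[0, 0], 1e-12)))
        mu_zf, cov_zf = gp_posterior(np.vstack([X, x_cand]),
                                     np.append(y, y_f), Z, noise)
        h_after += maximizer_entropy(mu_zf, cov_zf)
    return h_now - h_after / n_fantasy

# Toy 1-D usage: a few observations, a trusted set Z, and a candidate grid.
X = rng.uniform(0, 1, (5, 1))
y = np.sin(6 * X[:, 0]) + 0.01 * rng.standard_normal(5)
Z = np.linspace(0, 1, 20)[:, None]
cands = np.linspace(0, 1, 50)[:, None]
scores = [tes_like_acquisition(X, y, c[None, :], Z) for c in cands]
print("next query:", cands[int(np.argmax(scores))])

Restricting the argmax distribution to a finite set is what keeps the entropy estimate cheap here; entropy over the whole input domain would require the heavier approximations the abstract alludes to.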

Citations

Bayesian Optimization under Stochastic Delayed Feedback

TLDR
Algorithms with sub-linear regret guarantees that address the dilemma of selecting new function queries while waiting for randomly delayed feedback are proposed.

Recent Advances in Bayesian Optimization

TLDR
This paper attempts to provide a comprehensive and updated survey of recent advances in Bayesian optimization and identify interesting open problems and promising future research directions.

Efficient Distributionally Robust Bayesian Optimization with Worst-case Sensitivity

TLDR
A fast approximation of the worst-case expected value, based on the notion of worst-case sensitivity, that caters to arbitrary convex distribution distances is developed, and it is empirically shown to be competitive with using the exact worst-case expected value while incurring significantly less computation time.

Differentially Private Federated Bayesian Optimization with Distributed Exploration

TLDR
The resulting differentially private FTS with DE (DP-FTS-DE) algorithm comes with theoretical guarantees for both privacy and utility and yields interesting theoretical insights about the privacy-utility trade-off.

References

Showing 1-10 of 39 references

Output-Space Predictive Entropy Search for Flexible Global Optimization

TLDR
This work treats the latent maximizer as a random variable and selects the point that yields the greatest reduction in its posterior entropy, an approach that greatly simplifies the required approximations and allows for further extensions of the optimizer.
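
In standard entropy-search notation (assumed here, not quoted from this paper), with D the observed data and x* the latent maximizer, the selection rule this TLDR describes is

\alpha(\mathbf{x}) = H\big[p(\mathbf{x}^{\ast} \mid \mathcal{D})\big] - \mathbb{E}_{y \sim p(y \mid \mathcal{D}, \mathbf{x})}\Big[H\big[p(\mathbf{x}^{\ast} \mid \mathcal{D} \cup \{(\mathbf{x}, y)\})\big]\Big]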

Bayesian Optimization Meets Bayesian Optimal Stopping

TLDR
This paper proposes to unify BO (specifically, Gaussian process upper confidence bound (GP-UCB)) with Bayesian optimal stopping, yielding BO-BOS, to boost the epoch efficiency of BO, and empirically evaluates the performance of BO-BOS, demonstrating its generality in hyperparameter optimization of ML models and two other interesting applications.
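
For reference, the GP-UCB rule named above has the standard form below, where \beta_t trades off exploration and exploitation (the Bayesian-optimal-stopping component of BO-BOS is not shown):

\mathbf{x}_t = \arg\max_{\mathbf{x}} \; \mu_{t-1}(\mathbf{x}) + \beta_t^{1/2}\,\sigma_{t-1}(\mathbf{x})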

Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions

TLDR
PPES is the first non-greedy batch Bayesian optimization strategy, and the benefit of this approach is demonstrated through optimization performance on both synthetic and real-world applications, including problems in machine learning, rocket science, and robotics.

Predictive Entropy Search for Efficient Global Optimization of Black-box Functions

TLDR
This work proposes a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES), which recasts the intractable entropy-reduction acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
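
Concretely, PES uses the symmetry of mutual information to swap the roles of the maximizer x* and the observation y, so the entropies are over the one-dimensional predictive distribution rather than over x* (notation assumed):

\alpha_{\mathrm{PES}}(\mathbf{x}) = H\big[p(y \mid \mathcal{D}, \mathbf{x})\big] - \mathbb{E}_{\mathbf{x}^{\ast} \sim p(\mathbf{x}^{\ast} \mid \mathcal{D})}\Big[H\big[p(y \mid \mathcal{D}, \mathbf{x}, \mathbf{x}^{\ast})\big]\Big]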

Max-value Entropy Search for Efficient Bayesian Optimization

TLDR
It is observed that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden, and is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.
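
A minimal sketch of the MES acquisition in its closed form, given Gaussian posterior marginals (mu, sigma) at candidate points and Monte Carlo samples of the maximum value y*. How the y* samples are drawn (e.g. via a Gumbel approximation) is omitted here; the toy numbers are assumptions.

import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star_samples):
    # Average entropy reduction over sampled max-values y*.
    mu = np.asarray(mu, dtype=float)[:, None]        # (n_points, 1)
    sigma = np.asarray(sigma, dtype=float)[:, None]
    gamma = (np.asarray(y_star_samples)[None, :] - mu) / sigma  # (n_points, K)
    cdf = np.clip(norm.cdf(gamma), 1e-12, 1.0)
    return (gamma * norm.pdf(gamma) / (2.0 * cdf) - np.log(cdf)).mean(axis=1)

mu = np.array([0.1, 0.4, 0.2])                 # toy posterior means
sigma = np.array([0.5, 0.1, 0.3])              # toy posterior std devs
y_star = np.array([0.9, 1.1, 0.8, 1.0])        # assumed samples of the max
print(mes_acquisition(mu, sigma, y_star))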

Bayesian Optimization with Binary Auxiliary Information

This paper presents novel mixed-type Bayesian optimization (BO) algorithms to accelerate the optimization of a target objective function by exploiting correlated auxiliary information of binary type.

Practical Bayesian Optimization

TLDR
This work examines the Bayesian response-surface approach to global optimization, which maintains a posterior model of the function being optimized by combining a prior over functions with accumulating function evaluations.
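
The posterior model described here is standard GP regression: given noisy observations (X, y) with kernel k, Gram matrix K, and noise variance \sigma_n^2 (notation assumed), the posterior mean and variance are

\mu(\mathbf{x}) = \mathbf{k}(\mathbf{x}, X)\,[K + \sigma_n^2 I]^{-1}\,\mathbf{y}, \qquad \sigma^2(\mathbf{x}) = k(\mathbf{x}, \mathbf{x}) - \mathbf{k}(\mathbf{x}, X)\,[K + \sigma_n^2 I]^{-1}\,\mathbf{k}(X, \mathbf{x})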

Practical Bayesian Optimization of Machine Learning Algorithms

TLDR
This work describes new algorithms that take into account the variable cost of learning-algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation, and it shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
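
A minimal sketch of the expected-improvement (EI) acquisition that underlies this line of work (standard closed form for maximization; the paper's cost-aware and parallel extensions are not shown, and xi is an assumed exploration jitter).

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best, xi=0.01):
    # Closed-form EI (maximization) for Gaussian marginals (mu, sigma).
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    z = (mu - y_best - xi) / np.maximum(sigma, 1e-12)
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement([0.2, 0.5], [0.3, 0.05], y_best=0.45))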

Batch Bayesian Optimization via Local Penalization

TLDR
A simple heuristic based on an estimate of the Lipschitz constant is investigated that captures the most important aspect of the interaction between batch points at negligible computational overhead, and it compares well in running time with much more elaborate alternatives.
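
A minimal sketch of the local-penalization batch loop: each pending point multiplicatively damps the acquisition within a radius set by a Lipschitz estimate L. The penalizer form follows the local-penalization idea, but the toy acquisition, model, and constants are illustrative assumptions.

import numpy as np
from scipy.special import erfc

def penalizer(x, x_pending, mu_pending, var_pending, L, M):
    # Soft penalty in [0, 1] around one pending point; M is the incumbent.
    r = np.linalg.norm(x - x_pending)
    z = (L * r - M + mu_pending) / np.sqrt(2.0 * max(var_pending, 1e-12))
    return 0.5 * erfc(-z)

def penalized_batch(acq, candidates, model, L, M, batch_size=3):
    # Greedily build a batch, penalizing around already-chosen points.
    batch = []
    for _ in range(batch_size):
        scores = acq(candidates).copy()
        for xb in batch:
            mu_b, var_b = model(xb)
            scores *= np.array([penalizer(c, xb, mu_b, var_b, L, M)
                                for c in candidates])
        batch.append(candidates[int(np.argmax(scores))])
    return np.array(batch)

# Toy usage with a made-up acquisition and posterior.
cands = np.linspace(0, 1, 50)[:, None]
acq = lambda X: np.exp(-((X[:, 0] - 0.6) ** 2) / 0.02)      # fake acquisition
model = lambda x: (0.0, 0.1)                                 # fake (mu, var)
print(penalized_batch(acq, cands, model, L=5.0, M=1.0))

The multiplicative penalty is what makes the heuristic cheap: the acquisition itself is never re-fit, only re-weighted as the batch grows.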

Distributed Batch Gaussian Process Optimization

TLDR
Empirical evaluation on synthetic benchmark objective functions and a real-world optimization problem shows that DB-GP-UCB outperforms the state-of-the-art batch BO algorithms.