Corpus ID: 234762890

Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement

@article{Daulton2021ParallelBO,
  title={Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement},
  author={Sam Daulton and Maximilian Balandat and Eytan Bakshy},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.08195}
}
Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian… 
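
The acquisition function proposed here (qNEHVI) is implemented in the authors' open-source BoTorch library. Below is a minimal sketch of a noisy, parallel multi-objective BO loop built around that implementation; the toy objectives, noise level, reference point, batch size, and iteration counts are illustrative assumptions, not values from the paper.

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.optim import optimize_acqf

def noisy_objectives(X):
    # Two competing toy objectives (both maximized), observed with noise.
    f1 = -(X ** 2).sum(dim=-1)            # best near the origin
    f2 = -((X - 0.5) ** 2).sum(dim=-1)    # best near (0.5, 0.5)
    Y = torch.stack([f1, f2], dim=-1)
    return Y + 0.1 * torch.randn_like(Y)  # noisy observations

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
train_X = torch.rand(10, 2, dtype=torch.double)  # initial random design
train_Y = noisy_objectives(train_X)

for _ in range(5):  # a few BO iterations
    # Independent GP per objective (batched SingleTaskGP) with inferred noise.
    model = SingleTaskGP(train_X, train_Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

    # qNEHVI integrates over the uncertainty in the Pareto frontier implied
    # by the noisy baseline observations rather than trusting them directly.
    acqf = qNoisyExpectedHypervolumeImprovement(
        model=model,
        ref_point=[-2.5, -1.0],  # assumed reference point for this toy problem
        X_baseline=train_X,
        prune_baseline=True,
    )
    # Jointly optimize a batch of q=4 candidates for parallel evaluation.
    candidates, _ = optimize_acqf(
        acq_function=acqf, bounds=bounds, q=4, num_restarts=10, raw_samples=256
    )
    train_X = torch.cat([train_X, candidates])
    train_Y = torch.cat([train_Y, noisy_objectives(candidates)])

Setting prune_baseline=True discards baseline points with negligible posterior probability of being Pareto-optimal, which keeps the acquisition computation cheap as observations accumulate.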
Citations

Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces
TLDR
MORBO significantly advances the state-of-the-art in sample efficiency for several high-dimensional synthetic and real-world multi-objective problems, including a vehicle design problem with 222 parameters, demonstrating that MORBO is a practical approach for challenging and important problems that were previously out of reach for BO methods.
Latency-Aware Neural Architecture Search with Multi-Objective Bayesian Optimization
TLDR
This work leverages recent methodological advances in Bayesian optimization over high-dimensional search spaces and multi-objective Bayesian optimization to efficiently explore these trade-offs for a production-scale on-device natural language understanding model at Facebook.
Advancing the Pareto front for thin-film materials using a self-driving laboratory
Useful materials must satisfy multiple objectives, where the optimization of one objective is often at the expense of another. The Pareto front reports the optimal trade-offs between competing objectives.
Looper: An end-to-end ML platform for product decisions
TLDR
Looper supports the full end-to-end ML lifecycle from online data collection to model training, deployment and inference, and extends support to evaluation and tuning of product goals.

References

SHOWING 1-10 OF 75 REFERENCES
Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations
TLDR
A novel multi-objective Bayesian optimization algorithm is proposed that iteratively selects the best batch of samples to be evaluated in parallel, introducing a batch selection strategy that optimizes for both hypervolume improvement and diversity of the selected samples in order to efficiently advance promising regions of the Pareto front.
A Flexible Framework for Multi-Objective Bayesian Optimization using Random Scalarizations
TLDR
This work proposes a strategy based on random scalarizations of the objectives that is able to flexibly sample from desired regions of the Pareto front and, computationally, is considerably cheaper than most approaches for MOO.
A Flexible Multi-Objective Bayesian Optimization Approach using Random Scalarizations
TLDR
This work proposes an approach based on random scalarizations of the objectives that can focus its sampling on certain regions of the Pareto front while being flexible enough to sample from the entire Pareto front if required, and is less computationally demanding compared to other existing approaches.
Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization
TLDR
This paper introduces a novel scalarization function and shows that drawing random scalarizations from an appropriately chosen distribution can be used to efficiently approximate the hypervolume indicator metric; it highlights the general utility of this framework by showing that any provably convergent single-objective optimization process can be effortlessly converted to a multi-objective optimization process with provable convergence guarantees.
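For intuition, the core identity in that work expresses hypervolume as an expectation of scalarized maxima; a sketch (notation assumed here, with the dimension-dependent normalizing constant $c_k$ left unspecified):

$$ s_\theta(y) = \min_{1 \le i \le k} \big( \max(0,\, y_i / \theta_i) \big)^{k}, \qquad \mathrm{HV}_z(Y) = c_k \, \mathbb{E}_{\theta \sim \mathcal{S}^{k-1}_{+}} \Big[ \max_{y \in Y} s_\theta(y - z) \Big], $$

where $\theta$ is drawn uniformly from the positive unit sphere and $z$ is the reference point, so averaging the best scalarized value over random directions recovers the hypervolume up to a known constant.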
Expected Hypervolume Improvement with Constraints
TLDR
The Expected Hypervolume Improvement is extended by introducing the expected satisfaction of constraints and merging the two into a new acquisition function, EHVIC, an effective algorithm that shows promising performance compared to a well-known related method.
Multi-Objective Bayesian Global Optimization using expected hypervolume improvement gradient
TLDR
The experimental results show that the second proposed strategy, using EHVIG as a stopping criterion for local search, can outperform standard MOBGO on problems where the optimal solutions are located in the interior of the search space.
Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms
TLDR
An efficient algorithm for the exact calculation of the EHVI in the generic case is proposed, based on partitioning the integration volume into a set of axis-parallel slices using a new hyperbox decomposition technique proposed by Dächert et al.
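For reference, the quantity computed exactly by such algorithms is the standard EHVI integral (notation assumed here: $P$ the set of currently observed Pareto-optimal objective vectors, $r$ a reference point, $\lambda$ the Lebesgue measure):

$$ \mathrm{HV}(P) = \lambda\Big( \bigcup_{y \in P} [r, y] \Big), \qquad \mathrm{EHVI}(x) = \mathbb{E}\Big[ \big( \mathrm{HV}(P \cup \{ f(x) \}) - \mathrm{HV}(P) \big)_{+} \Big], $$

i.e., the expected gain in dominated hypervolume from adding the outcome at $x$, with the expectation taken over the surrogate posterior for $f(x)$.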
A Tutorial on Bayesian Optimization
TLDR
This tutorial describes how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient, and provides a generalization of expected improvement to noisy evaluations beyond the noise-free setting where it is more commonly applied.
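As a concrete instance, the expected improvement discussed in the tutorial has, in the noise-free setting for maximization, the textbook closed form under a Gaussian posterior with mean $\mu(x)$, standard deviation $\sigma(x)$, and incumbent best value $f^{\ast}$:

$$ \mathrm{EI}(x) = \mathbb{E}\big[ \max(f(x) - f^{\ast},\, 0) \big] = \big( \mu(x) - f^{\ast} \big) \Phi(z) + \sigma(x)\, \varphi(z), \qquad z = \frac{\mu(x) - f^{\ast}}{\sigma(x)}, $$

where $\Phi$ and $\varphi$ are the standard normal CDF and PDF.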
Multi-fidelity Bayesian Optimization with Max-value Entropy Search
TLDR
This work proposes a novel information theoretic approach to multi-fidelity Bayesian optimization (MFBO) based on a variant of information-based BO called max-value entropy search (MES), which greatly facilitates evaluation of the information gain in MFBO.
Predictive Entropy Search for Multi-objective Bayesian Optimization
TLDR
The results show that PESMO produces better recommendations with a smaller number of evaluations, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.