• Corpus ID: 214743051

# Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints

@article{GarridoMerchan2020ParallelPE,
title={Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints},
author={Eduardo C. Garrido-Merch{\'a}n and Daniel Hern{\'a}ndez-Lobato},
journal={ArXiv},
year={2020},
volume={abs/2004.00601}
}
• Published 1 April 2020
• Computer Science
• ArXiv
Real-world problems often involve the optimization of several objectives under multiple constraints. Furthermore, we may not have an expression for each objective or constraint; they may be expensive to evaluate; and the evaluations can be noisy. These functions are referred to as black-boxes. Bayesian optimization (BO) can efficiently solve the problems described. For this, BO iteratively fits a model to the observations of each black-box. The models are then used to choose where to evaluate…
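The fit-a-model-then-choose-where-to-evaluate cycle described in the abstract can be sketched for a single objective. This is a minimal illustration, not the paper's parallel constrained method (PPESMOC): it uses a plain Gaussian process surrogate with an expected-improvement acquisition, and the objective `f`, the kernel length-scale `ls`, and the grid search are illustrative choices, not anything specified by the paper.

```python
import math
import numpy as np

def f(x):
    # stand-in for an expensive black-box objective (maximum at x = 0.3)
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    # squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # GP posterior mean and standard deviation at test points Xs,
    # given observations (X, y); jitter keeps the solve well-conditioned
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    # closed-form EI for maximization under a Gaussian posterior
    z = (mu - best) / sd
    cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sd * pdf

# BO loop: fit the surrogate, maximize the acquisition on a grid, evaluate
X = np.array([0.0, 1.0])
y = f(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

best_x = X[np.argmax(y)]
```

After ten evaluations the incumbent should sit near the true maximizer at 0.3; the paper's contribution is replacing this sequential, unconstrained, single-objective acquisition step with a parallel, information-theoretic one for several objectives and constraints.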
## 20 Citations

Output Space Entropy Search Framework for Multi-Objective Bayesian Optimization
• Computer Science
J. Artif. Intell. Res.
• 2021
This paper proposes a general framework for solving MOO problems based on the principle of output space entropy (OSE) search: select the experiment that maximizes the information gained per unit resource cost about the true Pareto front.
Max-value Entropy Search for Multi-Objective Bayesian Optimization
• Computer Science
NeurIPS
• 2019
This work proposes a novel approach referred to as Max-value Entropy Search for Multi-objective Optimization (MESMO), which employs an output-space entropy based acquisition function to efficiently select the sequence of inputs for evaluation for quickly uncovering high-quality solutions.
Machine Learning Enabled Design Automation and Multi-Objective Optimization for Electric Transportation Power Systems
• Computer Science
IEEE Transactions on Transportation Electrification
• 2022
An automated design and optimization framework for electric transportation power systems (ETPS) enabled by machine learning (ML), together with a novel BO algorithm referred to as max-value entropy search for multiobjective optimization with constraints (MESMOC), to solve multiobjective optimization (MOO) problems with black-box constraints that can only be evaluated through design simulations.
Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization
• Computer Science
NeurIPS
• 2020
This work derives a novel formulation of Expected Hypervolume Improvement, an acquisition function that extends EHVI to the parallel, constrained evaluation setting and demonstrates that it is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time.
Max-value Entropy Search for Multi-objective Bayesian Optimization with Constraints
• Computer Science
ArXiv
• 2020
A Bayesian optimization method that can be used to solve constrained multi-objective problems when the objectives and the constraints are expensive to evaluate, and whose execution time is smaller than that of other information-based methods.
$\{\text{PF}\}^2\text{ES}$: Parallel Feasible Pareto Frontier Entropy Search for Multi-Objective Bayesian Optimization Under Unknown Constraints
• Computer Science
• 2022
This work presents Parallel Feasible Pareto Frontier Entropy Search ({PF}²ES), a novel information-theoretic acquisition function for multi-objective Bayesian optimization that provides a low-cost and accurate estimate of the mutual information for the parallel setting.
Multiobjective Tree-Structured Parzen Estimator
• Computer Science
J. Artif. Intell. Res.
• 2022
Numerical results demonstrate that MOTPE approximates the Pareto fronts of a variety of benchmark problems, and of a convolutional neural network design problem, better than existing methods.
SMGO-$\Delta$: Balancing Caution and Reward in Global Optimization with Black-Box Constraints
• Computer Science
• 2022
A global optimization technique for problems with black-box objective and constraints, named Set Membership Global Optimization with Black-Box Constraints (SMGO-∆), which features one tunable risk parameter that the user can intuitively adjust to trade off safety, exploitation, and exploration.

## References

Showing 1-10 of 46 references
Predictive entropy search for multiobjective Bayesian optimization with constraints. Neurocomputing
• 2019
• In ICML 2015 AutoML Workshop,
• 2015
Expectation Propagation for approximate Bayesian inference
Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network, which makes it applicable to hybrid networks with discrete and continuous nodes.
Parallel Predictive Entropy Search for Batch Global Optimization of Expensive Objective Functions
• Computer Science
NIPS
• 2015
PPES is the first non-greedy batch Bayesian optimization strategy; the benefit of this approach in optimization performance is demonstrated on both synthetic and real-world applications, including problems in machine learning, rocket science, and robotics.
Entropy Search for Information-Efficient Global Optimization
• Computer Science
J. Mach. Learn. Res.
• 2012
This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm which addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.
Practical Bayesian Optimization of Machine Learning Algorithms
• Computer Science
NIPS
• 2012
This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Hyperparameter optimization. In Automated Machine Learning, pages 3–33
• 2019
Batch Bayesian Optimization via Multi-objective Acquisition Ensemble for Automated Analog Circuit Design
• Computer Science
ICML
• 2018
The experimental results show that the proposed batch Bayesian optimization approach is competitive compared with the state-of-the-art algorithms using analytical benchmark functions and real-world analog integrated circuits.