• Corpus ID: 202763409

# Max-value Entropy Search for Multi-Objective Bayesian Optimization

@inproceedings{Belakaria2019MaxvalueES,
  title={Max-value Entropy Search for Multi-Objective Bayesian Optimization},
  author={Syrine Belakaria and Aryan Deshwal and Janardhan Rao Doppa},
  booktitle={NeurIPS},
  year={2019}
}
• Published in NeurIPS 2019
• Computer Science, Mathematics
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions by minimizing the number of function evaluations. For example, in hardware design optimization, we need to find the designs that trade off performance, energy, and area overhead using expensive simulations. We propose a novel approach referred to as Max-value Entropy Search for Multi-objective Optimization (MESMO) to solve…
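The abstract above frames the goal as approximating the true Pareto set: the solutions not dominated by any other solution in all objectives. As a minimal, self-contained illustration (not the authors' code; the function name is ours), a brute-force Pareto filter for minimization might look like:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of `points`, all objectives minimized.

    A point p is dominated if some other point q is no worse in every
    objective and strictly better in at least one. Illustrative helper only.
    """
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            j != i and np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points)
        )
        if not dominated:
            keep.append(i)
    return points[keep]

# Example: two objectives traded off against each other (both minimized).
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(pts)  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Real multi-objective BO methods never enumerate the true Pareto set this way; they approximate it from a surrogate model, but the dominance relation above is the underlying definition.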

## Citations of this paper

Output Space Entropy Search Framework for Multi-Objective Bayesian Optimization
• Computer Science, Mathematics
Journal of Artificial Intelligence Research
• 2021
This paper proposes a general framework for solving MOO problems based on the principle of output space entropy (OSE) search: select the experiment that maximizes the information gained per unit resource cost about the true Pareto front.
Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization
• Computer Science, Mathematics
AAAI
• 2020
This work proposes a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation to solve the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space Entropy Search Approach
• Computer Science
AAAI
• 2020
Experiments show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms for multi-objective optimization.
Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations
• Computer Science
NeurIPS
• 2020
A novel multi-objective Bayesian optimization algorithm that iteratively selects the best batch of samples to be evaluated in parallel and introduces a batch selection strategy that optimizes for both hypervolume improvement and diversity of selected samples in order to efficiently advance promising regions of the Pareto front.
Multi-objective Bayesian Optimization using Pareto-frontier Entropy
• Computer Science, Mathematics
ICML
• 2020
This paper proposes a novel entropy-based MBO method called Pareto-frontier entropy search (PFES), which incorporates the dependency among objectives conditioned on the Pareto frontier, a dependency that is ignored by the existing method.
Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces
• Computer Science, Mathematics
ArXiv
• 2021
MORBO significantly advances the state-of-the-art in sample efficiency for several high-dimensional synthetic and real-world multi-objective problems, including a vehicle design problem with 222 parameters, demonstrating that MORBO is a practical approach for challenging and important problems that were previously out of reach for BO methods.
Multi-objective Optimization by Learning Space Partitions
• Yiyang Zhao, Kevin Yang, Tian Guo, Yuandong Tian
• Computer Science
ArXiv
• 2021
LaMOO is proposed, a novel multi-objective optimizer that learns a model from observed samples to partition the search space and then focus on promising regions that are likely to contain a subset of the Pareto frontier.
Bayesian Optimization over Permutation Spaces
• Computer Science
ArXiv
• 2021
Two algorithms for BO over Permutation Spaces (BOPS) are proposed and evaluated, showing that both BOPS-T and BOPS-H perform better than the state-of-the-art BO algorithm for combinatorial spaces.
No-regret Algorithms for Multi-task Bayesian Optimization
• Computer Science, Mathematics
AISTATS
• 2021
This work addresses the problem of inter-task dependencies using a multi-task kernel and develops two novel BO algorithms based on random scalarizations of the objectives that belong to the upper confidence bound class of algorithms.
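The random-scalarization idea summarized above — collapsing multiple objectives into a single one via sampled weights, then applying an upper-confidence-bound rule — can be sketched as follows. Names and parameters here are illustrative assumptions, and the cited algorithms differ in their details (multi-task kernels, regret analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def scalarized_ucb(mu, sigma, weights, beta=2.0):
    """UCB of a random linear scalarization (maximization convention).

    mu, sigma : (n_candidates, n_objectives) posterior means / stddevs
    weights   : weight vector on the simplex, redrawn each BO iteration
    """
    ucb = mu + beta * sigma      # optimistic estimate per objective
    return ucb @ weights         # weighted sum -> one score per candidate

# Toy posterior over 2 candidates and 2 objectives.
mu = np.array([[0.5, 0.1], [0.2, 0.9]])
sigma = np.array([[0.1, 0.2], [0.3, 0.1]])
w = rng.dirichlet(np.ones(2))    # random weights summing to 1
scores = scalarized_ucb(mu, sigma, w)
next_idx = int(np.argmax(scores))  # candidate to evaluate next
```

Redrawing the weights each iteration is what lets a single-objective acquisition rule cover different regions of the Pareto front over time.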
Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement
• Computer Science, Mathematics
ArXiv
• 2021
qNEHVI is one-step Bayes-optimal for hypervolume maximization in both noisy and noiseless environments, can be optimized effectively with gradient-based methods via sample average approximation, and achieves state-of-the-art optimization performance with competitive wall-times in large-batch environments.
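Hypervolume, the quantity qNEHVI seeks to improve, measures the area (volume) dominated by a Pareto front relative to a reference point. A minimal two-objective computation for minimization, as a sketch only (the function name and sweep approach are ours; practical libraries use more general algorithms):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D front w.r.t. reference point `ref`,
    both objectives minimized. Simple left-to-right sweep."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):   # sort by first objective, ascending
        if y < prev_y:           # only non-dominated points add area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))  # -> 11.0
```

A candidate's hypervolume improvement is then the increase in this quantity when the candidate's objective vector is added to the front.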

## References

SHOWING 1-10 OF 45 REFERENCES
Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization
• Computer Science, Mathematics
AAAI
• 2020
This work proposes a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation to solve the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
Predictive Entropy Search for Multi-objective Bayesian Optimization
• Computer Science, Mathematics
ICML
• 2016
The results show that PESMO produces better recommendations with a smaller number of evaluations, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions
• Computer Science, Mathematics
NIPS
• 2014
This work proposes a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES), which codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
Max-value Entropy Search for Efficient Bayesian Optimization
• Computer Science, Mathematics
ICML
• 2017
It is observed that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden, and is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.
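MES scores a candidate by the expected reduction in entropy of the (unknown) maximum value. For a Gaussian-process posterior with mean mu and stddev sigma at a candidate, and sampled maxima y*, it has a truncated-Gaussian closed form. The sketch below is an illustrative standalone version of that formula (maximization convention), not the authors' implementation:

```python
from math import erf, exp, log, pi, sqrt

def mes_acquisition(mu, sigma, y_star_samples):
    """Closed-form MES acquisition at one candidate point.

    mu, sigma      : GP posterior mean and stddev at the candidate
    y_star_samples : samples of the function's maximum value
    """
    def phi(z):   # standard normal pdf
        return exp(-0.5 * z * z) / sqrt(2.0 * pi)

    def Phi(z):   # standard normal cdf
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    total = 0.0
    for y_star in y_star_samples:
        gamma = (y_star - mu) / sigma
        total += gamma * phi(gamma) / (2.0 * Phi(gamma)) - log(Phi(gamma))
    return total / len(y_star_samples)

score = mes_acquisition(mu=0.0, sigma=1.0, y_star_samples=[1.0, 2.0])
```

Because only the scalar max-value y* must be sampled (rather than the maximizing input, as in ES/PES), few samples suffice, which is the source of the robustness and efficiency noted in the summary above.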
Active Learning for Multi-Objective Optimization
• Computer Science
ICML
• 2013
The results show PAL's effectiveness; in particular, it improves significantly over a state-of-the-art multi-objective optimization method, saving in many cases about 33% of the evaluations needed to achieve the same accuracy.
ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems
• Joshua D. Knowles
• Mathematics, Computer Science
IEEE Transactions on Evolutionary Computation
• 2006
Results show that NSGA-II, a popular multiobjective evolutionary algorithm, performs well compared with random search, even within the restricted number of evaluations used.
On Test Functions for Evolutionary Multi-objective Optimization
• Computer Science, Mathematics
PPSN
• 2004
This paper presents a straightforward way to define benchmark problems with an arbitrary Pareto front both in the fitness and parameter spaces and introduces a difficulty measure based on the mapping of probability density functions from parameter to fitness space.
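For concreteness, a classic two-objective benchmark with a known Pareto set — Schaffer's problem N.1, used here purely as an example and not necessarily one of the functions proposed in the cited paper — looks like this:

```python
def schaffer_n1(x):
    """Schaffer's two-objective benchmark N.1; both objectives minimized.
    Its Pareto-optimal set is x in [0, 2]: decreasing one objective
    there necessarily increases the other."""
    return (x ** 2, (x - 2.0) ** 2)

# Endpoints of the Pareto set trade the objectives against each other.
a = schaffer_n1(0.0)  # (0.0, 4.0)
b = schaffer_n1(2.0)  # (4.0, 0.0)
```

Benchmark suites like the one in the cited paper generalize this idea, letting the shape and location of the Pareto front be prescribed rather than fixed.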
A Bayesian approach to constrained single- and multi-objective optimization
• Mathematics, Computer Science
J. Glob. Optim.
• 2017
An extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed, which is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
Multiobjective Optimization on a Limited Budget of Evaluations Using Model-Assisted S-Metric Selection
• Computer Science, Mathematics
PPSN
• 2008
This paper provides a review of contemporary multiobjective approaches based on the single-objective meta-model-assisted 'Efficient Global Optimization' (EGO) procedure, describes their main concepts, and introduces a new EGO-based MOOA, which utilizes the S-metric (hypervolume) contribution to decide which solution is evaluated next.
Scalable Combinatorial Bayesian Optimization with Tractable Statistical Models
• Computer Science, Mathematics
ArXiv
• 2020
The PSR approach relies on reformulating the AFO problem as a submodular relaxation with some unknown parameters, which can be solved efficiently using minimum graph-cut algorithms, and on constructing an optimization problem to estimate the unknown parameters so that the relaxation closely approximates the true objective.