Corpus ID: 238419059

Multi-objective Optimization by Learning Space Partitions

@article{Zhao2021MultiobjectiveOB,
  title={Multi-objective Optimization by Learning Space Partitions},
  author={Yiyang Zhao and Linnan Wang and Kevin Yang and Tianjun Zhang and Tian Guo and Yuandong Tian},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.03173}
}
In contrast to single-objective optimization (SOO), multi-objective optimization (MOO) requires an optimizer to find the Pareto frontier, a subset of feasible solutions that are not dominated by other feasible solutions. In this paper, we propose LaMOO, a novel multi-objective optimizer that learns a model from observed samples to partition the search space and then focus on promising regions that are likely to contain a subset of the Pareto frontier. The partitioning is based on the dominance… 
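The dominance relation the abstract refers to can be made concrete with a minimal sketch (illustrative only, not the paper's code): a point Pareto-dominates another if it is no worse in every objective and strictly better in at least one, and the Pareto frontier is the set of points not dominated by any other.

```python
# Minimal sketch: Pareto dominance and the non-dominated set,
# assuming a minimization problem with points given as tuples.

def dominates(p, q):
    """True if p Pareto-dominates q (minimization): p is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Points not dominated by any other point in the set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

samples = [(1, 3), (2, 2), (3, 1), (2, 3), (4, 4)]
print(pareto_front(samples))  # → [(1, 3), (2, 2), (3, 1)]
```

Here (2, 3) and (4, 4) are excluded because (2, 2) is at least as good in both objectives and strictly better in one.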
1 Citation
FuncPipe: A Pipelined Serverless Framework for Fast and Cost-efficient Training of Deep Learning Models
TLDR
FuncPipe is designed with the key insight that model partitioning can be leveraged to bridge both the memory and bandwidth gaps between the capacity of serverless functions and the requirements of DL training, and achieves up to 77% cost savings and 2.2X speedup compared to the state of the art.

References

SHOWING 1-10 OF 65 REFERENCES
Pareto Rank Learning in Multi-objective Evolutionary Algorithms
TLDR
An experimental study on 19 standard multi-objective benchmark test problems concludes that the Pareto-rank-learning-enhanced MOEA yields significant speedups over the state-of-the-art NSGA-II, MOEA/D, and SPEA2.
A Flexible Multi-Objective Bayesian Optimization Approach using Random Scalarizations
TLDR
This work proposes an approach based on random scalarizations of the objectives that can focus its sampling on certain regions of the Pareto front while being flexible enough to sample from the entire Pareto front if required, and is less computationally demanding compared to other existing approaches.
Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search
TLDR
LA-MCTS serves as a meta-algorithm by using existing black-box optimizers as its local models, achieving strong performance in general black-box optimization and reinforcement learning benchmarks, in particular for high-dimensional problems.
Max-value Entropy Search for Multi-Objective Bayesian Optimization
TLDR
This work proposes a novel approach referred to as Max-value Entropy Search for Multi-objective Optimization (MESMO), which employs an output-space entropy based acquisition function to efficiently select the sequence of inputs for evaluation for quickly uncovering high-quality solutions.
ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems
TLDR
Results show that NSGA-II, a popular multiobjective evolutionary algorithm, performs well compared with random search, even within the restricted number of evaluations used.
Multiobjective Optimization on a Limited Budget of Evaluations Using Model-Assisted S-Metric Selection
TLDR
This paper provides a review of contemporary multiobjective approaches based on the single-objective meta-model-assisted 'Efficient Global Optimization' (EGO) procedure, describes their main concepts, and introduces a new EGO-based MOOA, which utilizes the S-metric (hypervolume) contribution to decide which solution is evaluated next.
Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization
TLDR
This work derives a novel formulation of Expected Hypervolume Improvement, an acquisition function that extends EHVI to the parallel, constrained evaluation setting and demonstrates that it is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time.
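The hypervolume indicator underlying EHVI-style acquisition functions can be illustrated with a small sketch (my own illustration, not code from the cited paper): for a 2-D minimization problem, the hypervolume of a non-dominated set is the area it dominates relative to a reference point, and EHVI scores candidates by the expected gain in this quantity.

```python
# Illustrative sketch: exact hypervolume of a 2-D non-dominated set
# (both objectives minimized) against a reference point. This is the
# indicator EHVI improves in expectation, not EHVI itself.

def hypervolume_2d(front, ref):
    """Area dominated by `front` and bounded above by `ref`.
    Assumes `front` is mutually non-dominated."""
    pts = sorted(front)            # ascending in f1 => descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # width of strip x its new height
        prev_f2 = f2
    return hv

print(hypervolume_2d([(0, 2), (1, 1)], ref=(3, 3)))  # → 5.0
```

Sorting by the first objective turns the dominated region into a staircase of disjoint strips, so the area is a single linear pass rather than an inclusion–exclusion computation.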
Learning Space Partitions for Path Planning
TLDR
A novel formal regret analysis is developed for when and why such an adaptive region-partitioning scheme works, and a new path planning method, LaP3, is proposed that improves the function value estimation within each sub-region and uses a latent representation of the search space.
A mono surrogate for multiobjective optimization
TLDR
The proposed approach aims at building a global surrogate model defined on the decision space and tightly characterizing the current Pareto set and the dominated region, in order to speed up the evolution progress toward the true Pareto set.
The balance between proximity and diversity in multiobjective evolutionary algorithms
TLDR
It is argued that the development of new MOEAs cannot converge onto a single most efficient MOEA, because the performance of MOEAs itself shows the characteristics of a multiobjective problem.
...