Corpus ID: 9096686

Batched High-dimensional Bayesian Optimization via Structural Kernel Learning

@article{Wang2017BatchedHB,
  title={Batched High-dimensional Bayesian Optimization via Structural Kernel Learning},
  author={Zi Wang and Chengtao Li and Stefanie Jegelka and Pushmeet Kohli},
  journal={ArXiv},
  year={2017},
  volume={abs/1703.01973}
}
Optimization of high-dimensional black-box functions is an extremely challenging problem. While Bayesian optimization has emerged as a popular approach for optimizing black-box functions, its applicability has been limited to low-dimensional problems because of the computational and statistical challenges that arise in high-dimensional settings. In this paper, we propose to tackle these challenges by (1) assuming a latent additive structure in the function and inferring it properly for more…
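For illustration, a minimal sketch (not the authors' code) of the latent additive assumption described in the abstract: the objective is modeled as a sum of low-dimensional components defined on disjoint groups of input dimensions, so the GP kernel decomposes into a sum of kernels on those groups. The grouping, lengthscales, and variances below are illustrative placeholders.

import numpy as np

def rbf(X1, X2, lengthscale, variance):
    # Squared-exponential kernel restricted to the columns passed in.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def additive_kernel(X1, X2, groups, lengthscales, variances):
    # K(X1, X2) = sum over groups of a kernel on that group's dimensions only.
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for g, ls, var in zip(groups, lengthscales, variances):
        K += rbf(X1[:, g], X2[:, g], ls, var)
    return K

# Example: a 6-dimensional function assumed to split into three 2-dimensional parts.
groups = [[0, 1], [2, 3], [4, 5]]
X = np.random.rand(8, 6)
K = additive_kernel(X, X, groups, lengthscales=[0.5] * 3, variances=[1.0] * 3)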
Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection
TLDR
This work develops a new computationally efficient high-dimensional BO method that exploits variable selection and is able to automatically learn axis-aligned sub-spaces, i.e., subspaces spanned by the selected variables, without requiring any pre-specified hyperparameters.
High-Dimensional Bayesian Optimization via Tree-Structured Additive Models
TLDR
This paper considers generalized additive models in which low-dimensional functions with overlapping subsets of variables are composed to model a high-dimensional target function and proposes a hybrid graph learning algorithm based on Gibbs sampling and mutation to facilitate both structure learning and optimization of the acquisition function.
High Dimensional Bayesian Optimization via Supervised Dimension Reduction
TLDR
This paper directly introduces a supervised dimension reduction method, Sliced Inverse Regression (SIR), into high-dimensional Bayesian optimization, which can effectively learn the intrinsic sub-structure of the objective function during optimization.
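A rough sketch of sliced inverse regression as used for the dimension reduction described above (an illustrative re-implementation, not the paper's code): whiten the inputs, slice them by the response, and take the leading eigenvectors of the between-slice covariance as the low-dimensional directions.

import numpy as np

def sir_directions(X, y, n_slices=10, n_components=2):
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    # Whiten the inputs.
    cov = np.cov(Xc, rowvar=False) + 1e-8 * np.eye(d)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ W
    # Slice by the response and average the whitened inputs within each slice.
    order = np.argsort(y)
    M = np.zeros((d, d))
    for s in np.array_split(order, n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back, span the estimated low-dimensional subspace.
    _, vecs = np.linalg.eigh(M)
    return W @ vecs[:, ::-1][:, :n_components]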
High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups
TLDR
This paper significantly generalizes the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables, by representing the dependencies via a graph and deducing an efficient message passing algorithm for optimizing the acquisition function.
A survey on high-dimensional Gaussian process modeling with application to Bayesian optimization
TLDR
The defining structural model assumptions are reviewed and the benefits and drawbacks of these approaches in practice are discussed, ranging from variable selection and additive decompositions to low-dimensional embeddings and beyond.
Batched Large-scale Bayesian Optimization in High-dimensional Spaces
TLDR
This paper proposes ensemble Bayesian optimization (EBO) to address three current challenges in BO simultaneously: large-scale observations; high dimensional input spaces; and selections of batch queries that balance quality and diversity.
Optimizing Dynamic Structures with Bayesian Generative Search
TLDR
DTERGENS is a novel generative search framework that constructs and optimizes a high-performance generator of composite kernel expressions and is capable of obtaining flexible-length expressions by jointly optimizing a generative termination criterion.
Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
TLDR
It is shown empirically that properly addressing crucial issues and misconceptions about the use of linear embeddings for Bayesian optimization significantly improves the efficacy of linear embeddings for BO on a range of problems, including learning a gait policy for robot locomotion.
Quantile Stein Variational Gradient Descent for Batch Bayesian Optimization
TLDR
This paper introduces a novel variational framework for batch query optimization, based on the argument that the query batch should be selected to have both high diversity and good worst-case performance, and proposes a variational objective that combines a quantile-based risk measure with entropy regularization.
CobBO: Coordinate Backoff Bayesian Optimization with Two-Stage Kernels
TLDR
Coordinate Backoff Bayesian Optimization with two-stage kernels is introduced, achieving solutions comparable to or better than other state-of-the-art methods for dimensions ranging from tens to hundreds, while reducing both the trial complexity and computational costs.
...

References

Showing 1-10 of 23 references
High Dimensional Bayesian Optimization via Restricted Projection Pursuit Models
TLDR
It is proved that, in this general setting, the regret for projected-additive functions has only linear dependence on the number of dimensions, and that the method outperforms existing approaches even when the function does not meet the projected-additive assumption.
High Dimensional Bayesian Optimisation and Bandits via Additive Models
TLDR
It is demonstrated that the method outperforms naive BO on additive functions and on several examples where the function is not additive, and it is proved that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions.
High-Dimensional Gaussian Process Bandits
TLDR
The SI-BO algorithm is presented, which leverages recent low-rank matrix recovery techniques to learn the underlying subspace of the unknown function and applies Gaussian Process Upper Confidence Bound sampling for optimization of the function.
Batched Gaussian Process Bandit Optimization via Determinantal Point Processes
TLDR
This paper proposes a new approach for parallelizing Bayesian optimization by modeling the diversity of a batch via determinantal point processes (DPPs) whose kernels are learned automatically, and indicates that DPP-based methods, especially those based on DPP sampling, outperform state-of-the-art methods.
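A minimal sketch of the DPP-style batch selection idea summarized above (not the paper's implementation): greedily grow a batch by adding the candidate that most increases the log-determinant of the kernel submatrix, which trades off quality on the diagonal against diversity in the off-diagonal similarities. The toy kernel below is illustrative.

import numpy as np

def greedy_dpp_batch(K, batch_size):
    # K is a positive-definite kernel matrix over candidate points.
    selected, remaining = [], list(range(K.shape[0]))
    for _ in range(batch_size):
        best, best_logdet = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.random.rand(50, 3)
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 0.25) + 1e-6 * np.eye(50)
batch = greedy_dpp_batch(K, batch_size=5)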
Batch Bayesian Optimization via Local Penalization
TLDR
A simple heuristic based on an estimate of the Lipschitz constant is investigated that captures the interaction between the points of a batch at negligible computational overhead and compares well, in running time, with much more elaborate alternatives.
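A simplified sketch of the local-penalization idea summarized above (an illustration under simplifying assumptions, not the exact formulas of the paper): points already pending in the batch multiplicatively down-weight the acquisition inside a ball whose size is governed by an estimate L of the Lipschitz constant and the current best value M.

import numpy as np
from scipy.stats import norm

def penalized_acquisition(acq_values, X_cand, X_pending, mu_pending, sigma_pending, L, M):
    # Multiply the acquisition by a Gaussian-CDF-shaped penalizer around each pending point:
    # close to 0 near a pending point, close to 1 once L * distance exceeds the gap M - mu_j.
    penalized = np.asarray(acq_values, dtype=float).copy()
    for x_j, mu_j, s_j in zip(X_pending, mu_pending, sigma_pending):
        dists = np.linalg.norm(X_cand - x_j, axis=1)
        z = (L * dists - M + mu_j) / (np.sqrt(2.0) * s_j + 1e-12)
        penalized *= norm.cdf(z)
    return penalized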
Batch Bayesian Optimization via Simulation Matching
TLDR
This paper proposes a novel approach to batch Bayesian optimization, providing a policy, computed via Monte-Carlo simulation, for selecting batches of inputs with the goal of optimizing the function as efficiently as possible.
Bayesian Optimization in a Billion Dimensions via Random Embeddings
TLDR
Empirical results confirm that REMBO can effectively solve problems with billions of dimensions, provided the intrinsic dimensionality is low, and show that REMBO achieves state-of-the-art performance in optimizing the 47 discrete parameters of a popular mixed integer linear programming solver.
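A minimal sketch of the random-embedding idea (REMBO) described above: optimization runs in a low-dimensional space y, and candidates are mapped into the original high-dimensional box through a fixed random matrix, clipping to stay inside the box. The dimensions and bounds below are illustrative.

import numpy as np

D, d = 1000, 10                 # ambient and assumed intrinsic dimensionality
A = np.random.randn(D, d)       # random embedding matrix, drawn once and kept fixed

def embed(y, low=-1.0, high=1.0):
    # Map a low-dimensional point y to the high-dimensional search box [low, high]^D.
    return np.clip(A @ y, low, high)

# Bayesian optimization is then carried out entirely over y in a small box,
# while the black-box objective is always evaluated at embed(y).
y = np.random.uniform(-np.sqrt(d), np.sqrt(d), size=d)
x = embed(y)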
Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration
TLDR
The Gaussian Process Upper Confidence Bound and Pure Exploration algorithm (GP-UCB-PE) is introduced, which combines the UCB strategy and pure exploration within the same batch of evaluations across parallel iterations; theoretical upper bounds on the regret with batches of size K are proved for this procedure.
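A minimal sketch of the GP-UCB-PE batching scheme summarized above (an illustrative re-implementation with a fixed toy RBF kernel): the first point of a batch maximizes a UCB score, and the remaining points greedily maximize the posterior variance given the already-chosen, not-yet-evaluated batch points, which works because the GP variance depends only on input locations.

import numpy as np

def rbf(A, B, ls=0.3):
    return np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ls ** 2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-6):
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_train, X_cand)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_train
    var = 1.0 - np.einsum('ij,jk,ki->i', Ks.T, Kinv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def gp_ucb_pe_batch(X_train, y_train, X_cand, batch_size, beta=2.0):
    mu, sigma = gp_posterior(X_train, y_train, X_cand)
    batch = [X_cand[np.argmax(mu + beta * sigma)]]   # UCB picks the first point
    for _ in range(batch_size - 1):
        # The variance does not depend on the unknown outputs of pending points,
        # so augment the inputs with dummy targets and pick the most uncertain candidate.
        X_aug = np.vstack([X_train, np.array(batch)])
        y_aug = np.concatenate([y_train, np.zeros(len(batch))])
        _, sigma_aug = gp_posterior(X_aug, y_aug, X_cand)
        batch.append(X_cand[np.argmax(sigma_aug)])
    return np.array(batch)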
Practical Bayesian Optimization of Machine Learning Algorithms
TLDR
This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Optimization as Estimation with Gaussian Processes in Bandit Settings
TLDR
This work studies an optimization strategy that directly uses an estimate of the argmax of the function, offering both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, close connections to the popular GP-UCB and GP-PI strategies are established.
...