Bayesian Optimization in a Billion Dimensions via Random Embeddings

@article{Wang2016BayesianOI,
  title={Bayesian Optimization in a Billion Dimensions via Random Embeddings},
  author={Ziyu Wang and Frank Hutter and Masrour Zoghi and David Matheson and Nando de Freitas},
  journal={J. Artif. Intell. Res.},
  year={2016},
  volume={55},
  pages={361-387}
}
Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several workshops on Bayesian optimization have identified its scaling to high dimensions as one of the holy grails of the field. In this paper, we introduce a novel random embedding idea to attack this… 
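The core idea is easy to sketch: draw a random matrix A once, search a low-dimensional space of points y, and evaluate the objective at x = Ay, projected back onto the feasible box. Below is a minimal, illustrative Python sketch of that embedding step; the quadratic toy objective, the dimensions, and the random-search inner loop are placeholder assumptions, since REMBO itself runs full Gaussian-process Bayesian optimization in the embedded space.

```python
import numpy as np

# Minimal sketch of the random-embedding idea: draw a random matrix A
# once, search a low-dimensional space of points y, and evaluate the
# high-dimensional objective at x = A @ y, projected onto the box.
# Objective, dimensions, and the random-search loop are illustrative;
# REMBO runs GP-based Bayesian optimization in the embedded space.

rng = np.random.default_rng(0)
D_ambient, d_embed = 1_000, 2               # ambient vs. embedded dimension

def f(x):
    # Toy objective with low effective dimensionality: only the first
    # two of the 1,000 ambient coordinates matter.
    return (x[0] - 0.5) ** 2 + (x[1] + 0.3) ** 2

A = rng.normal(size=(D_ambient, d_embed))   # fixed random embedding

best_y, best_val = None, np.inf
for _ in range(200):
    y = rng.uniform(-1.0, 1.0, size=d_embed)  # low-dimensional candidate
    x = np.clip(A @ y, -1.0, 1.0)             # map up, project to the box
    val = f(x)
    if val < best_val:
        best_y, best_val = y, val

print(f"best embedded point {best_y}, value {best_val:.4f}")
```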

Citations

Scalable Global Optimization via Local Bayesian Optimization

The TuRBO algorithm is proposed, which fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach, outperforming state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
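For intuition, here is a stripped-down sketch of the trust-region mechanic that TuRBO builds on: keep a local region around an incumbent, expand it after improvements, shrink it after failures, and restart when it collapses. The toy objective and all constants are illustrative assumptions; the actual algorithm fits a local GP surrogate in each region and allocates evaluations across multiple regions with an implicit bandit, which this sketch omits.

```python
import numpy as np

# Sketch of one trust region: expand on success, shrink on failure,
# restart when collapsed. Random candidates stand in for the local GP
# surrogate to keep the sketch self-contained.

rng = np.random.default_rng(1)
dim = 5
f = lambda x: float(np.sum(x ** 2))          # toy objective

center = rng.uniform(-1, 1, size=dim)
incumbent = f(center)
length, global_best = 0.8, incumbent         # trust-region side length

for _ in range(300):
    cand = center + length * rng.uniform(-0.5, 0.5, size=dim)
    val = f(cand)
    if val < incumbent:                      # success: recenter, expand
        center, incumbent = cand, val
        length = min(2.0 * length, 1.6)
    else:                                    # failure: shrink
        length *= 0.9
    global_best = min(global_best, val)
    if length < 1e-3:                        # region collapsed: restart
        center = rng.uniform(-1, 1, size=dim)
        incumbent, length = f(center), 0.8

print(f"best value found: {global_best:.5f}")
```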

Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization

It is shown empirically that properly addressing crucial issues and misconceptions about the use of linear embeddings for Bayesian optimization significantly improves the efficacy of linear embeddings for BO on a range of problems, including learning a gait policy for robot locomotion.

A Survey on High-dimensional Gaussian Process Modeling with Application to Bayesian Optimization

The defining structural model assumptions of high-dimensional BO are reviewed and the benefits and drawbacks of these approaches in practice are discussed.

Efficient Bayesian Optimization Based on Parallel Sequential Random Embeddings

This study proposes a Bayesian optimization method that uses random embeddings and remains efficient even when the embedding dimension is lower than the effective dimension of the problem.

Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection

This paper shows how to make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem by focusing on the most relevant regions of the optimization domain, and presents a method to find an embedding in parameter space that reduces the effective dimensionality of the optimization problem.

Scalable Constrained Bayesian Optimization

This work proposes the scalable constrained Bayesian optimization (SCBO) algorithm, which achieves excellent results on a variety of benchmarks, and contributes two new control problems that are expected to be of independent value to the scientific community.

A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionality

This work provides novel probabilistic bounds for the success of REGO in solving the original, low effective-dimensionality problem, which show its independence of the (potentially large) ambient dimension and its precise dependence on the dimensions of the effective and randomly embedded subspaces.

Parallel Sequential Random Embedding Bayesian Optimization

This study proposes a Bayesian optimization method that uses random embeddings and remains efficient even when the embedding dimension is lower than the effective dimension; experiments on benchmark problems confirm its effectiveness.

Batched Large-scale Bayesian Optimization in High-dimensional Spaces

This paper proposes ensemble Bayesian optimization (EBO) to address three current challenges in BO simultaneously: large-scale observations; high-dimensional input spaces; and selection of batch queries that balance quality and diversity.
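The quality-versus-diversity tension in batch selection can be illustrated with a toy rule that greedily picks high-scoring candidates while penalizing proximity to points already in the batch. This is only a hypothetical stand-in for illustration, not EBO's actual mechanism, which partitions the input space and uses ensembles of additive GP models.

```python
import numpy as np

# Toy batch rule: greedily pick the candidate with the best acquisition
# score minus a penalty for being close to already-chosen points.

def diverse_batch(X_cand, scores, batch_size, penalty=1.0):
    chosen = []
    for _ in range(batch_size):
        adj = scores.copy()
        for i in chosen:
            dist = np.linalg.norm(X_cand - X_cand[i], axis=1)
            adj -= penalty * np.exp(-dist)   # repel near chosen points
        adj[chosen] = -np.inf                # never repick a point
        chosen.append(int(np.argmax(adj)))
    return chosen

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (50, 2))
s = rng.normal(size=50)                      # stand-in acquisition values
print(diverse_batch(X, s, batch_size=5))
```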

Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

This work proposes an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently and is the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings.
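The one-dimensional-subspace idea is simple to sketch: repeatedly draw a random direction through the incumbent and solve the resulting 1-D problem along it. The grid search and toy objective below are placeholder assumptions; LineBO instead runs GP-based Bayesian optimization on each line and adds the safety machinery behind its guarantees.

```python
import numpy as np

# Sketch: pick a random direction through the incumbent, minimize
# along that line (here by grid search), repeat.

rng = np.random.default_rng(2)
dim = 20
f = lambda x: float(np.sum((x - 0.1) ** 2))   # toy objective

x_best = rng.uniform(-1, 1, size=dim)
best_val = f(x_best)
for _ in range(50):
    u = rng.normal(size=dim)
    u /= np.linalg.norm(u)                    # random unit direction
    ts = np.linspace(-1.0, 1.0, 101)          # grid along the line
    cands = np.clip(x_best[None, :] + ts[:, None] * u[None, :], -1, 1)
    vals = np.array([f(c) for c in cands])
    i = int(np.argmin(vals))
    if vals[i] < best_val:                    # keep the best line point
        x_best, best_val = cands[i], vals[i]

print(f"best value: {best_val:.6f}")
```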
...

References

Showing 1-10 of 79 references

Bayesian Optimization in High Dimensions via Random Embeddings

A novel random embedding idea is introduced to attack high-dimensional Bayesian optimization problems, and the resulting Random EMbedding Bayesian Optimization (REMBO) algorithm is very simple and applies to domains with both categorical and continuous variables.

Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design

This work analyzes GP-UCB, an intuitive upper-confidence-based algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
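The GP-UCB rule itself is compact: given the GP posterior mean μ(x) and standard deviation σ(x), query the point maximizing μ(x) + √β_t σ(x). The sketch below implements this with a standard RBF kernel; the fixed β, length scale, and noise level are illustrative choices, whereas the paper grows β_t with t to obtain its regret bounds.

```python
import numpy as np

# GP-UCB sketch: compute the GP posterior on a candidate grid and pick
# the maximizer of mean + sqrt(beta) * std.

def rbf(A, B, ls=0.2):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_ucb_choice(X_obs, y_obs, X_cand, beta=4.0, noise=1e-4):
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_cand)
    mu = Ks.T @ np.linalg.solve(K, y_obs)               # posterior mean
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 0, None)  # posterior variance
    ucb = mu + np.sqrt(beta) * np.sqrt(var)             # upper confidence bound
    return X_cand[np.argmax(ucb)]

rng = np.random.default_rng(3)
X_obs = rng.uniform(0, 1, (5, 1))
y_obs = np.sin(6 * X_obs[:, 0])                         # toy observations
X_cand = np.linspace(0, 1, 200)[:, None]
print("next query:", gp_ucb_choice(X_obs, y_obs, X_cand))
```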

Bayesian Multi-Scale Optimistic Optimization

This paper introduces a new technique for efficient global optimization that combines Gaussian process confidence bounds and treed simultaneous optimistic optimization to eliminate the need for auxiliary optimization of acquisition functions.

Practical Bayesian Optimization

This work examines the Bayesian response-surface approach to global optimization, which maintains a posterior model of the function being optimized by combining a prior over functions with accumulating function evaluations.

Multi-Task Bayesian Optimization

This paper proposes an adaptation of a recently developed acquisition function, entropy search, to the cost-sensitive, multi-task setting and demonstrates the utility of this new acquisition function by leveraging a small dataset to explore hyper-parameter settings for a large dataset.

Batch Bayesian Optimization via Simulation Matching

This paper proposes a novel approach to batch Bayesian optimization, providing a policy for selecting batches of inputs with the goal of optimizing the function as efficiently as possible, by using Monte-Carlo simulation.

Practical Bayesian Optimization of Machine Learning Algorithms

This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
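One concrete piece of that cost-awareness is the paper's "expected improvement per second": divide the usual expected-improvement acquisition by a model of the evaluation cost. A minimal sketch for minimization follows; the numeric inputs are placeholders for quantities that would come from GP posteriors over the objective and the evaluation cost.

```python
from math import erf, exp, pi, sqrt

# Sketch of expected improvement per unit cost: plain EI for
# minimization divided by the predicted evaluation cost.

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2)))

def ei_per_cost(mu, sigma, best, cost):
    # mu, sigma: posterior mean/std of the objective at a candidate;
    # best: incumbent value; cost: predicted evaluation cost.
    if sigma <= 0:
        return 0.0
    z = (best - mu) / sigma
    ei = (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)
    return ei / cost

print(ei_per_cost(mu=0.2, sigma=0.1, best=0.3, cost=2.0))
```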

Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures

This work proposes a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools that replace hand-tuning with a reproducible and unbiased optimization process.

High-Dimensional Gaussian Process Bandits

The SI-BO algorithm is presented, which leverages recent low-rank matrix recovery techniques to learn the underlying subspace of the unknown function and applies Gaussian process upper-confidence-bound sampling to optimize the function.

Hybrid Batch Bayesian Optimization

This work systematically analyzes Bayesian optimization using a Gaussian process as the posterior estimator and provides a hybrid algorithm that, based on the current state, dynamically switches between a sequential policy and a batch policy with variable batch sizes.
...