Corpus ID: 204575755

BoTorch: Programmable Bayesian Optimization in PyTorch

@article{Balandat2019BoTorchPB,
  title={BoTorch: Programmable Bayesian Optimization in PyTorch},
  author={Maximilian Balandat and Brian Karrer and Daniel R. Jiang and Sam Daulton and Benjamin Letham and Andrew Gordon Wilson and Eytan Bakshy},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.06403}
}
Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, engineering, physics, and experimental design. We introduce BoTorch, a modern programming framework for Bayesian optimization that combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques. BoTorch's modular design facilitates flexible specification…
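For orientation, here is a minimal sketch of one closed-loop iteration in the style the abstract describes (fit a GP surrogate, build an MC acquisition function, optimize it with gradients). It assumes a recent botorch/gpytorch install; the names SingleTaskGP, qExpectedImprovement, optimize_acqf, and fit_gpytorch_mll come from the library's public API, while the toy objective, bounds, and batch size are placeholders, not from the paper.

```python
# Minimal closed-loop sketch (assumes a recent botorch release with gpytorch installed).
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy objective and initial design (placeholders, not from the paper).
def objective(x: torch.Tensor) -> torch.Tensor:
    return -(x - 0.5).pow(2).sum(dim=-1, keepdim=True)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])
train_X = torch.rand(8, 2)
train_Y = objective(train_X)

# Fit a GP surrogate by maximizing the marginal log likelihood.
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# MC acquisition function, optimized with gradients via multi-start L-BFGS.
qEI = qExpectedImprovement(model=model, best_f=train_Y.max())
candidates, _ = optimize_acqf(
    acq_function=qEI,
    bounds=bounds,
    q=2,                # batch of two new points
    num_restarts=10,
    raw_samples=256,
)
print(candidates)
```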
Prior-guided Bayesian Optimization
TLDR: Prior-guided Bayesian Optimization (PrBO) allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions, which are much less intuitive for users.
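As a purely illustrative sketch of the general idea (not PrBO's actual formulation), one simple way to inject a user prior over the input space is to weight an acquisition value by that prior and decay its influence over time; the helper name and decay rule below are hypothetical.

```python
import torch

def prior_weighted_acquisition(acq_value: torch.Tensor, x: torch.Tensor,
                               user_prior, beta: float = 1.0, iteration: int = 1) -> torch.Tensor:
    """Illustrative heuristic only, not the paper's method.

    acq_value:  scalar acquisition value at candidate x
    user_prior: callable returning the user's prior density over the input space at x
    The prior's exponent shrinks as iterations accumulate, so observed data
    eventually dominates the user's initial beliefs.
    """
    return acq_value * user_prior(x) ** (beta / iteration)
```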
Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees
TLDR: This paper provides the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree, and equivalently optimizes all decision variables in the full tree jointly, in a "one-shot" fashion.
URSABench: Comprehensive Benchmarking of Approximate Bayesian Inference Methods for Deep Neural Networks
TLDR: Initial work is described on the development of URSABench, an open-source suite of benchmarking tools for comprehensive assessment of approximate Bayesian inference methods, with a focus on deep learning-based classification tasks.
Open source evolutionary structured optimization
TLDR: This work explains how this API can help analyze optimization methods and how to use it for the optimization of a structured photonics physical testbed, and shows that this can produce significant improvements.
Quantity vs. Quality: On Hyperparameter Optimization for Deep Reinforcement Learning
TLDR: It is concluded that Bayesian optimization with a noise-robust acquisition function is the best choice for hyperparameter optimization in reinforcement learning tasks.
Lookahead Acquisition Functions for Finite-Horizon Time-Dependent Bayesian Optimization and Application to Quantum Optimal Control
We propose a novel Bayesian method to solve the maximization of a time-dependent, expensive-to-evaluate stochastic oracle. We are interested in the decision that maximizes the oracle at a finite time…
Scaling Hamiltonian Monte Carlo Inference for Bayesian Neural Networks with Symmetric Splitting
TLDR: This work introduces a new symmetric integration scheme for split HMC that does not rely on stochastic gradients and is easy to implement with a single GPU, demonstrating HMC as a feasible option when considering inference schemes for large-scale machine learning problems.
Asynchronous ε-Greedy Bayesian Optimisation
TLDR: A novel asynchronous BO method, AEGiS (Asynchronous ε-Greedy Global Search), is developed; it combines greedy search, exploiting the surrogate's mean prediction, with Thompson sampling and random selection from the approximate Pareto set describing the trade-off between exploitation and exploration.
Bayesian Optimization for Min Max Optimization
TLDR: The Bayesian optimization setting is extended to min-max optimization by extending the two acquisition functions Entropy Search and Knowledge Gradient, and it is shown that these acquisition functions yield better solutions and converge faster to the optimum than the benchmark settings.
Amazon SageMaker Automatic Model Tuning: Scalable Black-box Optimization
TLDR: Amazon SageMaker Automatic Model Tuning (AMT) is presented, a fully managed system for black-box optimization at scale that finds the best version of a machine learning model by repeatedly training it with different hyperparameter configurations.

References

Showing 1-10 of 110 references
RoBO: A Flexible and Robust Bayesian Optimization Framework in Python
TLDR: The BSD-licensed Python package RoBO, released with this paper, offers the only available implementations of Bayesian optimization with Bayesian neural networks, multi-task optimization, and fast Bayesian hyperparameter optimization on large datasets (Fabolas).
Bayesian Optimization with Robust Bayesian Neural Networks
TLDR: This work presents a general approach for using flexible parametric models (neural networks) for Bayesian optimization, staying as close to a truly Bayesian treatment as possible and obtaining scalability through stochastic gradient Hamiltonian Monte Carlo, whose robustness is improved via a scale adaptation.
The reparameterization trick for acquisition functions
TLDR: Here, it is demonstrated how many popular acquisition functions can be formulated as Gaussian integrals amenable to the reparameterization trick and, in turn, gradient-based optimization.
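To make the idea concrete, below is a small sketch in plain PyTorch (not the paper's code) of the reparameterization trick applied to a Monte-Carlo estimate of q-Expected Improvement: posterior samples are written as mu + L z with z ~ N(0, I), so the estimate is differentiable with respect to the candidate-dependent mean and covariance factor. The function name and shapes are illustrative assumptions.

```python
import torch

def mc_qei(mu: torch.Tensor, L: torch.Tensor, best_f: float, n_samples: int = 256) -> torch.Tensor:
    """Reparameterized MC estimate of q-EI (illustrative sketch).

    mu: posterior mean at the q candidate points, shape (q,)
    L:  Cholesky factor of the posterior covariance, shape (q, q)
    """
    z = torch.randn(n_samples, mu.shape[0])          # base samples, z ~ N(0, I)
    samples = mu + z @ L.T                           # reparameterized posterior samples, (n_samples, q)
    improvement = (samples - best_f).clamp_min(0.0)  # per-point improvement over the incumbent
    return improvement.max(dim=-1).values.mean()     # best-of-batch, averaged over samples

# Because the estimate is a differentiable function of mu and L, gradients flow
# back to the candidate locations that produced them. (BoTorch additionally fixes
# the base samples across evaluations, i.e. a sample average approximation, so the
# resulting objective is deterministic and easier to optimize.)
mu = torch.tensor([0.1, 0.3], requires_grad=True)
L = torch.eye(2) * 0.2
value = mc_qei(mu, L, best_f=0.25)
value.backward()
print(mu.grad)
```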
GPflowOpt: A Bayesian Optimization Library using TensorFlow
A novel Python framework for Bayesian optimization known as GPflowOpt is introduced. The package is based on the popular GPflow library for Gaussian processes, leveraging the benefits of TensorFlow…
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization
TLDR: It is demonstrated that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy.
Bayesian Optimization with Gradients
TLDR: This paper develops a novel Bayesian optimization algorithm, the derivative-enabled knowledge gradient (dKG), for which one-step Bayes-optimality, asymptotic consistency, and greater one-step value of information than is possible in the derivative-free setting are shown.
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
TLDR: A generative model for the validation error as a function of training set size is proposed, which learns during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset.
ProBO: Versatile Bayesian Optimization Using Any Probabilistic Programming Language
TLDR: ProBO is developed, a BO procedure that uses only standard operations common to most PPLs, and allows a user to drop in a model built with an arbitrary PPL and use it directly in BO.
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions
TLDR: This work proposes a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES), which codifies the intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution.
Maximizing acquisition functions for Bayesian optimization
TLDR: This work shows that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization and identifies a common family of acquisition functions, including EI and UCB, whose characteristics not only facilitate but justify use of greedy approaches for their maximization.
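As a rough sketch of what "amenable to gradient-based optimization" means in practice, the helper below runs multi-start gradient ascent on any differentiable acquisition function using plain PyTorch. The function name, restart counts, and box-projection step are assumptions for illustration (and it presumes a recent PyTorch where clamp_ accepts tensor bounds); it is not how any particular library implements its optimizer.

```python
import torch

def maximize_acquisition(acq, bounds, num_restarts=8, steps=100, lr=0.05):
    """Multi-start gradient ascent on a differentiable acquisition function (sketch).

    acq:    callable mapping a (1, d) candidate tensor to a scalar acquisition value.
    bounds: (2, d) tensor of box bounds, row 0 = lower, row 1 = upper.
    """
    d = bounds.shape[1]
    lower, upper = bounds[0], bounds[1]
    best_x, best_val = None, -float("inf")
    for _ in range(num_restarts):
        # Random restart inside the box.
        x = (lower + (upper - lower) * torch.rand(1, d)).requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = -acq(x)            # ascend the acquisition by descending its negation
            loss.backward()
            opt.step()
            with torch.no_grad():
                x.clamp_(min=lower, max=upper)  # project back into the box
        val = acq(x.detach()).item()
        if val > best_val:
            best_x, best_val = x.detach().clone(), val
    return best_x, best_val
```

If the acquisition is an MC estimate, fixing its base samples before the inner loop keeps the objective deterministic, which is what makes greedy gradient ascent like this well behaved.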