Corpus ID: 235485339

Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks

@article{Li2021BatchMB,
  title={Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks},
  author={Shibo Li and Robert M. Kirby and Shandian Zhe},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.09884}
}
Bayesian optimization (BO) is a powerful approach for optimizing black-box, expensive-to-evaluate functions. To enable a flexible trade-off between cost and accuracy, many applications allow the function to be evaluated at different fidelities. In order to reduce the optimization cost while maximizing the benefit-cost ratio, in this paper we propose Batch Multi-fidelity Bayesian Optimization with Deep Auto-Regressive Networks (BMBO-DARN). We use a set of Bayesian neural networks to construct…
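As a rough illustration of the setting (not the paper's method), the sketch below runs a toy batch multi-fidelity BO loop: a cheap surrogate is fit per fidelity and a naive value-per-cost score picks a batch of (input, fidelity) queries. The objective, fidelity costs, quadratic surrogates, and acquisition score are all placeholders invented for this sketch; BMBO-DARN itself builds a deep auto-regressive Bayesian surrogate and a proper batch acquisition function, neither of which this toy reproduces.

```python
import numpy as np

# Toy sketch only: the objective, fidelity costs, surrogate, and acquisition
# score are placeholders, not the BMBO-DARN model.

COSTS = np.array([1.0, 4.0, 16.0])        # assumed per-fidelity query costs

def objective(x, m):
    """Black-box function at fidelity m (0 = cheapest, 2 = target)."""
    bias = 0.3 / (m + 1)                  # lower fidelities are more biased
    return np.sin(3 * x) + 0.5 * x + bias * np.cos(20 * x)

def fit_surrogates(data):
    """Fit one cheap quadratic surrogate per fidelity (a stand-in for the
    paper's deep auto-regressive Bayesian networks)."""
    return {m: np.polyfit(X, y, deg=2) for m, (X, y) in data.items()}

def acquire_batch(models, candidates, batch_size):
    """Rank (x, fidelity) pairs by a naive 'low predicted value per unit cost'
    score and return the top batch (the objective is being minimized)."""
    scored = []
    for m, coeffs in models.items():
        pred = np.polyval(coeffs, candidates)
        scored += [(-p / COSTS[m], x, m) for p, x in zip(pred, candidates)]
    scored.sort(reverse=True)
    return [(x, m) for _, x, m in scored[:batch_size]]

rng = np.random.default_rng(0)
candidates = np.linspace(-2, 2, 200)
data = {}
for m in range(3):                        # a few initial observations per fidelity
    X0 = rng.uniform(-2, 2, 5)
    data[m] = (X0, objective(X0, m))

for step in range(10):
    models = fit_surrogates(data)
    for x, m in acquire_batch(models, candidates, batch_size=3):
        X, y = data[m]
        data[m] = (np.append(X, x), np.append(y, objective(x, m)))

X_hi, y_hi = data[2]
print("best high-fidelity point found:", X_hi[np.argmin(y_hi)])
```

The part this sketch is meant to convey is the loop structure (fit surrogates, pick a batch of fidelity-annotated queries, evaluate, repeat); a real implementation would replace the quadratic fits and the per-cost score with the paper's surrogate and batch acquisition.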


References

Showing 1–10 of 60 references
Scalable Bayesian Optimization Using Deep Neural Networks
This work shows that adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data points rather than cubically, which allows for a previously intractable degree of parallelism. (A minimal sketch of this basis-function idea appears after the reference list.)
Continuous-fidelity Bayesian Optimization with Knowledge Gradient
A novel Bayesian optimization algorithm, the continuous-fidelity knowledge gradient (cfKG) method, which can be used when fidelity is controlled by one or more continuous settings such as training data size and the number of training iterations, and which outperforms state-of-the-art algorithms when optimizing synthetic functions.
Scalable Hyperparameter Transfer Learning
This work proposes a multi-task adaptive Bayesian linear regression model for transfer learning in BO, whose complexity is linear in the number of function evaluations: one Bayesian linear regression model is associated with each black-box function optimization problem (or task), while transfer learning is achieved by coupling the models through a shared deep neural net.
A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes
This paper proposes MF-MI-Greedy, a principled algorithmic framework for addressing multi-fidelity Bayesian optimization with complex structural dependencies among multiple outputs, and proposes a simple notion of regret which incorporates the cost of different fidelities.
Bayesian Optimization with Robust Bayesian Neural Networks
This work presents a general approach for using flexible parametric models (neural networks) for Bayesian optimization, staying as close to a truly Bayesian treatment as possible and obtaining scalability through stochastic gradient Hamiltonian Monte Carlo, whose robustness is improved via a scale adaptation.
Multi-fidelity Bayesian Optimization with Max-value Entropy Search
This work proposes a novel information-theoretic approach to multi-fidelity Bayesian optimization (MFBO) based on a variant of information-based BO called max-value entropy search (MES), which greatly facilitates evaluation of the information gain in MFBO.
BOHB: Robust and Efficient Hyperparameter Optimization at Scale
This work proposes a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian neural networks, deep reinforcement learning, and convolutional neural networks.
Multi-Information Source Optimization
This work presents a novel algorithm that provides a rigorous mathematical treatment of the uncertainties arising from model discrepancies and noisy observations, and conducts an experimental evaluation demonstrating that the method consistently outperforms other state-of-the-art techniques.
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
A generative model for the validation error as a function of training set size is proposed, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, extrapolating to the full dataset.
Multi-fidelity Bayesian Optimisation with Continuous Approximations
This work develops a Bayesian optimisation method, BOCA, that achieves better regret than strategies which ignore the approximations, and outperforms several other baselines in synthetic and real experiments.
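The adaptive-basis idea from the first reference above ("Scalable Bayesian Optimization Using Deep Neural Networks") can be sketched compactly: a neural network supplies basis functions and exact Bayesian linear regression is performed on top of them, so fitting costs O(N·D²) in the number of observations N rather than the O(N³) of a GP. In the sketch below the basis is a fixed random feature map rather than a trained network, and the hyperparameters alpha and beta are simply fixed; both simplifications are mine, not the paper's.

```python
import numpy as np

# Hedged sketch of "neural network as adaptive basis + Bayesian linear
# regression". Here the basis is a fixed random tanh feature map (the paper
# trains the network), and alpha/beta are fixed rather than marginalized.

rng = np.random.default_rng(1)
D = 50                                   # number of basis functions

W = rng.normal(size=(1, D))              # random "last hidden layer" weights
b = rng.uniform(0, 2 * np.pi, size=D)

def features(x):
    """Map inputs of shape (N, 1) to basis activations of shape (N, D)."""
    return np.tanh(x @ W + b)

def fit_blr(X, y, alpha=1.0, beta=25.0):
    """Closed-form Bayesian linear regression on the basis; linear in N."""
    Phi = features(X)
    S = np.linalg.inv(alpha * np.eye(D) + beta * Phi.T @ Phi)   # posterior cov
    m = beta * S @ Phi.T @ y                                    # posterior mean
    return m, S, beta

def predict(m, S, beta, X_new):
    """Predictive mean and variance, as used by a BO acquisition function."""
    Phi = features(X_new)
    mean = Phi @ m
    var = 1.0 / beta + np.sum(Phi @ S * Phi, axis=1)
    return mean, var

X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=40)
m, S, beta = fit_blr(X, y)
mu, var = predict(m, S, beta, np.array([[0.5]]))
print(f"prediction at x=0.5: {mu[0]:.3f} +/- {np.sqrt(var[0]):.3f}")
```

Because the expensive linear algebra involves only the D×D matrix of basis functions, adding observations leaves the cost linear in N, which is the scalability claim summarized in that reference.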