Corpus ID: 235293717

JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data

@article{Hakhamaneshi2021JUMBOSM,
  title={JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data},
  author={Kourosh Hakhamaneshi and P. Abbeel and Vladimir Stojanovic and Aditya Grover},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.00942}
}
The goal of Multi-task Bayesian Optimization (MBO) is to minimize the number of queries required to accurately optimize a target black-box function, given access to offline evaluations of other auxiliary functions. When offline datasets are large, the scalability of prior approaches comes at the expense of expressivity and inference quality. We propose JUMBO, an MBO algorithm that sidesteps these limitations by querying additional data based on a combination of acquisition signals derived from… 
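
The abstract stops short of the full method, so the sketch below is only a hedged illustration of the general multi-task BO loop it describes: a "warm" surrogate fit on offline auxiliary data and a "cold" GP fit on online target evaluations, whose acquisition signals are combined with a fixed weight. The kernel-regression warm model, the UCB acquisition, the 50/50 weighting, and all names are assumptions for illustration, not JUMBO's actual components.

```python
# Hedged sketch of a generic multi-task BO loop that mixes two acquisition
# signals: one from a "warm" surrogate fit on offline auxiliary data and one
# from a "cold" GP fit only on online target evaluations. The surrogates and
# the fixed 50/50 weighting are illustrative assumptions, not JUMBO itself.
import numpy as np

rng = np.random.default_rng(0)

def target(x):                      # toy stand-in for the black-box target
    return np.sin(3 * x) + 0.1 * x

def warm_mean(xq, offline_x, offline_y):
    # Cheap "warm" surrogate: kernel regression on offline auxiliary data.
    w = np.exp(-0.5 * ((xq[:, None] - offline_x[None, :]) / 0.3) ** 2)
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return w @ offline_y

def gp_posterior(xq, X, y, ls=0.3, noise=1e-4):
    # Minimal RBF-kernel GP posterior (mean and standard deviation).
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(xq, X), k(xq, xq)
    mu = Ks @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Offline data from a related auxiliary function.
offline_x = rng.uniform(-2, 2, 50)
offline_y = np.sin(3 * offline_x)

# Online loop on the target function.
X = rng.uniform(-2, 2, 2)
y = target(X)
grid = np.linspace(-2, 2, 400)
for _ in range(10):
    mu, sd = gp_posterior(grid, X, y)
    ucb_cold = mu + 2.0 * sd                          # signal from online GP
    mu_warm = warm_mean(grid, offline_x, offline_y)   # signal from offline data
    acq = 0.5 * ucb_cold + 0.5 * mu_warm              # illustrative combination
    x_next = grid[np.argmax(acq)]
    X, y = np.append(X, x_next), np.append(y, target(x_next))

print("best observed value:", y.max())
```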

Citations

Generative Pretraining for Black-Box Optimization

This work proposes Black-box Optimization Transformer (BOOMER), a generative framework for pretraining black-box optimizers using offline datasets, and introduces mechanisms to control the rate at which a trajectory transitions from exploration to exploitation, which are used to generalize outside the offline data at test time.

Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling

This work proposes Transformer Neural Processes (TNPs), a new member of the NP family that casts uncertainty-aware meta learning as a sequence modeling problem and achieves state-of-the-art performance on various benchmark problems, outperforming all previous NP variants.

Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design

It is shown that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties with up to 10x more sample efficiency compared to a randomly initialized model.

References

Showing 1–10 of 38 references

Multi-Task Bayesian Optimization

This paper proposes an adaptation of a recently developed acquisition function, entropy search, to the cost-sensitive, multi-task setting and demonstrates the utility of this new acquisition function by leveraging a small dataset to explore hyperparameter settings for a large dataset.
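
As a rough, hedged illustration of the surrogate behind this line of work, the sketch below builds a multi-task GP with an intrinsic-coregionalization kernel k((x,t),(x',t')) = B[t,t'] * k_x(x,x') and computes the posterior mean on the target task. The inter-task matrix B, the lengthscale, and the toy tasks are assumed values, and the entropy-search acquisition itself is omitted.

```python
# Minimal multi-task GP sketch: a coregionalization kernel couples a cheap
# auxiliary task (many points) with the target task (few points).
import numpy as np

def k_x(a, b, ls=0.4):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

B = np.array([[1.0, 0.8],
              [0.8, 1.0]])          # assumed inter-task covariance

def mt_kernel(Xa, Ta, Xb, Tb):
    return B[np.ix_(Ta, Tb)] * k_x(Xa, Xb)

# Observations: many evaluations of task 0, few of the target task 1.
X0, X1 = np.linspace(-2, 2, 20), np.array([-1.0, 0.5])
X = np.concatenate([X0, X1])
T = np.concatenate([np.zeros(len(X0), int), np.ones(len(X1), int)])
y = np.where(T == 0, np.sin(X), np.sin(X) + 0.2)   # related toy tasks

# Posterior mean on the target task (t = 1) over a grid.
grid = np.linspace(-2, 2, 200)
K = mt_kernel(X, T, X, T) + 1e-6 * np.eye(len(X))
Ks = mt_kernel(grid, np.ones(len(grid), int), X, T)
mu_target = Ks @ np.linalg.solve(K, y)
print(mu_target[:5])
```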

Scalable Bayesian Optimization Using Deep Neural Networks

This work shows that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data points rather than cubically, which allows for a previously intractable degree of parallelism.
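
A minimal sketch of the idea, assuming a fixed random tanh feature map in place of the trained network: Bayesian linear regression on basis features phi(x) costs O(N·D²) in the number of observations N, in contrast to the O(N³) of an exact GP. The feature map, priors, and toy data below are illustrative assumptions.

```python
# Bayesian linear regression on basis features phi(x); the feature map is a
# fixed stand-in for a network's last hidden layer.
import numpy as np

rng = np.random.default_rng(1)
D = 50                                   # number of basis functions
W, b = rng.normal(size=(1, D)), rng.normal(size=D)

def phi(x):
    return np.tanh(x[:, None] @ W + b)

X = rng.uniform(-3, 3, 200)
y = np.sin(X) + 0.05 * rng.normal(size=X.shape)

alpha, beta = 1.0, 100.0                 # prior precision, noise precision
Phi = phi(X)                             # (N, D): cost grows linearly with N
A = alpha * np.eye(D) + beta * Phi.T @ Phi
m = beta * np.linalg.solve(A, Phi.T @ y)

xq = np.linspace(-3, 3, 5)
Pq = phi(xq)
mean = Pq @ m
var = 1.0 / beta + np.einsum("nd,dk,nk->n", Pq, np.linalg.inv(A), Pq)
print(mean, np.sqrt(var))
```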

Scalable Hyperparameter Transfer Learning

This work proposes a multi-task adaptive Bayesian linear regression model for transfer learning in BO, whose complexity is linear in the number of function evaluations: one Bayesian linear regression model is associated with each black-box function optimization problem (or task), while transfer learning is achieved by coupling the models through a shared deep neural net.
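
The coupling can be sketched as follows, again with a fixed stand-in for the jointly trained network: one shared feature map and one independent Bayesian linear regression head per task, so each task's update stays linear in its own evaluations. The names and toy objectives are assumptions.

```python
# One shared feature map, one Bayesian linear regression head per task.
import numpy as np

rng = np.random.default_rng(2)
W, b = rng.normal(size=(1, 30)), rng.normal(size=30)
phi = lambda x: np.tanh(x[:, None] @ W + b)     # shared across all tasks

def blr_head(X, y, alpha=1.0, beta=100.0):
    P = phi(X)
    A = alpha * np.eye(P.shape[1]) + beta * P.T @ P
    return np.linalg.solve(A, beta * P.T @ y)   # posterior mean weights

heads = {}
for t in range(3):                              # three related toy tasks
    X = rng.uniform(-3, 3, 100)
    y = np.sin(X + 0.3 * t)
    heads[t] = blr_head(X, y)                   # linear in each task's data

x_new = np.linspace(-1, 1, 3)
print({t: phi(x_new) @ w for t, w in heads.items()})
```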

Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization

This work proposes a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing the algorithm to utilize the proven generalization capabilities of Gaussian processes.

Scalable Meta-Learning for Bayesian Optimization

An ensemble model is developed that can incorporate the results of past optimization runs, while avoiding the poor scaling that comes with putting all results into a single Gaussian process model.
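
A hedged sketch of the ensembling idea: each past run keeps its own cheap surrogate, and their predictions on the current task are combined with weights reflecting how well each surrogate explains the few target observations. The kernel-regression surrogate and the inverse-squared-error weighting are stand-ins, not the paper's model.

```python
# Ensemble of per-run surrogates, weighted by fit to the target observations.
import numpy as np

rng = np.random.default_rng(3)

def kernel_regress(xq, X, y, ls=0.3):
    w = np.exp(-0.5 * ((xq[:, None] - X[None, :]) / ls) ** 2)
    return (w @ y) / (w.sum(axis=1) + 1e-12)

# Surrogates fit independently on three past optimization runs (toy data).
past = []
for i in range(3):
    X = rng.uniform(-2, 2, 40)
    past.append((X, np.sin(X) + 0.2 * i))

# A handful of evaluations on the current (target) task.
Xt = np.array([-1.0, 0.0, 1.2])
yt = np.sin(Xt)

weights = []
for X, y in past:
    err = np.mean((kernel_regress(Xt, X, y) - yt) ** 2)
    weights.append(1.0 / (err + 1e-6))          # heuristic weighting
weights = np.array(weights) / np.sum(weights)

grid = np.linspace(-2, 2, 5)
ensemble_mean = sum(w * kernel_regress(grid, X, y)
                    for w, (X, y) in zip(weights, past))
print(ensemble_mean)
```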

Initializing Bayesian Hyperparameter Optimization via Meta-Learning

This paper mimics a strategy used by human domain experts: speeding up optimization by starting from promising configurations that performed well on similar datasets. This approach substantially improves the state of the art for the more complex combined algorithm selection and hyperparameter optimization problem.
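
A small sketch of this warm-starting strategy under assumed meta-features: normalize dataset meta-features, find the nearest past datasets, and seed the optimizer with their best configurations. The meta-feature vectors, distance measure, and stored configurations below are made up for illustration.

```python
# Warm-start BO with the best configurations from the most similar past datasets.
import numpy as np

# (meta_features, best configurations found on that dataset) for past datasets
history = {
    "dsA": (np.array([1.0e4, 20.0, 0.3]), [{"lr": 0.01, "depth": 6}]),
    "dsB": (np.array([5.0e2, 10.0, 0.7]), [{"lr": 0.10, "depth": 3}]),
    "dsC": (np.array([9.0e3, 18.0, 0.4]), [{"lr": 0.03, "depth": 5}]),
}

new_meta = np.array([1.1e4, 22.0, 0.35])     # meta-features of the new dataset

def zscore(rows):
    M = np.vstack(rows)
    return (M - M.mean(0)) / (M.std(0) + 1e-12)

names = list(history)
Z = zscore([history[n][0] for n in names] + [new_meta])
dists = np.linalg.norm(Z[:-1] - Z[-1], axis=1)

# Seed the optimizer with configurations from the 2 nearest datasets.
order = np.argsort(dists)[:2]
initial_configs = [c for i in order for c in history[names[i]][1]]
print(initial_configs)
```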

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

A generative model for the validation error as a function of training set size is proposed, which learns during the optimization process and allows exploration of preliminary configurations on small subsets by extrapolating to the full dataset.

Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves

This paper mimics the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve, enabling state-of-the-art hyperparameter optimization methods for DNNs to find DNN settings that yield better performance than those chosen by human experts.
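
A simplified stand-in for the idea (the paper uses a probabilistic ensemble of parametric curve models): fit a single power law to the first epochs of a run's validation accuracy, extrapolate to the full budget, and terminate if the prediction is unlikely to beat the best run so far. The curve form, margin, and toy numbers are assumptions.

```python
# Early termination via learning-curve extrapolation with a single power-law fit.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    return a - b * np.power(t, -c)

epochs = np.arange(1, 11)
noise = 0.005 * np.random.default_rng(4).normal(size=10)
partial_acc = 0.9 - 0.4 * epochs ** -0.7 + noise   # first 10 epochs of a run

params, _ = curve_fit(power_law, epochs, partial_acc,
                      p0=[0.9, 0.4, 0.7], maxfev=10000)
predicted_final = power_law(300, *params)          # extrapolate to full budget

best_so_far = 0.88
if predicted_final < best_so_far - 0.01:           # margin is an arbitrary choice
    print("terminate run early:", predicted_final)
else:
    print("keep training:", predicted_final)
```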

Scalable Gaussian process-based transfer surrogates for hyperparameter optimization

This work proposes to learn individual surrogate models on the observations of each dataset and then combine all surrogates into a joint one using ensembling techniques, and extends the framework to directly estimate the acquisition function in the same setting using a novel technique named the “transfer acquisition function”.
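
The transfer-acquisition idea can be sketched, under assumptions, as a weighted combination of expected improvement computed under each dataset's surrogate; uniform weights and closed-form toy posteriors below stand in for the fitted surrogates and the learned weighting.

```python
# Weighted combination of expected improvement across per-dataset surrogates.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, best):
    z = (mu - best) / (sd + 1e-12)
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

grid = np.linspace(-2, 2, 200)

# Posterior means/stds from surrogates of two past datasets and the target
# task (toy closed-form stand-ins for fitted GPs).
surrogates = [
    (np.sin(grid), 0.05 + 0.0 * grid),
    (np.sin(grid + 0.2), 0.05 + 0.0 * grid),
    (np.sin(grid) + 0.1, 0.30 + 0.0 * grid),   # target task: few points, high std
]
best_seen = 0.6
weights = np.ones(len(surrogates)) / len(surrogates)   # assumed uniform weights

taf = sum(w * expected_improvement(mu, sd, best_seen)
          for w, (mu, sd) in zip(weights, surrogates))
print("next query:", grid[np.argmax(taf)])
```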

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning

This work introduces a method to automatically design the BO search space by relying on evaluations of previous black-box functions, departing from the common practice of defining arbitrary search ranges a priori and instead considering search-space geometries learnt from historical data.
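
As a hedged illustration of learning a search space from history: collect the top configurations from each previous task and define the new search range as a padded bounding box around them. The box heuristic, padding factor, and toy tasks are assumptions; the paper derives the geometry more carefully.

```python
# Learn a search box from the top configurations of past black-box tasks.
import numpy as np

rng = np.random.default_rng(5)

def top_k(configs, scores, k=3):
    return configs[np.argsort(scores)[-k:]]

# Historical evaluations in a toy [log10(lr), dropout] space.
best_points = []
for shift in (0.0, 0.2, -0.1):
    configs = rng.uniform([-4.0, 0.0], [0.0, 0.9], size=(50, 2))
    scores = -(configs[:, 0] + 2.0 + shift) ** 2 - (configs[:, 1] - 0.3) ** 2
    best_points.append(top_k(configs, scores))
best_points = np.vstack(best_points)

pad = 0.1 * (best_points.max(0) - best_points.min(0))
low, high = best_points.min(0) - pad, best_points.max(0) + pad
print("learned search box:", low, high)
```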