Corpus ID: 54035096

Scalable Hyperparameter Transfer Learning

@inproceedings{Perrone2018ScalableHT,
  title={Scalable Hyperparameter Transfer Learning},
  author={Valerio Perrone and Rodolphe Jenatton and Matthias Seeger and C{\'e}dric Archambeau},
  booktitle={NeurIPS},
  year={2018}
}
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization. Typically, BO relies on conventional Gaussian process (GP) regression, whose algorithmic complexity is cubic in the number of evaluations. As a result, GP-based BO cannot leverage large numbers of past function evaluations, for example, to warm-start related BO runs. We propose a multi-task adaptive Bayesian linear regression model for transfer learning in…
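The abstract contrasts GP regression, whose cost grows cubically in the number of evaluations N, with Bayesian linear regression (BLR), whose cost is cubic only in the feature dimension D. A minimal NumPy sketch of a BLR surrogate illustrates the point; note that the paper's ABLR model learns neural-network basis functions shared across tasks, whereas the fixed polynomial features and the `alpha`/`beta` prior and noise precisions below are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def blr_fit(Phi, y, alpha=1.0, beta=25.0):
    """Posterior over weights given features Phi (N x D) and targets y.
    The matrix inverted is D x D, so the cost is O(D^3) in the feature
    dimension, not O(N^3) in the number of past evaluations."""
    D = Phi.shape[1]
    S_inv = alpha * np.eye(D) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ y
    return m, S

def blr_predict(Phi_star, m, S, beta=25.0):
    """Predictive mean and variance for new feature rows Phi_star."""
    mean = Phi_star @ m
    var = 1.0 / beta + np.sum((Phi_star @ S) * Phi_star, axis=1)
    return mean, var

# Toy demo: a noisy quadratic stands in for a black-box objective,
# with polynomial features as the (assumed) basis expansion.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = (X[:, 0] - 0.3) ** 2 + 0.05 * rng.standard_normal(50)
feats = lambda X: np.hstack([np.ones_like(X), X, X ** 2])
m, S = blr_fit(feats(X), y)
mu, var = blr_predict(feats(np.array([[0.3]])), m, S)
```

Because only the D x D posterior covariance is inverted, thousands of evaluations from past BO runs can be folded into the surrogate cheaply, which is what makes warm-starting feasible at scale.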
Citations

Few-Shot Bayesian Optimization with Deep Kernel Surrogates
Hyperparameter Transfer Learning with Adaptive Complexity
Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning
Meta-Learning Acquisition Functions for Bayesian Optimization
A Copula approach for hyperparameter transfer learning
Hyp-RL: Hyperparameter Optimization by Reinforcement Learning
Scalable and Flexible Deep Bayesian Optimization with Auxiliary Information for Scientific Problems
Transfer Learning in Bayesian Optimization
Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization

References

Showing 1-10 of 50 references
Scalable Bayesian Optimization Using Deep Neural Networks
Scalable Meta-Learning for Bayesian Optimization
Bayesian Optimization with Tree-structured Dependencies
Bayesian Optimization with Robust Bayesian Neural Networks
Multi-Task Bayesian Optimization
Initializing Bayesian Hyperparameter Optimization via Meta-Learning
Collaborative hyperparameter tuning
Hyperparameter Optimization with Factorized Multilayer Perceptrons