Corpus ID: 233388202

Scalable and Flexible Deep Bayesian Optimization with Auxiliary Information for Scientific Problems

@article{Kim2021ScalableAF,
  title={Scalable and Flexible Deep Bayesian Optimization with Auxiliary Information for Scientific Problems},
  author={Samuel Kim and Peter Y. Lu and Charlotte Loh and Jamie A. Smith and Jasper Snoek and M. Soljačić},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.11667}
}
Bayesian optimization (BO) is a popular paradigm for global optimization of expensive black-box functions, but there are many domains where the function is not completely black-box. The data may have some known structure, e.g., symmetries, and the data generation process can yield useful intermediate or auxiliary information in addition to the value of the optimization objective. However, surrogate models traditionally employed in BO, such as Gaussian Processes (GPs), scale poorly with dataset size…
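The cubic scaling alluded to above comes from the Cholesky factorization of the n × n kernel matrix inside the GP surrogate. A minimal sketch of a GP-based BO loop on a 1-D toy objective (assumed here purely for illustration; this is not the paper's method) makes that bottleneck explicit:

```python
# Minimal Bayesian-optimization loop with a GP surrogate (illustration only).
# The O(n^3) Cholesky solve on the n x n kernel matrix is the scaling
# bottleneck the abstract refers to; nothing here is the paper's own method.
import numpy as np

def objective(x):                                 # toy black-box function (assumption)
    return np.sin(3 * x) + 0.1 * x**2

def rbf(a, b, ls=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))        # n x n kernel matrix
    L = np.linalg.cholesky(K)                     # O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 3)                         # initial design
y = objective(X)
grid = np.linspace(-2, 2, 200)
for _ in range(10):                               # BO iterations
    mu, sd = gp_posterior(X, y, grid)
    lcb = mu - 2.0 * sd                           # lower-confidence-bound acquisition
    x_next = grid[np.argmin(lcb)]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))
print("best x, f(x):", X[np.argmin(y)], y.min())
```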

References

Showing 1–10 of 52 references.
Scalable Bayesian Optimization Using Deep Neural Networks
TLDR: This work shows that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically, which allows for a previously intractable degree of parallelism.
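A minimal sketch of the adaptive-basis-function idea the summary describes, with a fixed random tanh layer standing in (as an assumption) for the trained network's last hidden layer:

```python
# Learn (here: fix) a feature map phi(x), then do Bayesian linear regression
# on top, so the cost is linear in the number of observations n instead of
# the GP's cubic cost.  Illustration only, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(1)
D = 50                                            # number of basis functions
W, b = rng.normal(size=(D, 1)), rng.normal(size=D)

def phi(x):                                       # x: (n,) -> features (n, D)
    return np.tanh(x[:, None] * W.T + b)

def blr_posterior(X, y, Xs, alpha=1.0, beta=25.0):
    """Bayesian linear regression in feature space; O(n D^2 + D^3), not O(n^3)."""
    P, Ps = phi(X), phi(Xs)
    A = alpha * np.eye(D) + beta * P.T @ P        # D x D system, independent of n
    m = beta * np.linalg.solve(A, P.T @ y)
    mu = Ps @ m
    var = 1.0 / beta + np.sum(Ps * np.linalg.solve(A, Ps.T).T, axis=1)
    return mu, np.sqrt(var)

X = np.linspace(-2, 2, 20)
y = np.sin(3 * X) + 0.05 * rng.normal(size=20)
mu, sd = blr_posterior(X, y, np.linspace(-2, 2, 5))
print(np.round(mu, 2), np.round(sd, 2))
```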
Scalable Hyperparameter Transfer Learning
TLDR: This work proposes a multi-task adaptive Bayesian linear regression model for transfer learning in BO, whose complexity is linear in the function evaluations: one Bayesian linear regression model is associated with each black-box function optimization problem (or task), while transfer learning is achieved by coupling the models through a shared deep neural net.
Bayesian Optimization with Robust Bayesian Neural Networks
TLDR: This work presents a general approach for using flexible parametric models (neural networks) for Bayesian optimization, staying as close to a truly Bayesian treatment as possible and obtaining scalability through stochastic gradient Hamiltonian Monte Carlo, whose robustness is improved via a scale adaptation.
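The sampler behind this approach is stochastic gradient Hamiltonian Monte Carlo (SGHMC). A minimal sketch of the SGHMC update rule on a 1-D toy potential with noisy gradients (the paper's scale adaptation and the actual network weights are omitted; all values here are illustrative assumptions):

```python
# Minimal SGHMC sketch on the toy potential U(theta) = 0.5 * theta^2, whose
# target distribution is N(0, 1).  In the cited work theta would be the
# network weights and the gradient would come from minibatches.
import numpy as np

rng = np.random.default_rng(2)

def noisy_grad_U(theta):                 # stochastic gradient of the potential
    return theta + 0.1 * rng.normal()

eta, alpha = 1e-2, 0.1                   # step size and friction
theta, v = 0.0, 0.0
samples = []
for t in range(5000):
    noise = rng.normal(scale=np.sqrt(2 * alpha * eta))
    v = (1 - alpha) * v - eta * noisy_grad_U(theta) + noise
    theta = theta + v
    if t > 1000:                         # discard burn-in
        samples.append(theta)
print("posterior mean ~ 0:", np.mean(samples), "std ~ 1:", np.std(samples))
```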
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
TLDR: A generative model for the validation error as a function of training set size is proposed, which is learned during the optimization process and allows preliminary configurations to be explored on small subsets by extrapolating to the full dataset.
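The extrapolation intuition can be illustrated by fitting a simple power-law learning curve to validation errors measured on data subsets (the cited paper does this inside the GP surrogate; the numbers below are made up):

```python
# Fit err(s) ~ a * s^(-b) to cheap small-subset evaluations and predict the
# full-data validation error.  All quantities are illustrative assumptions.
import numpy as np

subset_sizes = np.array([500, 1000, 2000, 4000])          # assumed subset sizes
val_errors   = np.array([0.42, 0.31, 0.24, 0.19])         # assumed measurements

# Fit log err = intercept + slope * log s by least squares, then extrapolate.
slope, intercept = np.polyfit(np.log(subset_sizes), np.log(val_errors), 1)
predicted_full = np.exp(intercept) * 50_000 ** slope       # assumed full dataset of 50k points
print(f"predicted full-data validation error ~ {predicted_full:.3f}")
```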
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
TLDR: This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
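A minimal sketch of the ensemble idea, using random-feature ridge regressions in place of neural networks and omitting the paper's NLL training and adversarial smoothing (both substitutions are assumptions made here for brevity):

```python
# Train several independently initialized models and read predictive
# uncertainty off the spread of their predictions.
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-2, 2, 30)
y = np.sin(3 * X) + 0.05 * rng.normal(size=30)
X_test = np.linspace(-3, 3, 7)                       # includes out-of-range points

def fit_member(seed, D=40, lam=1e-2):
    r = np.random.default_rng(seed)
    W, b = r.normal(size=D), r.normal(size=D)
    feats = lambda x: np.tanh(np.outer(x, W) + b)    # random tanh features
    P = feats(X)
    w = np.linalg.solve(P.T @ P + lam * np.eye(D), P.T @ y)   # ridge fit
    return lambda x: feats(x) @ w

members = [fit_member(seed) for seed in range(5)]    # 5-member ensemble
preds = np.stack([m(X_test) for m in members])       # (members, test points)
mean, std = preds.mean(axis=0), preds.std(axis=0)
print(np.round(mean, 2))
print(np.round(std, 2))                              # spread is typically larger off the data
```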
Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems
TLDR: A new method is proposed with the objective of endowing the DNN with uncertainty quantification for both sources of uncertainty, i.e., the parametric uncertainty and the approximation uncertainty, and it can be readily applied to other types of stochastic PDEs in multiple dimensions.
COMBO: An efficient Bayesian optimization library for materials science
TLDR: An efficient protocol for Bayesian optimization that employs Thompson sampling, random feature maps, one-rank Cholesky update, and automatic hyperparameter tuning is designed and implemented as an open-source Python library called COMBO (COMmon Bayesian Optimization library).
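A minimal sketch of the loop this summary describes, combining random Fourier features, a Bayesian linear-regression posterior, and Thompson sampling (the library's Cholesky updates and hyperparameter tuning are omitted; the candidate set and objective are assumptions):

```python
# Thompson sampling with random Fourier features: sample one weight vector
# from the Bayesian linear-regression posterior and evaluate the candidate
# that optimizes that sample.  Illustration only, not the COMBO implementation.
import numpy as np

rng = np.random.default_rng(4)
candidates = np.linspace(0, 1, 200)                  # assumed discrete candidate set

D = 100                                              # number of random features
omega, tau = rng.normal(size=D) / 0.2, rng.uniform(0, 2 * np.pi, D)
def phi(x):                                          # random Fourier features of an RBF kernel
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, omega) + tau)

def objective(x):                                    # toy noisy black box (assumption)
    return -(x - 0.3) ** 2 + 0.05 * rng.normal()

X = list(rng.uniform(0, 1, 3))
y = [objective(x) for x in X]
for _ in range(15):
    P = phi(np.array(X))
    A = np.eye(D) + 25.0 * P.T @ P                   # posterior precision (alpha=1, beta=25)
    m = 25.0 * np.linalg.solve(A, P.T @ np.array(y)) # posterior mean
    cov = np.linalg.inv(A)
    L = np.linalg.cholesky((cov + cov.T) / 2)
    w = m + L @ rng.normal(size=D)                   # Thompson sample of the weights
    x_next = candidates[np.argmax(phi(candidates) @ w)]
    X.append(x_next)
    y.append(objective(x_next))
print("best observed x:", X[int(np.argmax(y))])
```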
Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations
TLDR: This two-part treatise introduces physics-informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations – and demonstrates how these networks can be used to infer solutions to partial differential equations and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters.
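For reference, the composite training loss this summary alludes to takes the following form in that paper's notation, where u(t, x) is the network approximating the PDE solution and N[·] the nonlinear differential operator:

```latex
% PDE residual and physics-informed training loss (Raissi et al., Part I notation)
f := u_t + \mathcal{N}[u], \qquad
\mathcal{L} \;=\; \frac{1}{N_u}\sum_{i=1}^{N_u}\bigl|u(t_u^i, x_u^i) - u^i\bigr|^2
\;+\; \frac{1}{N_f}\sum_{j=1}^{N_f}\bigl|f(t_f^j, x_f^j)\bigr|^2
```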
Deep Kernel Learning
We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods. Specifically, we transform the inputs…
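A minimal sketch of the deep-kernel construction, warping inputs through a small network before applying an ordinary RBF kernel; the network here is fixed and untrained purely as an assumption, whereas the cited paper trains it jointly with the kernel hyperparameters:

```python
# Deep kernel: k(x, x') = k_rbf(g(x), g(x')) for a learned warp g.
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)      # toy 1 -> 8 -> 2 "network"
W2 = rng.normal(size=(8, 2))

def g(x):                                                  # x: (n,) -> warped inputs (n, 2)
    return np.tanh(x[:, None] @ W1 + b1) @ W2

def deep_kernel(a, b, ls=1.0):
    ga, gb = g(a), g(b)
    d2 = ((ga[:, None, :] - gb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

X = np.linspace(-2, 2, 10)
y = np.sin(3 * X)
K = deep_kernel(X, X) + 1e-6 * np.eye(10)
Xs = np.array([-1.0, 0.0, 1.0])
mu = deep_kernel(Xs, X) @ np.linalg.solve(K, y)            # GP posterior mean
print(np.round(mu, 2))
```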
Efficient Global Optimization of Expensive Black-Box Functions
TLDR: This paper introduces the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering, and shows how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule.
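The acquisition function at the heart of this algorithm is expected improvement, which has a closed form under a Gaussian predictive distribution. A minimal sketch of that formula (the kriging surrogate and the stopping rule are not shown):

```python
# Closed-form expected improvement for minimization, given a Gaussian
# predictive mean/std and the incumbent best observed value.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: a point with more uncertainty can win even if its mean is worse.
print(expected_improvement(np.array([0.10, 0.20]),
                           np.array([0.01, 0.30]), f_best=0.15))
```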