Corpus ID: 209333274

Emulation of physical processes with Emukit

@article{Paleyes2021EmulationOP,
  title={Emulation of physical processes with Emukit},
  author={Andrei Paleyes and Mark Pullin and Maren Mahsereci and Cliff McCollum and Neil D. Lawrence and Javier I. Gonz{\'a}lez},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.13293}
}
Decision making in uncertain scenarios is a ubiquitous challenge in real-world systems. Tools to deal with this challenge include simulations to gather information and statistical emulation to quantify uncertainty. The machine learning community has developed a number of methods to facilitate decision making, but so far they are scattered in multiple different toolkits, and generally rely on a fixed backend. In this paper, we present Emukit, a highly adaptable Python toolkit for enriching…
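As a concrete illustration of the workflow the abstract describes, below is a minimal sketch of a Bayesian optimization loop built with Emukit on a GPy backend. The import paths and class names (ParameterSpace, RandomDesign, GPyModelWrapper, BayesianOptimizationLoop) follow Emukit's public documentation rather than this paper; the toy objective is only a stand-in for an expensive simulator.

# Minimal sketch: Bayesian optimization with Emukit and a GPy model backend.
# Not taken from the paper; follows Emukit's documented public API.
import numpy as np
import GPy

from emukit.core import ContinuousParameter, ParameterSpace
from emukit.core.initial_designs import RandomDesign
from emukit.model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop


def objective(x: np.ndarray) -> np.ndarray:
    """Toy objective standing in for an expensive physical simulator."""
    return np.sin(3.0 * x) + x ** 2 - 0.7 * x


# Define the input space of the simulator.
space = ParameterSpace([ContinuousParameter("x", -1.0, 2.0)])

# A handful of initial simulator runs to seed the emulator.
design = RandomDesign(space)
X_init = design.get_samples(5)
Y_init = objective(X_init)

# Wrap a GPy regression model so Emukit can use it as the emulator backend.
gpy_model = GPy.models.GPRegression(X_init, Y_init)
emukit_model = GPyModelWrapper(gpy_model)

# Run the decision loop: pick a point, query the simulator, update the model.
loop = BayesianOptimizationLoop(space=space, model=emukit_model)
loop.run_loop(objective, 10)

print(loop.get_results().minimum_location)

Swapping the GPy model for another backend only requires a different model wrapper implementing Emukit's model interface; the loop itself is unchanged, which is the adaptability the abstract highlights.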

Citations

Uncertainty Aware System Identification with Universal Policies
TLDR
UncAPS is proposed, in which a Universal Policy Network (UPN) stores simulation-trained task-specific policies across the full range of environmental parameters, and robust Bayesian optimisation is then employed to craft robust policies for the given environment by combining relevant UPN policies in a DR-like fashion.
Probabilistic Numerical Methods – From Theory to Implementation
TLDR
A simple, rigorous, and unified framework for solving and learning (possibly nonlinear) differential equations (PDEs and ODEs) using Gaussian processes/kernel methods and the choice of efficient prior distributions is presented.
Transfer Learning with Gaussian Processes for Bayesian Optimization
TLDR
A novel closed-form boosted GP transfer model is developed that bridges the gap between existing approaches in terms of complexity and highlights strengths and weaknesses of the different transfer-learning methods.
Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters
TLDR
An approach to the identification of kernel hyperparameters in GP regression and related problems that sidesteps the need for costly marginal likelihoods is introduced, and a single neural model trained on synthetic data is able to generalize directly to several different unseen real-world GP use cases.
Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020
TLDR
The results and insights from the black-box optimization (BBO) challenge at NeurIPS 2020, which ran from July to October 2020, are presented.
Efficient emulation of relativistic heavy ion collisions with transfer learning
Measurements from the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) can be used to study the properties of quark-gluon plasma. Systematic constraints on these properties
Prior-guided Bayesian Optimization
TLDR
Prior-guided Bayesian Optimization (PrBO) allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions which are much less intuitive for users.
Machine Learning Simulates Agent-Based Model Towards Policy
Public policies are not intrinsically positive or negative. Rather, policies provide varying levels of effects across different recipients. Methodologically, computational modeling enables the
πBO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization
TLDR
πBO is an acquisition-function generalization that incorporates prior beliefs about the location of the optimum, in the form of a probability distribution provided by the user; it is conceptually simple and can easily be integrated with existing libraries and many acquisition functions.
...

References

Showing 1–10 of 28 references
Probabilistic numerics and uncertainty in computations
TLDR
It is shown that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance.
Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization
TLDR
Entropy Search, a Bayesian optimization algorithm that maximizes information gain from each experiment, is extended to the case of multiple information sources; the result is a principled way to automatically combine cheap but inaccurate information from simulations with expensive and accurate physical experiments in a cost-effective manner.
Bayesian Calibration of computer models
TLDR
A Bayesian calibration technique is presented that improves on the traditional approach in two respects and attempts to correct for any inadequacy of the model revealed by a discrepancy between the observed data and the model predictions from even the best-fitting parameter values.
Uncertainty analysis and other inference tools for complex computer codes
TLDR
The basic Bayesian approach to the generic problem of inference for complex computer codes is reviewed, some recent advances about the distribution of quantile functions of the uncertainty distribution are presented, and the use of runs of the computer code at different levels of complexity is discussed, making efficient use of the quicker, cruder versions of the code.
Nonmyopic active learning of Gaussian processes: an exploration-exploitation approach
TLDR
An analysis and efficient algorithms are presented that address the question of when an active learning, or sequential design, strategy will perform significantly better than sensing at an a priori specified set of locations, in the context of Gaussian processes.
Statistical emulation of climate model projections based on precomputed GCM runs
TLDR
A new approach is presented for emulating the output of a fully coupled climate model under arbitrary forcing scenarios, based on a small set of precomputed runs from the model; it captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates.
Gaussian process emulation of dynamic computer codes
TLDR
A novel iterative system is developed to build a statistical model of dynamic computer codes, which is demonstrated on a rainfall-runoff simulator.
Active Multi-Information Source Bayesian Quadrature
TLDR
This work sets the scene for active learning in Bayesian quadrature (BQ) when multiple related information sources of variable cost are accessible, and demonstrates that active multi-source BQ (AMS-BQ) allocates budget more efficiently than VBQ for learning the integral to good accuracy.
Gaussian Processes for Machine Learning
TLDR
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics, and deals with the supervised learning problem for both regression and classification.
...