Corpus ID: 235731646

Deep Gaussian Process Emulation using Stochastic Imputation

@article{Ming2021DeepGP,
  title={Deep Gaussian Process Emulation using Stochastic Imputation},
  author={Deyu Ming and Daniel Williamson and Serge Guillas},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.01590}
}
We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using stochastic imputation. By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs. This transformation yields a simple yet efficient DGP training procedure that only involves optimizations of conventional stationary GPs. In addition, the analytically tractable mean…
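As a rough illustration of the idea (a sketch of my own, not the authors' implementation), the snippet below trains a two-layer DGP by alternating a stochastic imputation step, which samples the latent layer with elliptical slice sampling, with ordinary maximum-likelihood fits of each stationary GP layer. All names (rbf_kernel, gp_neg_log_lik, elliptical_slice, train_two_layer_dgp), the isotropic RBF kernel, and the noise-free-with-jitter setup are illustrative assumptions.

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def rbf_kernel(x, x2, lengthscale, variance):
    # isotropic squared-exponential kernel between two input sets
    d = (x[:, None, :] - x2[None, :, :]) / lengthscale
    return variance * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def gp_neg_log_lik(theta, x, y, jitter=1e-6):
    # negative log marginal likelihood of one stationary GP layer,
    # with hyperparameters theta = (log lengthscale, log variance)
    lengthscale, variance = np.exp(theta)
    K = rbf_kernel(x, x, lengthscale, variance) + jitter * np.eye(len(y))
    L, low = cho_factor(K, lower=True)
    alpha = cho_solve((L, low), y)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

def fit_gp(x, y, theta0):
    # conventional stationary-GP training: optimize the marginal likelihood
    return minimize(gp_neg_log_lik, theta0, args=(x, y)).x

def elliptical_slice(w, prior_draw, log_lik):
    # one elliptical slice sampling update of the latent layer w
    log_y = log_lik(w) + np.log(np.random.rand())
    phi = np.random.uniform(0.0, 2.0 * np.pi)
    lo, hi = phi - 2.0 * np.pi, phi
    while True:
        prop = w * np.cos(phi) + prior_draw * np.sin(phi)
        if log_lik(prop) > log_y:
            return prop
        if phi < 0.0:
            lo = phi
        else:
            hi = phi
        phi = np.random.uniform(lo, hi)

def train_two_layer_dgp(x, y, n_iter=50):
    # x: (n, d) inputs, y: (n,) outputs of the computer model
    n = len(y)
    w = x[:, 0].copy()                      # crude initialization of the latent layer
    theta1, theta2 = np.zeros(2), np.zeros(2)
    for _ in range(n_iter):
        # imputation step: stochastically impute the latent layer w by ESS, using the
        # first-layer GP as the prior and the second-layer GP as the likelihood
        ls1, v1 = np.exp(theta1)
        K1 = rbf_kernel(x, x, ls1, v1) + 1e-6 * np.eye(n)
        prior_draw = np.linalg.cholesky(K1) @ np.random.randn(n)
        log_lik = lambda w_: -gp_neg_log_lik(theta2, w_[:, None], y)
        w = elliptical_slice(w, prior_draw, log_lik)
        # optimization step: two ordinary stationary-GP fits given the imputed layer
        theta1 = fit_gp(x, w, theta1)
        theta2 = fit_gp(w[:, None], y, theta2)
    return theta1, theta2, w

The alternation mirrors a stochastic-EM-style scheme: each maximization step is nothing more than conventional stationary-GP hyperparameter optimization, which is the simplification the abstract refers to; in practice one would average over multiple imputations and allow multi-dimensional latent layers.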
Citations

Probabilistic, high-resolution tsunami predictions in northern Cascadia by exploiting sequential design for efficient emulation
The potential of a full-margin rupture along the Cascadia subduction zone poses a significant threat over a populous region of North America. Previous probabilistic tsunami hazard…

References

Showing 1-10 of 43 references
Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
This work provides evidence for the non-Gaussian nature of the posterior and applies the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples, which results in significantly better predictions at a lower computational cost than its VI counterpart.
Doubly Stochastic Variational Inference for Deep Gaussian Processes
This work presents a doubly stochastic variational inference algorithm, which does not force independence between layers in Deep Gaussian processes, and provides strong empirical evidence that the inference scheme for DGPs works well in practice in both classification and regression.
Active Learning for Deep Gaussian Process Surrogates
This work transports a DGP's automatic warping of the input space and full uncertainty quantification, via a novel elliptical slice sampling (ESS) Bayesian posterior inferential scheme, through to active learning (AL) strategies that distribute runs non-uniformly in the input space, something an ordinary (stationary) GP could not do.
Deep Gaussian Processes for Regression using Approximate Expectation Propagation
A new approximate Bayesian learning scheme is developed that enables DGPs to be applied to a range of medium to large scale regression problems for the first time and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks.
Sequential Inference for Deep Gaussian Process
This paper proposes an efficient sequential inference framework for DGP, where the data is processed sequentially, along with two DGP extensions to handle heteroscedasticity and multi-task learning.
RobustGaSP: Robust Gaussian Stochastic Process Emulation in R
This package implements a marginal posterior mode estimator, for special priors and parameterizations, an estimation method that meets the robust parameter estimation criteria discussed in Gu (2016), to improve the predictive performance of the GaSP emulator.
Sparse Gaussian Processes using Pseudo-inputs
It is shown that this new Gaussian process (GP) regression model can match full GP performance with a small number M of pseudo-inputs, i.e. very sparse solutions, and that it significantly outperforms other approaches in this regime.
Computer Emulation with Nonstationary Gaussian Processes
An easily implemented nonstationary GP emulator, based on two stationary GPs with one nested into the other, is proposed, and its superior ability in handling local features and in selecting future input points from the boundaries of the input space is demonstrated.
Parallel MCMC with generalized elliptical slice sampling
A parallelizable Markov chain Monte Carlo algorithm for efficiently sampling from continuous probability distributions that can take advantage of hundreds of cores and shares information between parallel Markov chains to build a scale-location mixture of Gaussians approximation to the density function of the target distribution.
A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms
The first part of this article presents the Monte Carlo implementation of the E step of the EM algorithm. Given the current guess to the maximizer of the posterior distribution, latent data…
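To make the Monte Carlo E-step concrete, here is a tiny self-contained sketch in the same spirit (a toy model chosen for illustration, not code or an example from the cited article): the mean of a normal model with known scale is estimated from right-censored data by repeatedly imputing the censored values from the current truncated-normal conditional and then maximizing the completed-data likelihood.

import numpy as np
from scipy.stats import truncnorm

np.random.seed(0)
sigma, c = 1.0, 1.5                                   # known scale, censoring threshold
y = np.random.normal(loc=1.0, scale=sigma, size=200)  # latent complete data
observed = y[y <= c]                                  # fully observed values
n_cens = int(np.sum(y > c))                           # right-censored: only the count is kept

mu = observed.mean()                                  # starting guess for the unknown mean
for _ in range(100):
    # Monte Carlo E-step: impute censored values from N(mu, sigma^2) truncated to (c, inf)
    a = (c - mu) / sigma
    draws = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma, size=(50, n_cens))
    # M-step: maximize the completed-data likelihood, averaging over the imputations
    mu = (observed.sum() + draws.mean(axis=0).sum()) / (len(observed) + n_cens)
print(mu)                                             # settles near the true mean of 1.0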