• Corpus ID: 81978081

Deep Gaussian Processes for Multi-fidelity Modeling

@article{Cutajar2019DeepGP,
  title={Deep Gaussian Processes for Multi-fidelity Modeling},
  author={Kurt Cutajar and Mark Pullin and Andreas C. Damianou and Neil D. Lawrence and Javier I. Gonz{\'a}lez},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.07320}
}
Multi-fidelity methods are prominently used when cheaply obtained, but possibly biased and noisy, observations must be effectively combined with limited or expensive true data in order to construct reliable models. This arises both in fundamental machine learning procedures such as Bayesian optimization and in practical science and engineering applications. In this paper we develop a novel multi-fidelity model which treats layers of a deep Gaussian process as fidelity levels, and uses…
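
To make the layers-as-fidelities idea concrete, below is a minimal numpy sketch of the kind of composition the abstract describes: a first GP is fit to cheap low-fidelity data, and a second GP takes both the original input and the first layer's prediction as its input. The test functions f_low and f_high are hypothetical illustrations, and the sketch propagates only posterior means between layers, whereas the paper's model propagates full distributions; treat it as a sketch of the structure, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.1, variance=1.0):
    """Squared-exponential kernel between row-wise inputs A (n,d) and B (m,d)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_mean(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean of standard GP regression with an RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf_kernel(X_test, X_train) @ np.linalg.solve(K, y_train)

# Hypothetical low- and high-fidelity versions of the same function.
f_low = lambda x: np.sin(8.0 * np.pi * x)            # cheap, biased
f_high = lambda x: (x - np.sqrt(2.0)) * f_low(x)**2  # expensive "truth"

rng = np.random.default_rng(0)
X_lo = rng.uniform(0, 1, (50, 1)); y_lo = f_low(X_lo).ravel()   # many cheap points
X_hi = rng.uniform(0, 1, (8, 1));  y_hi = f_high(X_hi).ravel()  # few expensive points
X_test = np.linspace(0, 1, 200)[:, None]

# Layer 1: GP on the low-fidelity data only.
m_lo_at_hi = gp_mean(X_lo, y_lo, X_hi)
m_lo_at_test = gp_mean(X_lo, y_lo, X_test)

# Layer 2: GP whose input is (x, layer-1 output at x), so the higher
# fidelity level is conditioned on the level below it.
Z_hi = np.hstack([X_hi, m_lo_at_hi[:, None]])
Z_test = np.hstack([X_test, m_lo_at_test[:, None]])
m_hi = gp_mean(Z_hi, y_hi, Z_test)

rmse = np.sqrt(np.mean((m_hi - f_high(X_test).ravel())**2))
print(f"high-fidelity RMSE with 8 expensive points: {rmse:.3f}")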

Citations

Conditional Deep Gaussian Processes: Multi-Fidelity Kernel Learning

The conditional DGP model is proposed, in which the latent GPs are directly supported by the fixed lower-fidelity data and the effective kernel encodes the inductive bias for the true function while preserving compositional freedom.

Multi-fidelity modeling with different input domain definitions using deep Gaussian processes

Multi-fidelity deep Gaussian processes (MF-DGP) are extended to the case where a different parametrization is used for each fidelity, and the approach is assessed on analytical test cases and on real structural and aerodynamic problems.

Deep Multi-Fidelity Active Learning of High-dimensional Outputs

This paper develops a deep neural network-based multi-fidelity model for learning with high-dimensional outputs, which can flexibly and efficiently capture complex relationships across the outputs and fidelities to improve prediction, and proposes a mutual-information-based acquisition function that extends the predictive entropy principle.

Multi-fidelity Hierarchical Neural Processes

This work proposes Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling that shows great promise for speeding up high-dimensional complex simulations.

Active Learning for Deep Gaussian Process Surrogates

This work transports a DGP's automatic warping of the input space and full uncertainty quantification, via a novel elliptical slice sampling (ESS) Bayesian posterior inferential scheme, through to active learning (AL) strategies that distribute runs non-uniformly in the input space, something an ordinary (stationary) GP could not do.
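
Since this entry leans on elliptical slice sampling, the core ESS update (Murray, Adams and MacKay, 2010) is compact enough to sketch. This is the generic step for a latent vector with a zero-mean Gaussian prior, not the authors' DGP-specific inference scheme:

```python
import numpy as np

def elliptical_slice_step(f, log_lik, chol_K, rng):
    """One elliptical slice sampling update for latent f with prior N(0, K),
    where chol_K is the lower Cholesky factor of K and log_lik(f) returns
    the log-likelihood of the data given f."""
    nu = chol_K @ rng.standard_normal(f.shape[0])   # auxiliary draw from the prior
    log_y = log_lik(f) + np.log(rng.uniform())      # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)           # initial proposal angle
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse
        if log_lik(f_prop) > log_y:
            return f_prop                                # accepted
        # Shrink the angle bracket towards the current state and retry.
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

Repeating this update yields a Markov chain whose stationary distribution is proportional to the prior times the likelihood, with no step-size tuning required.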

Multi-resolution Multitask Gaussian Processes

This work develops shallow Gaussian process (GP) mixtures that approximate the difficult-to-estimate joint likelihood with a composite one, as well as deep GP constructions that naturally handle biases in the mean; the proposed models generalize and outperform state-of-the-art GP compositions and offer information-theoretic corrections and efficient variational approximations.

Batch Multi-Fidelity Bayesian Optimization with Deep Auto-Regressive Networks

This paper uses a set of Bayesian neural networks to construct a fully auto-regressive model that is expressive enough to capture strong yet complex relationships across all the fidelities, improving surrogate learning and optimization performance.
...

References

Showing 1-10 of 36 references

Cope with diverse data structures in multi-fidelity modeling: A Gaussian process method

Deep Multi-fidelity Gaussian Processes

We develop a novel multi-fidelity framework that goes far beyond the classical AR(1) co-kriging scheme of Kennedy and O'Hagan (2000). Our method can handle general discontinuous cross-correlations…
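
For context, the AR(1) co-kriging scheme of Kennedy and O'Hagan links fidelity level t to level t-1 linearly,

```latex
f_t(x) = \rho_{t-1}\, f_{t-1}(x) + \delta_t(x), \qquad \delta_t \sim \mathcal{GP}(0, k_t),
```

where \rho_{t-1} is a scalar and \delta_t(x) is an independent GP discrepancy. Since the cross-fidelity relationship is linear by construction, discontinuous or nonlinear cross-correlations fall outside the scheme, which is the limitation this reference targets.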

Multi-Fidelity Black-Box Optimization with Hierarchical Partitions

This work develops tree-search based multi-fidelity algorithms with theoretical guarantees on simple regret and demonstrates the performance gains of the algorithms on both real and synthetic datasets.

Variational Auto-encoded Deep Gaussian Processes

A new formulation of the variational lower bound is derived that allows most of the computation to be distributed, making it possible to handle datasets of the size of mainstream deep learning tasks.

Random Feature Expansions for Deep Gaussian Processes

A novel formulation of DGPs based on random feature expansions is trained using stochastic variational inference, yielding a practical learning framework that significantly advances the state of the art in inference for DGPs and enables accurate quantification of uncertainty.

Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling

This work puts forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex, nonlinear, space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends.
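
Concretely, the nonlinear autoregressive scheme described above replaces the scalar AR(1) link with a GP over the joint input of location and previous-fidelity output,

```latex
f_t(x) = g_t\big(x,\, f_{t-1}(x)\big), \qquad g_t \sim \mathcal{GP}(0, k_t),
```

so the cross-correlation between fidelities can itself be an arbitrary learned nonlinear function of x. Stacking these mappings across fidelity levels is the structure that the MF-DGP of the main paper treats as a single deep GP.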

Deep Gaussian processes and variational propagation of uncertainty

The results show that the developed variational methodologies improve practical applicability by enabling automatic capacity control in the models, even when data are scarce.

Large scale variable fidelity surrogate modeling

Two approaches are proposed to circumvent the computational burden of the Gaussian process regression framework: one based on the Nyström approximation of sample covariance matrices, and another based on intelligent use of a black box that can evaluate a low-fidelity function on the fly at any point of the design space.
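
The Nyström approximation mentioned here is the standard low-rank construction: with m inducing points (m ≪ n) and the corresponding kernel blocks,

```latex
K_{nn} \approx K_{nm}\, K_{mm}^{-1}\, K_{mn},
```

which cuts the O(n^3) cost of exact GP regression down to O(n m^2).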

Avoiding pathologies in very deep networks

It is shown that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit.

Manifold Gaussian Processes for regression

Manifold Gaussian Processes is a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space, making it possible to learn data representations that are useful for the overall regression task.