MFNets: data efficient all-at-once learning of multifidelity surrogates as directed networks of information sources

@article{Gorodetsky2020MFNetsDE,
  title={MFNets: data efficient all-at-once learning of multifidelity surrogates as directed networks of information sources},
  author={Alex Arkady Gorodetsky and John D. Jakeman and Gianluca Geraci},
  journal={Computational Mechanics},
  year={2020},
  volume={68},
  pages={741--758}
}
We present an approach for constructing a surrogate from ensembles of information sources of varying cost and accuracy. The multifidelity surrogate encodes connections between information sources as a directed acyclic graph, and is trained via gradient-based minimization of a nonlinear least squares objective. While the vast majority of state-of-the-art approaches assume hierarchical connections between information sources, our approach works with flexibly structured information sources that may not…
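
To make the construction concrete, here is a minimal sketch under strong simplifying assumptions: a two-node graph (a cheap parent feeding an expensive child), linear-in-parameters surrogates, a scalar edge scaling plus additive discrepancy, and synthetic data. It is not the authors' code or the general DAG formulation; all names and models are illustrative.

```python
# Minimal MFNets-flavoured sketch (illustrative only): two linear surrogates linked
# by a directed edge low -> high, with all parameters fit jointly ("all-at-once")
# by gradient-based nonlinear least squares over data from both sources.
import numpy as np
from scipy.optimize import least_squares

def basis(x):
    # simple polynomial features; the general framework allows arbitrary parameterizations
    return np.column_stack([np.ones_like(x), x, x**2])

def surrogates(theta, x):
    # theta = [low-fidelity coeffs (3), edge scaling (1), discrepancy coeffs (3)]
    c_lo, rho, c_d = theta[:3], theta[3], theta[4:]
    f_lo = basis(x) @ c_lo
    f_hi = rho * f_lo + basis(x) @ c_d      # parent contribution + discrepancy
    return f_lo, f_hi

def residuals(theta, x_lo, y_lo, x_hi, y_hi):
    # stack residuals from every information source into one least-squares objective
    r_lo = surrogates(theta, x_lo)[0] - y_lo
    r_hi = surrogates(theta, x_hi)[1] - y_hi
    return np.concatenate([r_lo, r_hi])

rng = np.random.default_rng(0)
x_lo = rng.uniform(-1, 1, 50)                                      # cheap source: many samples
x_hi = rng.uniform(-1, 1, 8)                                       # expensive source: few samples
y_lo = 1.0 + 2.0 * x_lo - x_lo**2 + 0.05 * rng.standard_normal(50)
y_hi = 1.5 * (1.0 + 2.0 * x_hi - x_hi**2) + 0.3 * x_hi             # correlated high-fidelity data

fit = least_squares(residuals, np.zeros(7), args=(x_lo, y_lo, x_hi, y_hi))
print("jointly fitted parameters:", fit.x)
```

The point the sketch tries to convey is the single stacked objective: data from every node enters one residual vector, so the abundant cheap samples inform the parameters that the expensive node shares with its parent.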

Context-aware learning of hierarchies of low-fidelity models for multi-fidelity uncertainty quantification

The proposed context-aware multi-fidelity Monte Carlo method applies to hierarchies of a wide range of types of low-fidelity models, such as sparse-grid and deep-network models, and takes into account the context in which the learned low-fidelity models will subsequently be used.

General multi-fidelity surrogate models: Framework and active learning strategies for efficient rare event simulation

A robust multi-fidelity surrogate modeling strategy is presented in which the multi-fidelity surrogate is assembled with an active learning strategy driven by an on-the-fly model adequacy assessment, set within a subset simulation framework for efficient reliability analysis.

Efficient Multifidelity Likelihood-Free Bayesian Inference with Adaptive Computational Resource Allocation

This work provides an adaptive multifidelity likelihood-free inference algorithm that learns the relationships between models at different fidelities and adapts resource allocation accordingly, and demonstrates that this algorithm produces posterior estimates with near-optimal efficiency.
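
A common flavour of this idea can be sketched as follows, with a toy pair of simulators and a fixed continuation probability eta (both assumptions for illustration); the cited algorithm's contribution, learning the model relationships and adapting the resource allocation, is not reproduced here.

```python
# Sketch of a multifidelity ABC acceptance step: the cheap simulator is always run,
# the expensive one only with probability eta, and a correction weight keeps the
# weighted estimator unbiased with respect to the high-fidelity acceptance.
import numpy as np

rng = np.random.default_rng(3)
y_obs, eps, eta = 1.0, 0.3, 0.2

sim_lo = lambda th: th + 0.2 * rng.standard_normal()     # cheap, noisier simulator
sim_hi = lambda th: th + 0.05 * rng.standard_normal()    # expensive simulator

thetas, weights = [], []
for _ in range(5000):
    th = rng.uniform(-3, 3)                              # prior draw
    a_lo = float(abs(sim_lo(th) - y_obs) < eps)          # cheap accept/reject
    w = a_lo
    if rng.random() < eta:                               # occasionally pay for the expensive check
        a_hi = float(abs(sim_hi(th) - y_obs) < eps)
        w += (a_hi - a_lo) / eta                         # keeps E[w] equal to the high-fidelity acceptance
    thetas.append(th); weights.append(w)

thetas, weights = np.array(thetas), np.array(weights)
print("posterior mean estimate:", np.sum(weights * thetas) / np.sum(weights))
```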

Improving Bayesian networks multifidelity surrogate construction with basis adaptation

The Model Forest Ensemble Kalman Filter

A possible way to make use of this collection of models in data assimilation is presented by generalizing the idea of model hierarchies into model forests: collections of high- and low-fidelity models organized into a grouping of model trees so as to capture various relationships between different models.
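
As a purely structural illustration (not the paper's implementation, and with made-up model callables), a model forest can be held as a list of trees whose nodes store a model and whose children are cheaper approximations of that model:

```python
# Hypothetical bookkeeping for a "model forest": a list of model trees, each node
# holding a propagator and a list of lower-fidelity children. No Kalman update shown.
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np

@dataclass
class ModelNode:
    name: str
    step: Callable[[np.ndarray], np.ndarray]         # advances a state vector one step
    children: List["ModelNode"] = field(default_factory=list)

fine      = ModelNode("fine-grid",         lambda x: 0.95 * x + 0.10)
coarse    = ModelNode("coarse-grid",       lambda x: 0.90 * x + 0.10)
reduced   = ModelNode("reduced-order",     lambda x: 0.90 * x)
fine.children = [coarse]; coarse.children = [reduced]
surrogate = ModelNode("learned-surrogate", lambda x: 0.93 * x + 0.12)

model_forest = [fine, surrogate]                      # two trees, two distinct relationships

def leaves(node):
    return [node] if not node.children else sum((leaves(c) for c in node.children), [])

print([leaf.name for tree in model_forest for leaf in leaves(tree)])
```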

Learning finite element convergence with the Multi-fidelity Graph Neural Network

References

An adaptive surrogate modeling based on deep neural networks for large-scale Bayesian inverse problems

This work presents an adaptive multi-fidelity surrogate modeling framework based on deep neural networks (DNNs), motivated by the fact that DNNs can potentially handle functions with limited regularity and are powerful tools for high-dimensional approximation.
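
The basic ingredient can be sketched as follows, assuming a scikit-learn MLP surrogate, a toy scalar forward map, and a plain random-walk Metropolis sampler; the adaptive, posterior-targeted refinement of the surrogate that the cited work develops is not shown.

```python
# Sketch: train a small neural-network surrogate of an "expensive" forward map and
# use it to evaluate a Gaussian likelihood inside a Metropolis sampler.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
G = lambda th: np.sin(th) + 0.5 * th                      # stand-in forward model
y_obs, sigma = float(G(0.8)) + 0.05, 0.1

th_train = rng.uniform(-2, 2, 200)[:, None]               # offline prior samples
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(th_train, G(th_train).ravel())

def log_post(th):
    resid = y_obs - surrogate.predict(np.array([[th]]))[0]
    return -0.5 * (resid / sigma) ** 2 - 0.5 * (th / 2.0) ** 2   # Gaussian prior, std 2

th, lp, chain = 0.0, log_post(0.0), []
for _ in range(2000):                                     # random-walk Metropolis
    prop = th + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        th, lp = prop, lp_prop
    chain.append(th)
print("surrogate posterior mean:", np.mean(chain[500:]))
```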

Cope with diverse data structures in multi-fidelity modeling: A Gaussian process method

Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling

A probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes is put forth that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends.
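
The autoregressive construction can be sketched with off-the-shelf Gaussian processes as follows; the synthetic benchmark is an assumption, and the sketch plugs in the low-fidelity posterior mean rather than propagating its full predictive distribution as the actual framework does.

```python
# Sketch of the nonlinear autoregressive idea: model f_hi(x) ~ g(x, f_lo(x)), i.e.
# feed the low-fidelity GP's prediction in as an extra input to the high-fidelity GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_lo = lambda x: np.sin(8 * np.pi * x)                    # cheap model
f_hi = lambda x: (x - np.sqrt(2.0)) * f_lo(x) ** 2        # nonlinear cross-correlation

x_lo = np.linspace(0, 1, 40)[:, None]                     # plentiful cheap data
x_hi = np.linspace(0, 1, 8)[:, None]                      # scarce expensive data

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True)
gp_lo.fit(x_lo, f_lo(x_lo).ravel())

z_hi = np.column_stack([x_hi, gp_lo.predict(x_hi)])       # augment inputs with the f_lo prediction
gp_hi = GaussianProcessRegressor(kernel=RBF([0.1, 1.0]), normalize_y=True)
gp_hi.fit(z_hi, f_hi(x_hi).ravel())

x_test = np.linspace(0, 1, 200)[:, None]
y_pred = gp_hi.predict(np.column_stack([x_test, gp_lo.predict(x_test)]))
```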

Optimal Model Management for Multifidelity Monte Carlo Estimation

This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models, and shows that a unique analytic solution of the model management optimization problem exists under mild conditions on the models.
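
A two-model control-variate sketch conveys the underlying estimator; the paper's actual contribution, the optimal allocation of samples across models given their costs and correlations, is not reproduced here, and the models and sample sizes below are illustrative.

```python
# Multifidelity Monte Carlo control variate (two models):
#   E[f_hi] ~ mean_n(f_hi) + alpha * ( mean_m(f_lo) - mean_n(f_lo) ),  m >> n,
# with alpha estimated from the sample covariance on the shared n inputs.
import numpy as np

rng = np.random.default_rng(2)
f_hi = lambda z: np.exp(z) + 0.1 * np.sin(20 * z)         # "expensive" model
f_lo = lambda z: np.exp(z)                                # cheap, highly correlated model

n, m = 50, 5000
z_n = rng.standard_normal(n)                              # shared inputs
z_m = np.concatenate([z_n, rng.standard_normal(m - n)])   # low-fidelity set contains the shared inputs

y_hi, y_lo_n = f_hi(z_n), f_lo(z_n)
alpha = np.cov(y_hi, y_lo_n)[0, 1] / np.var(y_lo_n, ddof=1)

est_mf = y_hi.mean() + alpha * (f_lo(z_m).mean() - y_lo_n.mean())
print("multifidelity estimate:", est_mf, "  plain MC estimate:", y_hi.mean())
```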

Multifidelity Uncertainty Quantification Using Non-Intrusive Polynomial Chaos and Stochastic Collocation

An adaptive greedy multifidelity approach is proposed in which the generalized sparse grid concept is extended to consider candidate index set refinements drawn from multiple sparse grids, and it is demonstrated that the multifidelity UQ process converges more rapidly than a single-fidelity UQ process in cases where the variance is reduced relative to the variance of the high-fidelity model.

Stochastic spectral methods for efficient Bayesian solution of inverse problems

Second-Order Corrections for Surrogate-Based Optimization with Model Hierarchies

It is demonstrated that first-order consistency can be insufficient to achieve acceptable convergence rates in practice, and new second-order additive, multiplicative, and combined corrections that can significantly accelerate convergence are presented.
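
A one-dimensional sketch of the second-order additive correction follows; the finite-difference derivatives and toy models are assumptions, and the paper also develops multiplicative and combined variants.

```python
# Correct a cheap model so that its value, slope and curvature match the expensive
# model at the current iterate x0 (second-order consistency).
import numpy as np

f_hi = lambda x: np.exp(x) * np.cos(5 * x)                # expensive model
f_lo = lambda x: (1 + x) * np.cos(5 * x)                  # cheap approximation

def derivs(f, x0, h=1e-4):
    # value, first and second derivative by central differences
    f0, fp, fm = f(x0), f(x0 + h), f(x0 - h)
    return f0, (fp - fm) / (2 * h), (fp - 2 * f0 + fm) / h**2

x0 = 0.3
(v_hi, g_hi, h_hi), (v_lo, g_lo, h_lo) = derivs(f_hi, x0), derivs(f_lo, x0)

def f_corrected(x):
    d = x - x0
    return f_lo(x) + (v_hi - v_lo) + (g_hi - g_lo) * d + 0.5 * (h_hi - h_lo) * d**2

print(f_corrected(x0) - f_hi(x0))                         # ~0: consistency at x0
```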