On transfer learning of neural networks using bi-fidelity data for uncertainty propagation

@article{De2020OnTL,
  title={On transfer learning of neural networks using bi-fidelity data for uncertainty propagation},
  author={Subhayan De and Jolene Britton and Matthew J. Reynolds and Ryan W. Skinner and Kenneth Jansen and Alireza Doostan},
  journal={arXiv preprint arXiv:2002.04495},
  year={2020}
}
Due to their high degree of expressiveness, neural networks have recently been used as surrogate models for mapping inputs of an engineering system to outputs of interest. Once trained, neural networks are computationally inexpensive to evaluate and remove the need for repeated evaluations of computationally expensive models in uncertainty quantification applications. However, given the highly parameterized construction of neural networks, especially deep neural networks, accurate training… 
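A minimal sketch of the bi-fidelity transfer-learning idea summarized above: train a network body on plentiful low-fidelity data, then refit only the output layer on a handful of expensive high-fidelity samples. The toy functions `f_lo`/`f_hi` and the fixed random-feature "pretrained body" are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy responses: the low-fidelity model is a cheap
# approximation of the expensive high-fidelity response.
f_lo = lambda x: np.sin(2 * x)                 # cheap, plentiful
f_hi = lambda x: np.sin(2 * x) + 0.3 * x**2    # expensive, scarce

def features(x, W, b):
    """Fixed random tanh layer standing in for the pretrained network body."""
    return np.tanh(x[:, None] * W + b)

W = 3.0 * rng.normal(size=20)
b = rng.normal(size=20)

# Step 1: "pretrain" -- fit the output layer on abundant low-fidelity data.
x_lo = np.linspace(-1, 1, 200)
c_lo, *_ = np.linalg.lstsq(features(x_lo, W, b), f_lo(x_lo), rcond=None)

# Step 2: transfer -- keep the body fixed, refit only the output layer
# on a small number of high-fidelity samples.
x_hi = np.linspace(-1, 1, 8)
c_hi, *_ = np.linalg.lstsq(features(x_hi, W, b), f_hi(x_hi), rcond=None)
```

The paper additionally uses ℓ1-regularization during this last-layer update; a plain least-squares refit is used here only to keep the sketch short.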

Neural Network Training Using 𝓁1-Regularization and Bi-fidelity Data

Transfer learning based multi-fidelity physics informed deep neural network

Bi-fidelity Modeling of Uncertain and Partially Unknown Systems using DeepONets

This paper proposes a bi-fidelity modeling approach for complex physical systems, where the discrepancy between the true system’s response and a low-fidelity response in the presence of a small training dataset is modeled using a deep operator network (DeepONet), a neural network architecture suitable for approximating nonlinear operators.
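For orientation, the DeepONet architecture mentioned here combines a branch net (encoding the sampled input function) and a trunk net (encoding the query location) via a dot product. The numpy sketch below shows only that structure with untrained random weights; all sizes and the two-layer MLP are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, weights):
    """Minimal MLP forward pass: tanh hidden layers, linear output."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return h @ W + b

def init(sizes):
    """Random (untrained) weights for an MLP with the given layer sizes."""
    return [(rng.normal(scale=0.5, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

m, p = 32, 16               # input-function sensor count, latent dimension
branch = init([m, 40, p])   # encodes the input function sampled at m sensors
trunk = init([1, 40, p])    # encodes a 1-D query location y

u = rng.normal(size=(5, m))        # batch of 5 sampled input functions
y = np.linspace(0, 1, 7)[:, None]  # 7 query locations

# DeepONet output: G(u)(y) ~ sum_k branch_k(u) * trunk_k(y)
G = mlp(u, branch) @ mlp(y, trunk).T   # shape (batch, n_queries)
```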

Multi-fidelity wavelet neural operator with application to uncertainty quantification

A new framework based on the wavelet neural operator, capable of learning from a multi-fidelity dataset, is developed; its learning capability is demonstrated on problems that require effective correlation learning between the two fidelities for surrogate construction.

Multi-fidelity Hierarchical Neural Processes

This work proposes Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling that shows great promise for speeding up high-dimensional complex simulations.

Multifidelity data fusion in convolutional encoder/decoder networks

Multi-fidelity Hierarchical Neural Processes for Climate Modeling

  • D. Wu
  • Computer Science
  • 2022
Multi-fidelity Hierarchical Neural Processes, a unified neural latent variable model for multi-fidelity surrogate modeling, shows great promise for speeding up high-dimensional climate simulations and achieving competitive performance in terms of accuracy and uncertainty estimation.

Quadrature Sampling of Parametric Models with Bi-fidelity Boosting

A novel boosting approach is presented that leverages cheaper, lower-fidelity data of the problem at hand to identify the best sketch among a set of candidate sketches, and a bound is derived on the residual norm of the BFB sketched solution relating it to its ideal, but computationally expensive, high-fidelity boosted counterpart.

References

Showing 1–10 of 75 references

Fidelity-Weighted Learning

Fidelity-weighted learning (FWL) is proposed, a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data that makes better use of strong and weak labels, and leads to better task-dependent data representations.

Review of multi-fidelity models

Time savings are found to be highly problem-dependent, with MFM methods providing savings of up to 90%; guidelines for authors to present their MFM savings in a way that is useful to future MFM users are included.

Characterizing and Avoiding Negative Transfer

A novel technique is proposed to circumvent negative transfer by filtering out unrelated source data based on adversarial networks, which is highly generic and can be applied to a wide range of transfer learning algorithms.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
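The Adam update described above can be written out in a few lines. The sketch below follows the standard form of the algorithm (first- and second-moment estimates with bias correction), applied to a toy 1-D quadratic; the learning rate and iteration count are arbitrary choices for illustration.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: adaptive estimates of lower-order moments."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2       # second-moment estimate
    m_hat = m / (1 - b1**t)               # bias correction
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 10001):
    x, m, v = adam_step(x, 2 * (x - 3), m, v, t)
```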

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

Neural Ranking Models with Weak Supervision

This paper proposes to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any external resources, and suggests that supervised neural ranking models can greatly benefit from pre-training on large amounts of weakly labeled data that can be easily obtained from unsupervised IR models.

Bi-fidelity stochastic gradient descent for structural optimization under uncertainty

The results show that the proposed bi-fidelity approach to SGD can improve convergence, and two analytical proofs are provided that establish linear convergence of the two algorithms under appropriate assumptions.

TensorFlow: A system for large-scale machine learning

The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.
...