Meta-learning PINN loss functions

@article{Psaros2022MetalearningPL,
  title={Meta-learning PINN loss functions},
  author={Apostolos F. Psaros and Kenji Kawaguchi and George Em Karniadakis},
  journal={J. Comput. Phys.},
  year={2022},
  volume={458},
  pages={111121}
}
Training multi-objective/multi-task collocation physics-informed neural network with student/teachers transfer learnings
This paper presents a PINN training framework that employs (1) pre-training steps that accelerate and improve the robustness of physics-informed neural network training with auxiliary data; a sketch of this pre-train-then-fine-tune idea follows below.
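As a rough illustration of the pre-train-then-fine-tune idea, the sketch below (not taken from the paper; the toy 1-D Poisson problem, network sizes, and schedule are assumptions) first fits a network to auxiliary data and then fine-tunes it on the PDE residual computed with automatic differentiation:

```python
# Hypothetical two-stage PINN training: supervised pre-training on auxiliary
# data, then physics-informed fine-tuning on the PDE residual (illustrative only).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stage 1: pre-train on auxiliary data (e.g. coarse solver output or measurements).
x_aux = torch.linspace(0, 1, 50).unsqueeze(1)
u_aux = torch.sin(torch.pi * x_aux)          # placeholder auxiliary targets
for _ in range(2000):
    opt.zero_grad()
    ((net(x_aux) - u_aux) ** 2).mean().backward()
    opt.step()

# Stage 2: physics-informed fine-tuning on the residual of u'' = f plus boundary terms.
def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = -(torch.pi ** 2) * torch.sin(torch.pi * x)   # source term of the toy problem
    return d2u - f

x_col = torch.rand(200, 1)                   # collocation points
x_bc = torch.tensor([[0.0], [1.0]])          # zero Dirichlet boundaries
for _ in range(2000):
    opt.zero_grad()
    loss = (pde_residual(x_col) ** 2).mean() + (net(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
```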
Machine Learning in Heterogeneous Porous Materials
TLDR
This chapter discusses multi-scale modeling of heterogeneous porous and fractured media via machine learning and offers recommendations to advance the field over the next ten years.
Deep Random Vortex Method for Simulation and Inference of Navier-Stokes Equations
TLDR
The Deep Random Vortex Method (DRVM) combines a neural network with a random vortex dynamics system equivalent to the Navier-Stokes equations and significantly outperforms existing PINN methods.
A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks
TLDR
It is shown that the proposed adaptive sampling methods of RAD and RAR-D significantly improve the accuracy of PINNs with fewer residual points for both forward and inverse problems.
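A hedged sketch of the residual-based sampling idea follows; the exact RAD/RAR-D formulas are given in the paper, and the density exponent k, offset c, and candidate-pool size below are assumed values. The recipe is to evaluate the PDE residual on a dense candidate pool and draw new collocation points with probability proportional to a power of the residual.

```python
# Illustrative residual-based adaptive sampling for PINN collocation points.
# The sampling density p(x) ~ |r(x)|^k / mean(|r|^k) + c follows the general
# RAD recipe; k, c, and the pool size here are assumed, not the paper's defaults.
import torch

def adaptive_resample(residual_fn, n_points, k=1.0, c=1.0, pool_size=10000):
    pool = torch.rand(pool_size, 1)                       # dense candidate pool in [0, 1]
    r = residual_fn(pool).abs().detach().squeeze(-1)      # |PDE residual| at each candidate
    weights = r.pow(k) / r.pow(k).mean() + c              # unnormalised sampling density
    idx = torch.multinomial(weights, n_points, replacement=False)
    return pool[idx]                                       # new collocation points
```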
DRVN (Deep Random Vortex Network): A new physics-informed machine learning method for simulating and inferring incompressible fluid flows
TLDR
The deep random vortex network (DRVN), a novel physics-informed framework for simulating and inferring the incompressible Navier–Stokes equations, achieves a two-orders-of-magnitude improvement in training time with significantly more precise estimates.
On NeuroSymbolic Solutions for PDEs
TLDR
A neuro-symbolic, domain-splitting assisted approach is explored for approximating complex PDE solution functions.
Accelerating numerical methods by gradient-based meta-solving
TLDR
This paper formulates a general framework to describe these problems, proposes a gradient-based algorithm to solve them in a unified way, and demonstrates the performance and versatility of the method through theoretical analysis and numerical experiments.

References

Meta Learning via Learned Loss
TLDR
This paper presents a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures, and develops a pipeline for “meta-training” such loss functions, targeted at maximizing the performance of the model trained under them.
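A minimal sketch of such a meta-training pipeline is given below, assuming PyTorch 2.x (torch.func.functional_call) and a toy regression task; the architecture of the parametric loss and all hyperparameters are illustrative, not the paper's. The learned loss drives an inner update of the model, and the loss parameters are updated so that the task loss after that update decreases.

```python
# Illustrative meta-training of a parametric loss, in the spirit of learned-loss
# meta-learning: an inner step uses the learned loss, an outer step backpropagates
# the task loss through that step into the loss parameters. Names are illustrative.
import torch
from torch.func import functional_call

model = torch.nn.Linear(1, 1)
learned_loss = torch.nn.Sequential(          # parametric loss L_phi(pred, target)
    torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1),
)
meta_opt = torch.optim.Adam(learned_loss.parameters(), lr=1e-3)
inner_lr = 1e-2

for _ in range(100):                          # meta-training iterations
    x = torch.randn(32, 1)
    y = 3.0 * x + 0.5                         # toy regression task

    # Inner step: update the model with gradients of the learned loss.
    params = dict(model.named_parameters())
    pred = functional_call(model, params, (x,))
    inner_loss = learned_loss(torch.cat([pred, y], dim=1)).mean()
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    new_params = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

    # Outer step: the task loss (here MSE) of the updated model is backpropagated
    # through the inner step into the learned-loss parameters.
    task_loss = ((functional_call(model, new_params, (x,)) - y) ** 2).mean()
    meta_opt.zero_grad()
    task_loss.backward()
    meta_opt.step()
```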
Meta-Learning in Neural Networks: A Survey
TLDR
A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today and surveys promising applications and successes of meta-learning, such as few-shot learning and reinforcement learning.
On First-Order Meta-Learning Algorithms
TLDR
A family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates, including Reptile, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task.
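Since the Reptile update is described explicitly above, a toy sketch is easy to give; the sine-regression task sampler, network, and step sizes below are assumptions, not the paper's settings.

```python
# Minimal Reptile-style meta-learning sketch: sample a task, train a copy of the
# model on it, then move the shared initialization towards the adapted weights.
import copy
import torch

def sample_task():
    """Toy sine-regression task with random amplitude and phase."""
    a, p = torch.rand(1) * 4 + 1, torch.rand(1) * torch.pi
    x = torch.rand(32, 1) * 10 - 5
    return x, a * torch.sin(x + p)

meta_model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1),
)
meta_step_size, inner_steps = 0.1, 10

for _ in range(1000):                              # outer (meta) iterations
    x, y = sample_task()
    model = copy.deepcopy(meta_model)              # start from the shared initialization
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(inner_steps):                   # ordinary first-order training on the task
        opt.zero_grad()
        ((model(x) - y) ** 2).mean().backward()
        opt.step()
    # Reptile update: interpolate the initialization towards the adapted weights.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), model.parameters()):
            p_meta += meta_step_size * (p_task - p_meta)
```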
Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment
TLDR
This work proposes a sample efficient reinforcement learning approach for adapting the loss dynamically during training and empirically shows how this formulation improves performance by simultaneously optimizing the evaluation metric and smoothing the loss landscape.
DeepXDE: A Deep Learning Library for Solving Differential Equations
TLDR
An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, and a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
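A minimal script in the spirit of the library's documented 1-D Poisson example is sketched below; the module paths and argument names are assumed from the DeepXDE documentation and may differ between versions. It shows how the PDE residual, built with automatic differentiation, becomes part of the training loss.

```python
# Sketch of a DeepXDE-style solver for u''(x) = 2 on [0, 1] with zero Dirichlet
# boundaries, following the library's documented Poisson example (API assumed).
import deepxde as dde

def pde(x, u):
    # The PDE residual; automatic differentiation embeds it into the loss.
    du_xx = dde.grad.hessian(u, x)
    return du_xx - 2

geom = dde.geometry.Interval(0, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda _, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=64, num_boundary=2)

net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=10000)
```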
Reward Shaping via Meta-Learning
TLDR
A general meta-learning framework is proposed to automatically learn the efficient reward shaping on newly sampled tasks, assuming only shared state space but not necessarily action space, and derives the theoretically optimal reward shaping in terms of credit assignment in model-free RL.
Meta-Learning with Implicit Gradients
TLDR
Theoretically, it is proved that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost.
Learning to Learn: Meta-Critic Networks for Sample Efficient Learning
TLDR
A meta-critic approach to meta-learning is proposed: an action-value function neural network that learns to criticise any actor trying to solve any specified task, acting as a trainable, task-parametrised loss generator.
Generalized Inner Loop Meta-Learning
TLDR
This paper gives a formalization of a shared pattern of approximating the solution to a nested optimization problem, which it calls GIMLI, proves its general requirements, and derives a general-purpose algorithm for implementing similar approaches.
...