## 8 Citations

Training multi-objective/multi-task collocation physics-informed neural network with student/teachers transfer learnings

- Computer Science, ArXiv
- 2021

This paper presents a PINN training framework that employs (1) pre-training steps that accelerate and improve the robustness of training physics-informed neural networks with auxiliary data…

Machine Learning in Heterogeneous Porous Materials

- Computer Science, ArXiv
- 2022

This chapter discusses multi-scale modeling in heterogeneous porous materials via ML in porous and fractured media and recommendations to advance the field in ten years.

Deep Random Vortex Method for Simulation and Inference of Navier-Stokes Equations

- Computer Science, ArXiv
- 2022

The Deep Random Vortex Method (DRVM) is proposed, which combines a neural network with a random vortex dynamics system equivalent to the Navier-Stokes equation and significantly outperforms existing PINN methods.

A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks

- Computer Science, Mathematics, ArXiv
- 2022

It is shown that the proposed adaptive sampling methods RAD and RAR-D significantly improve the accuracy of PINNs with fewer residual points for both forward and inverse problems.
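The core idea of residual-based adaptive sampling can be sketched as follows: draw collocation points from a candidate pool with probability that grows with the PDE residual magnitude. This is a minimal illustrative sketch, not the paper's implementation; the `rad_sample` function, the `k`/`c` weighting form, and the toy residual are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def rad_sample(residual_fn, pool, n, k=1.0, c=1.0):
    # RAD-style sampling sketch: probability proportional to
    # |r|^k / mean(|r|^k) + c, so high-residual regions receive more
    # collocation points while c preserves coverage elsewhere.
    r = np.abs(residual_fn(pool)) ** k
    p = r / r.mean() + c
    p /= p.sum()
    idx = rng.choice(len(pool), size=n, replace=False, p=p)
    return pool[idx]

# Toy residual with a sharp peak near x = 0.8 (illustrative only).
pool = np.linspace(0.0, 1.0, 1000)
residual = lambda x: np.exp(-50.0 * (x - 0.8) ** 2)
points = rad_sample(residual, pool, n=200)
frac_near_peak = np.mean(points > 0.6)  # exceeds the uniform baseline of 0.4
```

With the additive constant `c`, the sampler mixes a uniform component with the residual-weighted one, so low-residual regions are never starved of points.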

DRVN (Deep Random Vortex Network): A new physics-informed machine learning method for simulating and inferring incompressible fluid flows

- Computer Science
- 2022

The deep random vortex network (DRVN), a novel physics-informed framework for simulating and inferring the incompressible Navier–Stokes equations, achieves a two-orders-of-magnitude improvement in training time while delivering precise estimates.

On NeuroSymbolic Solutions for PDEs

- Psychology, ArXiv
- 2022

A domain-splitting assisted approach, based on the LaSalle-Bouchut inequality, is proposed to approximate complex functions.

Accelerating numerical methods by gradient-based meta-solving

- Computer Science, ArXiv
- 2022

This paper formulates a general framework to describe these problems, and proposes a gradient-based algorithm to solve them in a unified way, and demonstrates the performance and versatility of this method through theoretical analysis and numerical experiments.

## References

Showing 1-10 of 36 references

Meta Learning via Learned Loss

- Computer Science, 2020 25th International Conference on Pattern Recognition (ICPR)
- 2021

This paper presents a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures, and develops a pipeline for “meta-training” such loss functions, targeted at maximizing the performance of the model trained under them.

Meta-Learning in Neural Networks: A Survey

- Computer Science, IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2021

A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today and surveys promising applications and successes of meta-learning such as few-shot learning and reinforcement learning.

On First-Order Meta-Learning Algorithms

- Computer Science, ArXiv
- 2018

A family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates, including Reptile, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task.
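Reptile's loop (sample a task, train on it, then pull the initialization towards the task-adapted weights) is simple enough to sketch. The quadratic toy task and all names below are illustrative assumptions, not the paper's code; tasks share an optimum near 1.0 so the meta-learned initialization should drift there.

```python
import numpy as np

rng = np.random.default_rng(0)

class QuadraticTask:
    # Toy task with loss(w) = 0.5 * ||w - target||^2, so grad(w) = w - target.
    def __init__(self, target):
        self.target = target
    def grad(self, w):
        return w - self.target

def sample_task():
    # Task targets cluster around a shared optimum at 1.0 (illustrative setup).
    return QuadraticTask(1.0 + 0.1 * rng.standard_normal(2))

def reptile_step(params, task, inner_steps=10, inner_lr=0.1, meta_lr=0.5):
    adapted = params.copy()
    for _ in range(inner_steps):
        adapted = adapted - inner_lr * task.grad(adapted)  # inner-loop SGD on the task
    return params + meta_lr * (adapted - params)           # move init towards adapted weights

params = np.zeros(2)
for _ in range(500):
    params = reptile_step(params, sample_task())
# params drifts towards the shared optimum near 1.0
```

Note that the meta-update uses only the difference between adapted and initial weights, so no second-order derivatives are required — the first-order property the paper emphasizes.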

Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment

- Computer Science, ICML
- 2019

This work proposes a sample efficient reinforcement learning approach for adapting the loss dynamically during training and empirically shows how this formulation improves performance by simultaneously optimizing the evaluation metric and smoothing the loss landscape.

DeepXDE: A Deep Learning Library for Solving Differential Equations

- Computer Science, AAAI Spring Symposium: MLPS
- 2020

An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, and a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
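The way a PINN embeds a PDE into the loss can be illustrated with a one-parameter ODE example. This is a minimal sketch, not DeepXDE's API: the derivative is hand-coded as a stand-in for automatic differentiation, and the ansatz `u(x; a) = exp(a*x)` for the ODE `u' = -u`, `u(0) = 1` is an assumption chosen so the exact solution (`a = -1`) is recoverable.

```python
import numpy as np

# Candidate solution family u(x; a) = exp(a * x) for u'(x) = -u(x), u(0) = 1.
def u(x, a):
    return np.exp(a * x)

def du_dx(x, a):
    return a * np.exp(a * x)  # exact derivative, standing in for autodiff

def pinn_loss(a, xs):
    residual = du_dx(xs, a) + u(xs, a)  # PDE residual embedded in the loss
    bc = u(0.0, a) - 1.0                # boundary-condition penalty term
    return np.mean(residual**2) + bc**2

xs = np.linspace(0.0, 1.0, 32)              # residual (collocation) points
candidates = np.linspace(-2.0, 0.0, 201)    # grid search over the single parameter
best = min(candidates, key=lambda a: pinn_loss(a, xs))
# best is close to -1.0, recovering the analytic solution u(x) = exp(-x)
```

In a real PINN the scalar `a` is replaced by network weights and the grid search by gradient descent, but the loss has the same two ingredients: a mean-squared PDE residual over collocation points plus boundary/initial-condition terms.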

Reward Shaping via Meta-Learning

- Computer Science, ArXiv
- 2019

A general meta-learning framework is proposed to automatically learn efficient reward shaping on newly sampled tasks, assuming only a shared state space but not necessarily a shared action space, and the theoretically optimal reward shaping in terms of credit assignment in model-free RL is derived.

Adaptive activation functions accelerate convergence in deep and physics-informed neural networks

- Computer Science, J. Comput. Phys.
- 2020

Meta-Learning with Implicit Gradients

- Computer Science, NeurIPS
- 2019

Theoretically, it is proved that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that required to compute a single inner-loop gradient, and with no overall increase in the total computational cost.

Learning to Learn: Meta-Critic Networks for Sample Efficient Learning

- Computer Science, ArXiv
- 2017

A meta-critic approach to meta-learning is proposed: an action-value function neural network that learns to criticise any actor trying to solve any specified task, yielding a trainable task-parametrised loss generator.

Generalized Inner Loop Meta-Learning

- Computer Science, ArXiv
- 2019

This paper gives a formalization of a shared pattern of approximating the solution to a nested optimization problem, which it calls GIMLI, proves its general requirements, and derives a general-purpose algorithm for implementing such approaches.