Corpus ID: 226281776

Sparsely Constrained Neural Networks for Model Discovery of PDEs

@article{Both2021SparselyCN,
  title={Sparsely Constrained Neural Networks for Model Discovery of PDEs},
  author={Gert-Jan Both and Remy Kusters},
  journal={ArXiv},
  year={2021},
  volume={abs/2011.04336}
}
Sparse regression on a library of candidate features has emerged as the prime method for discovering the PDE underlying a spatio-temporal dataset. Because these features consist of higher-order derivatives, model discovery is typically limited to low-noise, densely sampled datasets due to the errors inherent in numerical differentiation. Neural network-based approaches circumvent this limitation, but to date have ignored advances in sparse regression algorithms. In this paper we present a modular framework that… 
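To make the pipeline in the abstract concrete, below is a minimal sketch of the generic neural-network-plus-sparse-regression approach it describes: a network interpolates the noisy data, automatic differentiation supplies the derivative library, and an L1 penalty stands in for the sparse regression step. This is an illustration under stated assumptions, not the paper's implementation; the library terms, network size, synthetic data and hyperparameters are all hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not the authors' code): fit u(x, t) with a small
# MLP, build a candidate library via automatic differentiation, and recover a
# sparse coefficient vector xi with an L1 penalty standing in for the
# sparse-regression step.
net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def time_deriv_and_library(xt):
    """Return u_t and candidate terms [1, u, u_x, u_xx, u*u_x] at points xt."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    theta = torch.cat([torch.ones_like(u), u, u_x, u_xx, u * u_x], dim=1)
    return u_t, theta

# Synthetic stand-in for noisy measurements of a traveling wave u = sin(x - t).
xt = torch.rand(256, 2)                                  # columns: x, t
u_obs = torch.sin(xt[:, :1] - xt[:, 1:]) + 0.01 * torch.randn(256, 1)

xi = torch.zeros(5, 1, requires_grad=True)               # library coefficients
opt = torch.optim.Adam(list(net.parameters()) + [xi], lr=2e-3)
for _ in range(5000):
    opt.zero_grad()
    u_t, theta = time_deriv_and_library(xt)
    loss = ((net(xt) - u_obs) ** 2).mean()               # data fit
    loss = loss + ((u_t - theta @ xi) ** 2).mean()       # PDE residual
    loss = loss + 1e-4 * xi.abs().sum()                  # sparsity (L1)
    loss.backward()
    opt.step()

# Keep only clearly non-zero terms as the discovered equation for u_t.
terms = ["1", "u", "u_x", "u_xx", "u*u_x"]
print({t: round(float(c), 3) for t, c in zip(terms, xi.detach().squeeze()) if abs(c) > 0.1})
```

After training, thresholding xi yields the recovered equation; judging from the abstract, the paper's modular framework is designed to swap more advanced sparse regression algorithms into exactly this step.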

Citations

Robust Data-Driven Discovery of Partial Differential Equations under Uncertainties
TLDR
Results of numerical case studies indicate that the governing PDEs of many canonical dynamical systems can be correctly identified using the proposed ψ-PDE method with highly noisy data.
Fully differentiable model discovery
TLDR
This paper reinterprets PINNs as multitask models, applies uncertainty-based multitask learning, shows that this leads to a natural framework for including Bayesian regression techniques, and builds a robust model discovery algorithm using SBL (a sketch of the uncertainty weighting follows this list).
Model discovery in the sparse sampling regime
TLDR
This work investigates how deep learning can improve model discovery of partial differential equations when the spacing between sensors is large and the samples are not placed on a grid, and shows how physics-informed neural network interpolation and automatic differentiation give a better fit to the data and its spatio-temporal derivatives than classical spline interpolation and numerical differentiation.
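The "Fully differentiable model discovery" citation above weighs its training objectives with multitask uncertainty. As a rough illustration of that general technique (homoscedastic uncertainty weighting in the style of Kendall, Gal and Cipolla, 2018; not that paper's actual implementation), each task loss can be scaled by a learned precision:

```python
import torch

# Hypothetical sketch: uncertainty-weighted multitask loss for a PINN with a
# data-fit task and a PDE-residual task. log_sigma holds one learned log
# standard deviation per task.
log_sigma = torch.zeros(2, requires_grad=True)

def multitask_loss(mse_data, mse_residual):
    losses = torch.stack([mse_data, mse_residual])
    # Scale each loss by 1 / (2 * sigma_i^2); the +log(sigma_i) term stops
    # the trivial solution of sending every sigma_i to infinity.
    return (0.5 * losses * torch.exp(-2 * log_sigma) + log_sigma).sum()

loss = multitask_loss(torch.tensor(0.4), torch.tensor(1.3))  # toy values
```

In practice log_sigma is simply appended to the optimizer's parameter list, so the task weights are learned jointly with the network.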

References

Showing 1-10 of 51 references
PyTorch: An Imperative Style, High-Performance Deep Learning Library
  • 2019
API design for machine learning software: experiences from the scikit-learn project
  • 2013
TLDR
The simple and elegant interface shared by all learning and processing units in the Scikit-learn library is described and its advantages in terms of composition and reusability are discussed.
DeepMoD: Deep learning for model discovery in noisy data
A Unified Sparse Optimization Framework to Learn Parsimonious Physics-Informed Models From Data
TLDR
A flexible machine-learning framework for learning governing models of physical systems from data, addressing three open challenges in scientific problems and data sets: robust handling of outliers and corrupt data within noisy sensor measurements, parametric dependencies in candidate library functions, and the imposition of physical constraints.
Array programming with NumPy
TLDR
This work reviews how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data.
Deep learning of physical laws from scarce data
TLDR
This work introduces a novel physics-informed deep learning framework to discover governing partial differential equations (PDEs) from scarce and noisy data for nonlinear spatiotemporal systems and shows the potential for closed-form model discovery in practical applications where large and accurate datasets are intractable to capture.
Digital Twin: Values, Challenges and Enablers From a Modeling Perspective
TLDR
This work reviews the recent status of methodologies and techniques for constructing digital twins, mostly from a modeling perspective, providing detailed coverage of current challenges and enabling technologies along with recommendations and reflections for various stakeholders.
Discovering physical concepts with neural networks
TLDR
This work models a neural network architecture after the human physical reasoning process, which has similarities to representation learning, and applies this method to toy examples to show that the network finds the physically relevant parameters, exploits conservation laws to make predictions, and can help to gain conceptual insights.
Implicit Neural Representations with Periodic Activation Functions
TLDR
This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.