Corpus ID: 235593001

Sparsistent Model Discovery

Georges Tod, Gert-Jan Both, Remy Kusters
Discovering the partial differential equations underlying spatio-temporal datasets from very limited observations is of paramount interest in many scientific fields. However, it remains an open question when model discovery algorithms based on sparse regression can actually recover the underlying physical processes. We trace the poor performance of Lasso-based model discovery algorithms back to the Lasso's potential variable selection inconsistency: even if the true model is… 
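As a concrete illustration of the sparse-regression setting the abstract refers to, the sketch below fits a Lasso by iterative soft-thresholding (ISTA) on a synthetic sparse problem; the data, regularization strength, and solver choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=5000):
    """Minimise 0.5/n * ||y - Xw||^2 + lam * ||w||_1 via ISTA
    (gradient step followed by soft-thresholding)."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return w

# Toy sparse regression: 10 candidate terms, only two truly active.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[[1, 4]] = [1.5, -2.0]
y = X @ true_w + 0.01 * rng.normal(size=n)
w_hat = lasso_ista(X, y, lam=0.05)
```

With well-conditioned, near-orthogonal columns the Lasso recovers the correct support (with a small shrinkage bias on the active coefficients); the paper's point is that with a correlated candidate library, as is typical for PDE terms, this selection can become inconsistent.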


Discovering PDEs from Multiple Experiments
A randomised adaptive group Lasso sparsity estimator is introduced to promote grouped sparsity and is implemented in a deep-learning-based PDE discovery framework, creating a learning bias that encodes the a priori assumption that all experiments can be explained by the same underlying PDE terms with potentially different coefficients.
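The grouped sparsity described above rests on the block soft-thresholding proximal operator, which shrinks a whole group of coefficients together and zeroes the entire group when its norm falls below the threshold. A minimal sketch, where grouping the same PDE term across two "experiments" is an illustrative assumption rather than the paper's implementation:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of lam * sum_g ||w_g||_2: shrinks each group
    toward zero and kills the whole group when its norm is below lam."""
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = w[g] * (1.0 - lam / norm)
    return out

# Coefficients of the same library term across two experiments form one group.
w = np.array([0.9, 1.1, 0.05, -0.04])
groups = [[0, 1], [2, 3]]   # term A across experiments, term B across experiments
shrunk = group_soft_threshold(w, groups, lam=0.2)
```

The second group is eliminated as a whole, which is exactly the behaviour that lets all experiments agree on a shared set of active PDE terms.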


Machine Discovery of Partial Differential Equations from Spatiotemporal Data
The study presents a general framework for discovering underlying Partial Differential Equations (PDEs) from measured spatiotemporal data, built on recent developments in Sparse Bayesian Learning, which enforces sparsity in the to-be-identified PDEs and balances model complexity against fitting error with theoretical guarantees.
Deep learning of physical laws from scarce data
This work introduces a novel physics-informed deep learning framework to discover governing partial differential equations (PDEs) from scarce and noisy data for nonlinear spatiotemporal systems and shows the potential for closed-form model discovery in practical applications where large and accurate datasets are intractable to capture.
Stability selection enables robust learning of partial differential equations from limited noisy data
This work proposes a stability-based model selection scheme to determine the level of regularization required for reproducible recovery of the underlying PDE, and shows that in particular the combination of stability selection with the iterative hard-thresholding algorithm from compressed sensing provides a fast, parameter-free, and robust computational framework for PDE inference that outperforms previous algorithmic approaches with respect to recovery accuracy, amount of data required and robustness to noise.
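The stability-selection scheme described above can be sketched as repeatedly subsampling the data, running a hard-thresholded least-squares fit on each subsample, and recording how often each library term survives; the code below is a toy illustration under assumed parameters (subsample fraction, threshold), not the authors' implementation.

```python
import numpy as np

def hard_threshold_lstsq(X, y, tol, n_rounds=10):
    """Sequentially thresholded least squares: solve, zero small
    coefficients, and re-solve on the surviving terms."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_rounds):
        small = np.abs(w) < tol
        w[small] = 0.0
        big = ~small
        if big.any():
            w[big] = np.linalg.lstsq(X[:, big], y, rcond=None)[0]
    return w

def stability_selection(X, y, tol, n_sub=50, frac=0.5, seed=0):
    """Selection frequency of each library term over random subsamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts += np.abs(hard_threshold_lstsq(X[idx], y[idx], tol)) > 0
    return counts / n_sub

# Toy problem: 8 candidate terms, two truly active.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
w_true = np.zeros(8)
w_true[[0, 3]] = [1.0, -0.7]
y = X @ w_true + 0.05 * rng.normal(size=300)
freq = stability_selection(X, y, tol=0.2)
```

Terms with selection frequency near 1 across subsamples are kept; terms that only appear sporadically are rejected, which is what makes the recovery reproducible rather than dependent on one fit.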
Data-driven discovery of partial differential equations
The sparse regression method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation.
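As a toy version of the diffusion-equation benchmark mentioned above, the sketch below builds a candidate library from numerical derivatives of an analytic heat-equation solution and recovers u_t = u_xx by thresholded least squares; the library contents and threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Analytic solution of the heat equation u_t = u_xx (two Fourier modes).
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
t = np.linspace(0, 1, 64)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-T) * np.sin(X) + np.exp(-4 * T) * np.sin(2 * X)

# Numerical derivatives on the space-time grid.
u_t = np.gradient(u, t, axis=1)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)

# Candidate library Theta; fit u_t = Theta @ xi, then hard-threshold.
Theta = np.stack(
    [u.ravel(), u_x.ravel(), u_xx.ravel(), (u * u_x).ravel()], axis=1
)
xi = np.linalg.lstsq(Theta, u_t.ravel(), rcond=None)[0]
xi[np.abs(xi) < 0.1] = 0.0   # keep only clearly active terms
```

Only the u_xx coefficient survives the threshold, at a value close to the true diffusion constant of 1; with noisy data, the derivative estimation step becomes the main source of error.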
Learning partial differential equations via data discovery and sparse optimization
  • H. Schaeffer
  • Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2017
This work develops a learning algorithm that identifies the terms in the underlying partial differential equations and approximates their coefficients using only data, employing sparse optimization to perform feature selection and parameter estimation.
Robust discovery of partial differential equations in complex situations
Results show that the proposed R-DLGA is able to calculate derivatives accurately with the optimization of PINN and possesses surprising robustness to complex situations, including sparse data with high noise, high-order derivatives, and shock waves.
Statistical Learning with Sparsity: The Lasso and Generalizations
Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
Data-Driven Identification of Parametric Partial Differential Equations
It is shown that group sequentially thresholded ridge regression outperforms group LASSO in identifying the fewest terms in the PDE along with their parametric dependency, and the method is demonstrated on four canonical models with and without the introduction of noise.