# Linking Machine Learning with Multiscale Numerics: Data-Driven Discovery of Homogenized Equations

@article{Arbabi2020LinkingML, title={Linking Machine Learning with Multiscale Numerics: Data-Driven Discovery of Homogenized Equations}, author={Hassan Arbabi and J. E. Bunder and Giovanni Samaey and Anthony J. Roberts and Ioannis G. Kevrekidis}, journal={ArXiv}, year={2020}, volume={abs/2008.11276} }

The data-driven discovery of partial differential equations (PDEs) consistent with spatiotemporal data is experiencing a rebirth in machine learning research. Training deep neural networks to learn such data-driven partial differential operators requires extensive spatiotemporal data. For learning coarse-scale PDEs from computational fine-scale simulation data, the training data collection process can be prohibitively expensive. We propose to transformatively facilitate this training data…

## 13 Citations

A novel sequential method to train physics informed neural networks for Allen Cahn and Cahn Hilliard equations

- Computer Methods in Applied Mechanics and Engineering
- 2022

Explicit physics-informed neural networks for nonlinear closure: The case of transport in tissues

- Computer Science, Physics
- J. Comput. Phys.
- 2022

Combines formal upscaling with data-driven machine learning to explicitly close a nonlinear transport and reaction process in multiscale tissue, resulting in an upscaled PDE whose effectiveness factor is predicted (implicitly) by the trained neural network.

Status and Challenges in Homogenization Methods for Lattice Materials

- Materials
- 2022

Lattice structures have shown great potential in that their mechanical properties are customizable without changing the material itself; lattice materials can be both lightweight and highly stiff. With this…

A Physics Informed Neural Network for Time-Dependent Nonlinear and Higher Order Partial Differential Equations

- Mathematics, Computer Science
- ArXiv
- 2021

This work proposes a novel PINN scheme that solves the PDE sequentially over successive time segments using a single neural network, while satisfying the already-obtained solution for all previous time segments, and uses the Cahn-Hilliard and Allen-Cahn equations to illustrate the advantages.

Coarse-grained and Emergent Distributed Parameter Systems from Data

- Computer Science, Mathematics
- 2021 American Control Conference (ACC)
- 2021

This work explores the derivation of distributed parameter system evolution laws (and in particular, partial differential operators and associated partial differential equations, PDEs) from spatiotemporal data through the use of manifold learning techniques in conjunction with neural network learning algorithms.

Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients

- Mathematics, Computer Science
- Computer Methods in Applied Mechanics and Engineering
- 2021

It is shown that a feedforward neural network with a single hidden layer of sigmoidal functions and fixed, random internal weights and biases can be used to accurately compute a collocation solution, thus avoiding the time-consuming training phase.
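The idea can be illustrated with a minimal NumPy sketch on a toy 1D Poisson problem (the problem, neuron count, and weight ranges here are illustrative assumptions, not the cited paper's setup): hidden weights and biases are drawn once at random and frozen, so only the linear output weights must be found, by a single least-squares solve at collocation points.

```python
import numpy as np

# Extreme-learning-machine collocation sketch for the toy problem
# u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, exact solution sin(pi x).
# Internal weights/biases are fixed and random; only the linear output
# weights beta are computed, by least squares (no iterative training).

rng = np.random.default_rng(0)
n_neurons, n_colloc = 50, 100
w = rng.uniform(-10.0, 10.0, n_neurons)   # fixed internal weights
b = rng.uniform(-10.0, 10.0, n_neurons)   # fixed internal biases

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def basis(x):
    # Sigmoid activations, shape (len(x), n_neurons).
    return sig(np.outer(x, w) + b)

def basis_xx(x):
    # Second derivative of each activation: sigma'' = s(1-s)(1-2s) * w^2.
    s = basis(x)
    return s * (1.0 - s) * (1.0 - 2.0 * s) * w**2

x = np.linspace(0.0, 1.0, n_colloc)
f = -np.pi**2 * np.sin(np.pi * x)

# Stack PDE collocation rows with the two boundary-condition rows,
# then solve for the output weights beta in the least-squares sense.
A = np.vstack([basis_xx(x), basis(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_num = basis(x) @ beta
err = np.max(np.abs(u_num - np.sin(np.pi * x)))
```

Because the only unknowns are the output weights, the whole "training" step is one linear solve, which is the source of the speedup the paper highlights.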

Global and local reduced models for interacting, heterogeneous agents.

- Physics, Medicine
- Chaos
- 2021

A data-driven coarse-graining methodology for discovering reduced models of coupled, heterogeneous agents, whose collective dynamics can be equally well reproduced by an all-to-all coupled model and by a locally coupled model of the same agents.

Non-intrusive reduced-order models for parametric partial differential equations via data-driven operator inference

- Computer Science, Mathematics
- ArXiv
- 2021

This work formulates a new approach to reduced modeling of parameterized, time-dependent partial differential equations (PDEs) using Operator Inference, a scientific machine learning framework combining data-driven learning and physics-based modeling that can be solved rapidly to map parameter values to approximate PDE solutions.

On the Correspondence between Gaussian Processes and Geometric Harmonics

- Computer Science, Mathematics
- ArXiv
- 2021

The correspondence between Gaussian process regression and Geometric Harmonics is discussed, providing alternative interpretations of uncertainty in terms of error estimation, or leading towards accelerated Bayesian Optimization due to dimensionality reduction.

Operator Compression with Deep Neural Networks

- Computer Science, Mathematics
- ArXiv
- 2021

This paper proposes to directly approximate the coefficient-to-surrogate map with a neural network to enable large compression ratios and the online computation of a surrogate based on simple forward passes through the network is substantially accelerated compared to classical numerical upscaling approaches.

## References

Showing 1-10 of 57 references

Coarse-scale PDEs from fine-scale observations via machine learning

- Computer Science, Mathematics
- Chaos
- 2020

A data-driven framework for the identification of unavailable coarse-scale PDEs from microscopic observations via machine-learning algorithms using Gaussian processes, artificial neural networks, and/or diffusion maps is introduced.

Learning data-driven discretizations for partial differential equations

- Medicine, Mathematics
- Proceedings of the National Academy of Sciences
- 2019

Data-driven discretization is proposed, a method for learning optimized approximations to PDEs based on actual solutions to the known underlying equations that uses neural networks to estimate spatial derivatives.

Data-driven discovery of partial differential equations

- Medicine, Computer Science
- Science Advances
- 2017

The sparse regression method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation.
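The sparse-regression idea (PDE-FIND / SINDy style) can be sketched in a few lines of NumPy: build a library of candidate terms from the data, then apply sequentially thresholded least squares so that only the terms actually present in the PDE survive. The data below are a synthetic two-mode exact solution of the heat equation, chosen for illustration; the library and threshold are assumptions, not the paper's exact configuration.

```python
import numpy as np

# Sparse-regression PDE discovery sketch: find the sparse combination of
# candidate terms that best explains u_t. The data are an exact two-mode
# solution of the heat equation u_t = u_xx, so the true model is sparse.

x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.linspace(0.0, 1.0, 100)
T, X = np.meshgrid(t, x, indexing="ij")
u = np.exp(-T) * np.sin(X) + np.exp(-4.0 * T) * np.sin(2.0 * X)

# Numerical derivatives via second-order finite differences.
u_t = np.gradient(u, t, axis=0)
u_x = np.gradient(u, x, axis=1)
u_xx = np.gradient(u_x, x, axis=1)

# Candidate library: u_t ~ Theta @ xi, with columns [u, u_x, u_xx].
Theta = np.column_stack([u.ravel(), u_x.ravel(), u_xx.ravel()])
target = u_t.ravel()

# Sequentially thresholded least squares: zero out small coefficients,
# then refit on the surviving columns.
xi, *_ = np.linalg.lstsq(Theta, target, rcond=None)
for _ in range(5):
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], target, rcond=None)

# xi should end up close to [0, 0, 1], i.e. the model u_t = u_xx.
```

The thresholding loop is what makes the result interpretable: the recovered model is a short, human-readable PDE rather than a black-box regression fit.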

Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations

- Computer Science
- J. Comput. Phys.
- 2019

We introduce physics-informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear…
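The core construction in this paper is a composite loss: for a PDE of the form $u_t + \mathcal{N}[u] = 0$, the network $u(t, x; \theta)$ is trained to fit available data while the PDE residual, evaluated by automatic differentiation at collocation points, is driven to zero. Schematically (notation lightly adapted from the paper):

```latex
\mathcal{L}(\theta) =
\underbrace{\frac{1}{N_u}\sum_{i=1}^{N_u}\bigl|u(t_u^i, x_u^i; \theta) - u^i\bigr|^2}_{\text{data / initial / boundary conditions}}
\;+\;
\underbrace{\frac{1}{N_f}\sum_{j=1}^{N_f}\bigl|u_t + \mathcal{N}[u]\bigr|^2\Big|_{(t_f^j,\, x_f^j)}}_{\text{PDE residual at collocation points}}
```

The second term is what lets the network interpolate physically even where no data are observed.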

An Emergent Space for Distributed Data With Hidden Internal Order Through Manifold Learning

- Computer Science, Medicine
- IEEE Access
- 2018

This work validates this “emergent space” reconstruction for time series sampled without space labels in known PDEs, and discusses how data-driven “spatial” coordinates can be extracted in ways invariant to the nature of the measuring instrument.

Hidden physics models: Machine learning of nonlinear partial differential equations

- Computer Science, Mathematics
- J. Comput. Phys.
- 2018

While there is currently a lot of enthusiasm about “big data”, useful data is usually “small” and expensive to acquire. In this paper, we present a new paradigm of learning partial…

Variational system identification of the partial differential equations governing the physics of pattern-formation: Inference under varying fidelity and noise

- Physics
- Computer Methods in Applied Mechanics and Engineering
- 2019

We present a contribution to the field of system identification of partial differential equations (PDEs), with emphasis on discerning between competing mathematical models of pattern-forming…

Emergent Spaces for Coupled Oscillators

- Physics, Computer Science
- Frontiers in Computational Neuroscience
- 2020

A systematic, data-driven approach to discovering “bespoke” coarse variables based on manifold learning algorithms and an extension of the coarse-graining methodology which enables us to learn evolution equations for the discovered coarse variables via an artificial neural network architecture templated on numerical time integrators (initial value solvers).

Variational Physics-Informed Neural Networks For Solving Partial Differential Equations

- Computer Science, Mathematics
- ArXiv
- 2019

A Petrov-Galerkin version of PINNs based on the nonlinear approximation of deep neural networks (DNNs): the variational form of the problem is incorporated into the loss function of the network to construct a VPINN, effectively reducing the training cost while increasing accuracy compared to PINNs, which essentially employ delta test functions.

Equation-Free, Coarse-Grained Multiscale Computation: Enabling Microscopic Simulators to Perform System-Level Analysis

- Mathematics
- 2003

We present and discuss a framework for computer-aided multiscale analysis, which enables models at a fine (microscopic/stochastic) level of description to perform modeling tasks at a coarse…