Neural Integro-Differential Equations

Emanuele Zappalà, Antonio Henrique de Oliveira Fonseca, Andrew Henry Moberly, Michael J. Higley, Chadi G. Abdallah, Jessica A. Cardin, David van Dijk
Modeling continuous dynamical systems from discretely sampled observations is a fundamental problem in data science. Often, such dynamics result from non-local processes that involve an integral over time. Such systems are modeled with Integro-Differential Equations (IDEs): generalizations of differential equations that comprise both an integral and a differential component. For example, brain dynamics are not accurately modeled by differential equations since their behavior is…
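The IDE setup described in the abstract can be illustrated with a short numerical sketch (illustrative only, not the paper's method): the equation and exponential memory kernel below are hypothetical choices whose closed-form solution, y(t) = y0·(1 + e^(-2t))/2, makes the scheme easy to check.

```python
import numpy as np

def solve_ide(y0=1.0, t_max=5.0, n=500):
    """Forward-Euler march for the toy IDE
        y'(t) = -y(t) + \int_0^t exp(-(t - s)) y(s) ds,   y(0) = y0,
    with the memory integral evaluated by the trapezoidal rule."""
    t = np.linspace(0.0, t_max, n + 1)
    h = t[1] - t[0]
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        f = np.exp(-(t[i] - t[: i + 1])) * y[: i + 1]      # kernel * history
        integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))    # trapezoid, uniform grid
        y[i + 1] = y[i] + h * (-y[i] + integral)
    return t, y
```

Unlike an ODE step, each update re-integrates the full solution history, which is exactly the non-local structure the abstract refers to.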
1 Citation

Neural Integral Equations

An attentional version of NIE, called Attentional Neural Integral Equations (ANIE), is introduced, in which the integral is replaced by self-attention, improving scalability and providing interpretability. Learning dynamics via integral equations is shown to be faster than learning them via other continuous methods, such as Neural ODEs.
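The "integral replaced by self-attention" idea can be read as a quadrature rule: a minimal sketch (an illustrative analogy, not the ANIE construction) in which the softmax scores act as a normalized kernel K(t_i, s_j) weighting sampled values of the integrand.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_as_quadrature(q, k, v):
    """Single-head attention: out[i] = sum_j softmax(q_i . k_j / sqrt(d)) v[j],
    i.e. a discretization of \int K(t_i, s) v(s) ds with a learned,
    row-normalized kernel."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v

rng = np.random.default_rng(0)
n, d = 8, 4
q, k, v = rng.normal(size=(3, n, d))
out = attention_as_quadrature(q, k, v)
```

Because the rows of the weight matrix are convex combinations, each output coordinate stays within the range of the corresponding value coordinates.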

Numerical analysis of finite difference schemes arising from time-memory partial integro-differential equations

This paper investigates the partial integro-differential equation of memory type numerically. The differential operator is discretized based on θ-finite difference schemes, while the integral…

Neural Ordinary Differential Equations

This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.
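The paper's contribution is the adjoint method, which backpropagates through a black-box solver; a simpler alternative it improves upon, discretize-then-optimize, can be sketched in a few lines (illustrative only): unroll an explicit Euler solver for dy/dt = -θ·y, differentiate the unrolled steps by hand, and check against finite differences.

```python
import numpy as np

def euler_solve(theta, y0=1.0, t_max=1.0, n=100):
    """Unrolled explicit Euler for dy/dt = -theta * y."""
    h = t_max / n
    y = y0
    for _ in range(n):
        y = y + h * (-theta * y)   # each step is a differentiable map
    return y

def loss_and_grad(theta, target=0.5, y0=1.0, t_max=1.0, n=100):
    """Gradient through the solver: y_n = y0 * (1 - h*theta)^n, so the
    chain rule through all n Euler steps has a closed form here."""
    h = t_max / n
    y_n = euler_solve(theta, y0, t_max, n)
    loss = (y_n - target) ** 2
    dyn_dtheta = y0 * n * (1.0 - h * theta) ** (n - 1) * (-h)
    return loss, 2.0 * (y_n - target) * dyn_dtheta

theta = 0.7
loss, grad = loss_and_grad(theta)
# finite-difference check of the unrolled gradient
eps = 1e-6
fd = (loss_and_grad(theta + eps)[0] - loss_and_grad(theta - eps)[0]) / (2 * eps)
```

Unrolling stores every intermediate state; the adjoint method avoids that memory cost by solving a second ODE backward in time.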

Computation Through Neural Population Dynamics.

This work starts with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data, and focuses on studies spanning motor control, timing, decision-making, and working memory.

Learning Neural Event Functions for Ordinary Differential Equations

This work extends Neural ODEs to implicitly defined termination criteria modeled by neural event functions, which can be chained together and differentiated through, and proposes simulation-based training of point processes with applications in discrete control.

An unsupervised deep learning approach to solving partial integro-differential equations

This work investigates solving partial integro-differential equations (PIDEs) using unsupervised deep learning and employs a neural network as the candidate solution and trains the neural network to satisfy the PIDE.

Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators

A new deep neural network called DeepONet can learn various mathematical operators with small generalization error, including explicit operators such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations.

DeepXDE: A Deep Learning Library for Solving Differential Equations

An overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation, and a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs.
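The core PINN idea, embedding the equation residual in the loss, can be shown in a dependency-free toy (not DeepXDE's API): fit u on [0, 1] with u'(x) = u(x), u(0) = 1. Using a polynomial ansatz that is linear in its coefficients (instead of a neural net with autodiff), minimizing the squared residual at collocation points reduces to least squares.

```python
import numpy as np

# Collocation points and ansatz u(x) = 1 + sum_k c[k] * x^k, which
# satisfies the condition u(0) = 1 exactly by construction.
x = np.linspace(0.0, 1.0, 64)
powers = np.arange(1, 6)
U = x[:, None] ** powers                     # basis values
dU = powers * x[:, None] ** (powers - 1)     # basis derivatives
# Equation residual r = u' - u = (dU - U) @ c - 1; minimize ||r||^2.
c, *_ = np.linalg.lstsq(dU - U, np.ones_like(x), rcond=None)
u_fit = 1.0 + U @ c
```

The recovered u_fit approximates e^x, the exact solution, even though no solution values were ever supplied — only the equation itself, which is the "physics-informed" part.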

Dynamic models of large-scale brain activity

Evidence supports the view that collective, nonlinear dynamics are central to adaptive cortical activity and aberrant dynamic processes appear to underlie a number of brain disorders.

IDSOLVER: A general purpose solver for nth-order integro-differential equations

Go with the FLOW: visualizing spatiotemporal dynamics in optical widefield calcium imaging

This work leverages analytic techniques from fluid dynamics to develop a visualization framework that highlights features of flow across the cortex, mapping time-varying sources and wave fronts that may be correlated with behavioral events.

Neural Operator: Learning Maps Between Function Spaces

A generalization of neural networks tailored to learn operators mapping between infinite dimensional function spaces, formulated by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators.
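The "linear integral operator composed with a nonlinear activation" building block can be sketched on a 1-D grid (an illustrative layer with a hypothetical Gaussian kernel, not the construction from any specific neural-operator paper):

```python
import numpy as np

def integral_operator_layer(v, grid, W, kernel_params):
    """One neural-operator-style layer on a uniform 1-D grid:
        (Lv)(x) = relu( W v(x) + \int k(x, y) v(y) dy ),
    with the integral discretized by a Riemann sum. The Gaussian kernel
    parameterization (scale, width) is an illustrative choice."""
    scale, width = kernel_params
    dy = grid[1] - grid[0]
    K = scale * np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * width ** 2))
    integral = (K * dy) @ v               # quadrature of kernel * v, shape (n, d)
    return np.maximum(0.0, v @ W.T + integral)

rng = np.random.default_rng(1)
n, d = 32, 3
grid = np.linspace(0.0, 1.0, n)
v = rng.normal(size=(n, d))               # input function sampled on the grid
W = rng.normal(size=(d, d))
out = integral_operator_layer(v, grid, W, kernel_params=(0.5, 0.1))
```

Because the layer acts on function samples rather than fixed coordinates, the same weights can be applied to inputs sampled on a finer or coarser grid, which is the sense in which the operator maps between function spaces.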