Corpus ID: 236635178

Connections between Numerical Algorithms for PDEs and Neural Networks

@article{Alt2021ConnectionsBN,
  title={Connections between Numerical Algorithms for PDEs and Neural Networks},
  author={Tobias Alt and Karl N. Schrader and Matthias Augustin and Pascal Peter and Joachim Weickert},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.14742}
}
We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural architectures. Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks. Besides structural insights we provide concrete examples and experimental evaluations of the resulting architectures. Using the example of generalised nonlinear diffusion in 1D, we consider explicit schemes, acceleration strategies thereof…
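The abstract's central example, an explicit scheme for generalised nonlinear diffusion in 1D, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the Perona-Malik-type diffusivity, step size, and boundary handling are illustrative assumptions. Structurally, the update is "old value plus nonlinear correction", which is the same shape as a residual block and is the kind of connection the paper investigates.

    import numpy as np

    def explicit_diffusion_step(u, tau=0.2, h=1.0, lam=0.1):
        """One explicit step of 1D nonlinear diffusion (illustrative sketch).

        u   : 1D signal (numpy array)
        tau : time step; the explicit scheme needs tau <= h**2 / 2 for diffusivity <= 1
        h   : grid spacing
        lam : contrast parameter of the diffusivity (assumed choice)
        """
        # one-sided differences with replicated (Neumann-like) boundaries
        u_pad = np.pad(u, 1, mode="edge")
        grad_fwd = (u_pad[2:] - u_pad[1:-1]) / h      # (u_{i+1} - u_i) / h
        grad_bwd = (u_pad[1:-1] - u_pad[:-2]) / h     # (u_i - u_{i-1}) / h

        # Perona-Malik diffusivity g(s^2) = 1 / (1 + s^2 / lam^2), one common choice
        g_fwd = 1.0 / (1.0 + (grad_fwd / lam) ** 2)
        g_bwd = 1.0 / (1.0 + (grad_bwd / lam) ** 2)

        # explicit update u^{k+1} = u^k + tau * div(g * grad u):
        # structurally a residual step, new value = old value + correction
        return u + tau * (g_fwd * grad_fwd - g_bwd * grad_bwd) / h

    # usage: smooth a noisy step edge while preserving the discontinuity
    u = np.concatenate([np.zeros(32), np.ones(32)]) + 0.1 * np.random.randn(64)
    for _ in range(20):
        u = explicit_diffusion_step(u)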

Citations

Quantized convolutional neural networks through the lens of partial differential equations
TLDR
It is demonstrated through several experiments that the property of forward stability preserves the action of a network under different quantization rates, and stable quantized networks behave similarly to their non-quantized counterparts even though they rely on fewer parameters.
Learning Sparse Masks for Diffusion-based Image Inpainting
TLDR
By emulating the complete inpainting pipeline with two networks for mask generation and neural surrogate inpainting, this work obtains a model for highly efficient adaptive mask generation that can achieve competitive quality with an acceleration by as much as four orders of magnitude.

References

Showing 1-10 of 110 references
Translating Numerical Concepts for PDEs into Neural Architectures
TLDR
The findings give a numerical perspective on the success of modern neural network architectures, and they provide design criteria for stable networks.
Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations
TLDR
It is shown that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations, and a connection is established between stochastic control and noise injection in the training process, which helps to improve the generalization of the networks.
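A hedged sketch of the correspondence this TLDR alludes to: a ResNet block has exactly the structure of an explicit (forward) Euler step for an ODE u'(t) = f(u(t)). The two-layer residual branch, weight shapes, and activation below are placeholder assumptions, not the cited paper's architecture; in the same spirit, the cited work relates other architectures to other time-stepping schemes.

    import numpy as np

    def residual_branch(u, W1, W2):
        """Small two-layer residual branch f(u); weights and activation are placeholders."""
        return W2 @ np.tanh(W1 @ u)

    def resnet_block(u, W1, W2):
        """One ResNet block: u_{k+1} = u_k + f(u_k)."""
        return u + residual_branch(u, W1, W2)

    def forward_euler_step(u, W1, W2, h):
        """One explicit Euler step for u'(t) = f(u(t)): u_{k+1} = u_k + h * f(u_k).
        With h = 1 this coincides with the ResNet block above."""
        return u + h * residual_branch(u, W1, W2)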
A neural network multigrid solver for the Navier-Stokes equations
TLDR
DNN-MG improves computational efficiency using a judicious combination of a geometric multigrid solver and a recurrent neural network with memory for the instationary Navier-Stokes equations; its effectiveness is demonstrated for variations of the 2D laminar flow around an obstacle.
Learning to Optimize Multigrid PDE Solvers
TLDR
This paper proposes a framework for learning multigrid solvers, and learns a (single) mapping from a family of parameterized PDEs to prolongation operators, using an efficient and unsupervised loss function.
Meta-Solver for Neural Ordinary Differential Equations
TLDR
It is shown that model robustness can be further improved by optimizing the solver choice for a given task, and that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
Deep Neural Networks Motivated by Partial Differential Equations
TLDR
A new PDE interpretation of a class of deep convolutional neural networks (CNNs) commonly used to learn from speech, image, and video data is established, and three new ResNet architectures are derived that fall into two new classes: parabolic and hyperbolic CNNs.
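A hedged sketch of the parabolic class mentioned in this TLDR: in this PDE interpretation, a residual layer performs an explicit step of a diffusion-type equation, and the symmetric -K^T sigma(K·) structure is what makes the dynamics smoothing and stable. The activation, step size, and the dense matrix standing in for a convolution are assumptions for illustration. The hyperbolic class replaces the first-order time derivative by a second-order one, which leads to leapfrog-style updates involving two previous layers.

    import numpy as np

    def parabolic_block(u, K, b, h=0.1):
        """One explicit step of the parabolic forward propagation
            u_{k+1} = u_k - h * K^T sigma(K u_k + b),
        a discretisation of the diffusion-like PDE  d/dt u = -K^T sigma(K u + b).
        K is a plain matrix standing in for a convolution; sigma and h are assumed."""
        return u - h * K.T @ np.tanh(K @ u + b)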
Black-box learning of multigrid parameters
TLDR
This paper implements the geometric multigrid (GMG) method in a modern machine learning framework that can automatically compute the gradients of the introduced convergence functional with respect to restriction and prolongation operators, and demonstrates that the proposed approach yields operators that lead to faster convergence.
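A hedged PyTorch sketch of the mechanism described in this TLDR, not the paper's implementation: the prolongation is a learnable tensor, restriction is taken as its transpose, the coarse operator follows the Galerkin construction, and automatic differentiation of an empirical convergence functional (the average error reduction of a two-grid cycle on random error vectors) drives gradient-based refinement of the transfer operators. The grid size, smoother, and optimiser settings are illustrative assumptions.

    import torch

    # A minimal two-grid setup for the 1D Poisson matrix A on n interior points.
    n = 7
    A = 2 * torch.eye(n) - torch.diag(torch.ones(n - 1), 1) - torch.diag(torch.ones(n - 1), -1)

    # Learnable prolongation, initialised to linear interpolation.
    nc = (n - 1) // 2
    P0 = torch.zeros(n, nc)
    for j in range(nc):
        i = 2 * j + 1                     # fine-grid index of the j-th coarse point
        P0[i, j] = 1.0
        P0[i - 1, j] = 0.5
        P0[i + 1, j] = 0.5
    P = P0.clone().requires_grad_(True)

    def two_grid_error(e, P, nu=2, omega=2.0 / 3.0):
        """Apply one two-grid cycle to error vectors e (columns) and return the new error."""
        d = torch.diag(A)[:, None]
        for _ in range(nu):               # pre-smoothing with weighted Jacobi
            e = e - omega * (A @ e) / d
        R = P.T                           # restriction as the transposed prolongation
        Ac = R @ A @ P                    # Galerkin coarse-grid operator
        return e - P @ torch.linalg.solve(Ac, R @ (A @ e))   # coarse-grid correction

    # Convergence functional: average error reduction over random error vectors;
    # autograd supplies its gradient with respect to the prolongation entries.
    opt = torch.optim.Adam([P], lr=1e-2)
    for step in range(200):
        e = torch.randn(n, 32)
        loss = (two_grid_error(e, P).norm(dim=0) / e.norm(dim=0)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()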
Deep Limits of Residual Neural Networks
Neural networks have been very successful in many applications; however, we often lack a theoretical understanding of what the neural networks are actually learning. This problem emerges when trying…
Deep Neural Network Structures Solving Variational Inequalities
TLDR
It is shown that the limit of the resulting process solves a variational inequality which, in general, does not derive from a minimization problem.
Layer-Parallel Training of Deep Residual Neural Networks
TLDR
Using numerical examples from supervised classification, it is demonstrated that the new approach achieves similar training performance to traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.