Corpus ID: 239998114

Polynomial-Spline Neural Networks with Exact Integrals

Jonas A. Actor, Andrew Huang, Nathaniel Trask
Using neural networks to solve variational problems and other scientific machine learning tasks has been limited by a lack of consistency and an inability to exactly integrate expressions involving neural network architectures. We address these limitations by formulating a novel neural network architecture that incorporates free-knot B1-spline basis functions into a polynomial mixture-of-experts model. Effectively, our architecture performs piecewise polynomial approximation on each cell of a…
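The abstract's construction can be illustrated in one dimension: B1-spline (hat) basis functions over a knot vector form a partition of unity, and each basis function weights a polynomial "expert", so the mixture is piecewise polynomial and can be integrated exactly cell by cell. The sketch below is an illustrative reconstruction, not the authors' implementation; the names `hat_basis` and `pou_mixture` are hypothetical.

```python
import numpy as np

def hat_basis(x, knots):
    """Evaluate B1-spline (hat) basis functions at points x.

    Over an increasing knot vector, the hat functions form a
    partition of unity: at every x in the knot span they sum to 1.
    """
    x = np.atleast_1d(x).astype(float)
    n = len(knots)
    phi = np.zeros((len(x), n))
    for i in range(n):
        left = knots[i - 1] if i > 0 else knots[0]
        center = knots[i]
        right = knots[i + 1] if i < n - 1 else knots[-1]
        if center > left:  # rising edge on [left, center]
            mask = (x >= left) & (x <= center)
            phi[mask, i] = (x[mask] - left) / (center - left)
        else:              # first basis function: value 1 at the left endpoint
            phi[x <= center, i] = 1.0
        if right > center:  # falling edge on (center, right]
            mask = (x > center) & (x <= right)
            phi[mask, i] = (right - x[mask]) / (right - center)
    return phi

def pou_mixture(x, knots, coeffs):
    """Mixture of experts u(x) = sum_i phi_i(x) * p_i(x), where p_i is the
    polynomial expert (np.polyval coefficients) attached to basis i."""
    x = np.atleast_1d(x).astype(float)
    phi = hat_basis(x, knots)
    experts = np.stack([np.polyval(c, x) for c in coeffs], axis=1)
    return (phi * experts).sum(axis=1)
```

Because each `phi_i * p_i` term is a polynomial on each knot interval, integrals of the mixture can be computed exactly with a quadrature rule of matching degree, which is the property the abstract highlights.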


Partition of Unity Networks: Deep HP-Approximation
Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials.
Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
We introduce physics-informed neural networks: neural networks that are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations.
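The core idea of a physics-informed loss can be sketched for the 1D Poisson problem -u''(x) = f(x): penalize the squared PDE residual at collocation points. This is a minimal illustration, not the cited paper's code; real PINNs differentiate the network by automatic differentiation, whereas this sketch uses a central finite difference, and `pinn_residual_loss` is a hypothetical name.

```python
import numpy as np

def pinn_residual_loss(u, xs, f, h=1e-3):
    """Mean squared PDE residual of -u''(x) = f(x) at collocation points xs.

    u''(x) is approximated by a central finite difference here; in an
    actual PINN it would come from automatic differentiation of the network.
    """
    d2u = (u(xs + h) - 2.0 * u(xs) + u(xs - h)) / h**2
    return np.mean((-d2u - f(xs)) ** 2)
```

For the exact solution u(x) = x(1 - x) of -u'' = 2, this loss is (numerically) zero, which is the sense in which training "respects the physics".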
Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint
Adopting an adaptive basis viewpoint of DNNs leads to novel initializations and a hybrid least-squares/gradient-descent optimizer; the work analyzes these techniques and illustrates, via numerical examples, dramatic increases in accuracy and convergence rate on benchmarks characteristic of scientific applications where DNNs are currently used.
Mad Max: Affine Spline Insights Into Deep Learning
A rigorous bridge between deep networks (DNs) and approximation theory is built via spline functions and operators, and a simple penalty term is proposed that can be added to the cost function of any DN learning algorithm to force the templates to be orthogonal to each other.
Understanding and mitigating gradient pathologies in physics-informed neural networks
This work reviews recent advances in scientific machine learning, focusing on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data, and proposes a novel neural network architecture that is more resilient to gradient pathologies.
Relu Deep Neural Networks and Linear Finite Elements
In this paper, we investigate the relationship between deep neural networks (DNN) with rectified linear unit (ReLU) activation functions and continuous piecewise linear (CPWL) functions.
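The DNN–finite-element correspondence studied in that line of work can be made concrete: a one-hidden-layer ReLU network with three neurons exactly reproduces the linear finite-element hat function. This is a standard illustrative construction, not code from the cited paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fem_hat(x):
    """One-hidden-layer ReLU network realizing the linear FEM hat function
    on [0, 1] with peak 1 at x = 0.5:
        2 * ( relu(x) - 2*relu(x - 0.5) + relu(x - 1) ).
    """
    return 2.0 * (relu(x) - 2.0 * relu(x - 0.5) + relu(x - 1.0))
```

Since hat functions span the space of continuous piecewise linear functions on a mesh, this shows that ReLU networks contain linear finite-element spaces as a special case.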
Understanding Deep Neural Networks with Rectified Linear Units
The gap theorems hold for smoothly parametrized families of "hard" functions, in contrast to the countable, discrete families known in the literature, and a new lower bound on the number of affine pieces is shown, larger than previous constructions in certain regimes of the network architecture.
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
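The Adam update described in that abstract maintains exponential moving averages of the gradient (first moment) and its elementwise square (second raw moment), corrects their initialization bias, and scales the step accordingly. A minimal single-step sketch, following the published update rule:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update at step t (1-indexed).

    m, v are the running estimates of the first moment and second raw
    moment of the gradient; both are bias-corrected before use.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second raw moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this step on the gradient of f(x) = x^2 drives the parameter toward the minimizer at 0, the behavior the regret bound in the paper quantifies.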
Full error analysis for the training of deep neural networks
The main contribution of this work is to provide a full error analysis that covers each of the three sources of error usually emerging in deep learning algorithms and merges them into one overall error estimate for the considered deep learning algorithm.
The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems
A deep learning-based method, the Deep Ritz Method, for numerically solving variational problems, particularly those arising from partial differential equations; it is naturally nonlinear, naturally adaptive, and has the potential to work in rather high dimensions.
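The Deep Ritz idea is to minimize a Monte Carlo estimate of the variational energy rather than the PDE residual. The sketch below illustrates this on a hypothetical 1D Poisson problem -u'' = 2 on (0, 1) with a one-parameter trial family u_a(x) = a·x(1 - x), whose exact energy minimizer is a = 1; in the actual method the trial function is a neural network and the energy is minimized by SGD. `ritz_energy` is an illustrative name, not the paper's API.

```python
import numpy as np

def ritz_energy(a, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the Ritz energy
        I[u] = integral_0^1 ( 0.5 * u'(x)^2 - f(x) * u(x) ) dx
    for the trial function u_a(x) = a * x * (1 - x) and f(x) = 2."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, n_samples)
    u = a * x * (1.0 - x)
    du = a * (1.0 - 2.0 * x)          # u'(x), computed in closed form here
    return np.mean(0.5 * du ** 2 - 2.0 * u)

# Sweep the parameter and pick the energy minimizer.
a_grid = np.linspace(0.0, 2.0, 41)
energies = [ritz_energy(a) for a in a_grid]
a_star = a_grid[int(np.argmin(energies))]
```

Minimizing the sampled energy recovers a ≈ 1, i.e. the exact solution u(x) = x(1 - x); the exact-integration scheme of the main paper removes precisely this Monte Carlo quadrature error.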