Corpus ID: 238744096

Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality

Yiping Lu, Haoxuan Chen, Jianfeng Lu, Lexing Ying, José H. Blanchet
In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). To simplify the problem, we focus on a prototype elliptic PDE: the Schrödinger equation on a hypercube with zero Dirichlet boundary condition, which has wide applications in quantum-mechanical systems. We establish upper and lower bounds for both methods, which…
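As a hypothetical illustration (not the paper's code), the two empirical losses the abstract contrasts can be sketched for the prototype static Schrödinger problem -Δu + Vu = f on (0,1)^d with zero Dirichlet boundary condition. DRM minimizes a Monte Carlo estimate of the Ritz energy, while PINN minimizes the mean squared strong-form residual; here the trial function, potential V, and forcing f are all assumptions chosen so that the trial function solves the PDE exactly.

```python
import numpy as np

# Prototype problem: -Δu + V u = f on the hypercube (0,1)^d, u = 0 on the boundary.
# We evaluate both empirical losses at the exact solution u(x) = prod_k sin(pi x_k)
# (an assumed test function, with V and f chosen to match it).

rng = np.random.default_rng(0)
d, n, V = 2, 10_000, 1.0

def u(x):        # trial function: product of sines, vanishes on the boundary
    return np.prod(np.sin(np.pi * x), axis=1)

def grad_u(x):   # analytic gradient of u
    s, c = np.sin(np.pi * x), np.cos(np.pi * x)
    g = np.empty_like(x)
    for k in range(x.shape[1]):
        g[:, k] = np.pi * c[:, k] * np.prod(np.delete(s, k, axis=1), axis=1)
    return g

def lap_u(x):    # analytic Laplacian: Δu = -d pi^2 u for this eigenfunction
    return -d * np.pi**2 * u(x)

def f(x):        # forcing chosen so u above solves -Δu + V u = f exactly
    return (d * np.pi**2 + V) * u(x)

x = rng.random((n, d))  # i.i.d. uniform random interior samples

# DRM: empirical Ritz energy  (1/n) Σ [ 1/2 |∇u|² + 1/2 V u² - f u ]
drm_loss = np.mean(0.5 * np.sum(grad_u(x)**2, axis=1)
                   + 0.5 * V * u(x)**2 - f(x) * u(x))

# PINN: empirical squared residual  (1/n) Σ ( -Δu + V u - f )²
pinn_loss = np.mean((-lap_u(x) + V * u(x) - f(x))**2)

print(drm_loss, pinn_loss)  # PINN residual vanishes at the true solution
```

At the true solution the PINN loss is zero up to floating point, while the DRM loss attains its (negative) minimum energy; the paper's bounds concern how fast neural-network minimizers of these sampled objectives approach that minimizer as n grows.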


Error Estimates for the Deep Ritz Method with Boundary Penalty
Estimates on the error made by the Deep Ritz Method for elliptic problems on the space H(Ω) with different boundary conditions are established; the optimal decay rate of the error estimate is min(s/2, r), achieved by choosing λ_n ∼ n.
Uniform Convergence Guarantees for the Deep Ritz Method for Nonlinear Problems
This work provides convergence guarantees for the Deep Ritz Method for abstract variational energies, such as the p-Laplace equation or the Modica-Mortola energy, with essential or natural boundary conditions.
Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent
The implicit acceleration from using a Sobolev norm as the training objective is explained; it is inferred that the optimal number of training epochs for DRM becomes larger than that for PINN as both the data size and the hardness of the task increase, although both DRM and PINN can achieve statistical optimality.
A Rate of Convergence of Physics Informed Neural Networks for the Linear Second Order Elliptic PDEs
The convergence rate of PINNs is proved for second-order elliptic equations with Dirichlet boundary conditions, by establishing upper bounds on the number of training samples and on the depth and width of the deep neural network needed to achieve the desired accuracy.