Corpus ID: 243938424

Uniform Convergence Guarantees for the Deep Ritz Method for Nonlinear Problems

Patrick W. Dondl, Johannes Müller, Marius Zeinhofer
We provide convergence guarantees for the Deep Ritz Method for abstract variational energies. Our results cover nonlinear variational problems such as the p-Laplace equation or the Modica-Mortola energy with essential or natural boundary conditions. Under additional assumptions, we show that the convergence is uniform across bounded families of right-hand sides.
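The p-Laplace problem mentioned in the abstract can be made concrete with a small numerical sketch (not from the paper; the grid size, step size, exponent, and right-hand side are illustrative choices): minimize the discretized p-Dirichlet energy E(u) = (1/p)∫|u'|^p dx − ∫ f u dx over grid functions with zero boundary values by gradient descent.

```python
import numpy as np

# Discrete p-Dirichlet energy E(u) = (1/p)∫|u'|^p dx - ∫ f u dx on a uniform
# grid over [0, 1], with essential boundary conditions u(0) = u(1) = 0.
p = 4
n = 17
h = 1.0 / (n - 1)
f = np.ones(n)                  # constant right-hand side f ≡ 1
u = np.zeros(n)                 # start from the zero function (energy 0)

def energy(u):
    du = np.diff(u) / h         # forward differences approximate u'
    return (np.abs(du) ** p).sum() * h / p - (f * u).sum() * h

def gradient(u):
    du = np.diff(u) / h
    flux = np.abs(du) ** (p - 2) * du      # the p-Laplace flux |u'|^(p-2) u'
    g = np.zeros(n)
    g[1:-1] = flux[:-1] - flux[1:] - h * f[1:-1]
    return g                    # boundary entries stay zero, so u(0), u(1) stay fixed

for _ in range(20000):
    u -= 0.005 * gradient(u)    # plain gradient descent with a small fixed step

print(energy(u))                # strictly below the zero function's energy 0
```

The minimizer is the finite-difference solution of the p-Laplace equation −(|u'|^(p−2)u')' = f; the Deep Ritz Method replaces the grid function by a neural network and minimizes the same kind of energy over the network's parameters.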

Error Estimates for the Deep Ritz Method with Boundary Penalty
Estimates of the error made by the Deep Ritz Method for elliptic problems on the space H^1(Ω) with different boundary conditions are established; the optimal decay rate of the estimated error is min(s/2, r) and is achieved by choosing λ_n ∼ n.


The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems
A deep learning-based method, the Deep Ritz Method, is proposed for numerically solving variational problems, particularly those that arise from partial differential equations; it is naturally nonlinear, naturally adaptive, and has the potential to work in rather high dimensions.
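The procedure just described can be sketched in a few lines. The following minimal illustration (not the authors' code) trains a one-hidden-layer tanh network by gradient descent to minimize the Ritz energy of −u'' = 1 on (0, 1) with a boundary penalty. For brevity it uses a fixed quadrature grid and numerical parameter gradients where the actual method uses random sampling and automatic differentiation; the network width, penalty weight, and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 6                                     # hidden width (illustrative choice)
theta = 0.5 * rng.standard_normal(3 * m)  # packed parameters [w1, b1, w2]

x = np.linspace(0.0, 1.0, 33)             # fixed quadrature grid (DRM samples randomly)
h = x[1] - x[0]
f = np.ones_like(x)                       # right-hand side of -u'' = f

def net(theta, x):
    """One-hidden-layer network u(x) = w2 . tanh(w1 x + b1) and its x-derivative."""
    w1, b1, w2 = np.split(theta, 3)
    t = np.tanh(np.outer(x, w1) + b1)
    u = t @ w2
    du = ((1.0 - t ** 2) * w1) @ w2       # exact derivative of the network in x
    return u, du

def ritz_loss(theta, lam=10.0):
    # Variational energy ∫ (1/2)|u'|^2 - f u dx plus a penalty pushing u(0), u(1) to 0.
    u, du = net(theta, x)
    return (0.5 * du ** 2 - f * u).sum() * h + lam * (u[0] ** 2 + u[-1] ** 2)

def num_grad(theta, eps=1e-6):
    g = np.zeros_like(theta)              # central differences; in practice, autodiff
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps; tm[i] -= eps
        g[i] = (ritz_loss(tp) - ritz_loss(tm)) / (2 * eps)
    return g

for _ in range(8000):
    theta -= 3e-3 * num_grad(theta)       # plain gradient descent on the energy

print(ritz_loss(theta))
```

Since the zero network attains loss 0, any trained loss below 0 indicates genuine progress toward the minimizer of the variational energy.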
Error Analysis of Deep Ritz Methods for Elliptic Equations
This paper establishes the first nonasymptotic convergence rate in H^1 norm for DRM using deep networks with smooth activation functions, including the logistic and hyperbolic tangent functions.
Robin Pre-Training for the Deep Ritz Method
A novel method is proposed to compensate for this problem: the model is pre-trained with a small penalization strength before the main training with the target penalization strength is conducted. Numerical and theoretical evidence that the proposed method is beneficial is presented.
Convergence Rate Analysis for Deep Ritz Method
A rigorous numerical analysis of the deep Ritz method (DRM) for second-order elliptic equations with Neumann boundary conditions is provided, and the first nonasymptotic convergence rate in H^1 norm for DRM is established using deep networks with ReLU activation functions.
DGM: A deep learning algorithm for solving partial differential equations
Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality
Empirically, following recent work showing that deep model accuracy improves with growing training sets according to a power law, this paper supplies computational experiments that show a similar dimension-dependent power law for deep PDE solvers.
Functional Analysis, Sobolev Spaces and Partial Differential Equations
Contents include: 1. The Hahn-Banach theorems and an introduction to the theory of conjugate convex functions; 2. The uniform boundedness principle, the closed graph theorem, unbounded operators, and adjoints.
A note on Poincaré- and Friedrichs-type inequalities
We introduce a simple criterion to check coercivity of bilinear forms on subspaces of Hilbert spaces. The presented criterion allows one to derive many standard and non-standard variants of Poincaré- and Friedrichs-type inequalities.
Understanding and mitigating gradient pathologies in physics-informed neural networks
This work reviews recent advances in scientific machine learning, with a specific focus on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data, and proposes a novel neural network architecture that is more resilient to gradient pathologies.
Neural network representation of finite element method