Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent

@article{Lu2022SobolevAA,
  title={Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent},
  author={Yiping Lu and Jos{\'e} H. Blanchet and Lexing Ying},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.07331}
}
In this paper, we study the statistical limits, in terms of Sobolev norms, of gradient descent for solving inverse problems from randomly sampled noisy observations using a general class of objective functions. Our class of objective functions includes Sobolev training for kernel regression, Deep Ritz Methods (DRM), and Physics Informed Neural Networks (PINN) for solving elliptic partial differential equations (PDEs) as special cases. We consider a potentially infinite-dimensional parameterization… 
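For orientation, the three families of objectives named above can be written schematically for the model elliptic problem -\Delta u^\star = f on a domain \Omega with zero boundary data. This is a standard textbook form, not the paper's exact setting; boundary penalties and the observation-noise model are omitted:

\begin{aligned}
\text{Sobolev training / regression:}\quad & \mathcal{L}_{\mathrm{Sob}}(u) = \mathbb{E}_x\big[(u(x)-y(x))^2 + |\nabla u(x) - z(x)|^2\big],\\
\text{Deep Ritz (DRM):}\quad & \mathcal{L}_{\mathrm{DRM}}(u) = \mathbb{E}_x\big[\tfrac{1}{2}\,|\nabla u(x)|^2 - f(x)\,u(x)\big],\\
\text{PINN:}\quad & \mathcal{L}_{\mathrm{PINN}}(u) = \mathbb{E}_x\big[(\Delta u(x) + f(x))^2\big],
\end{aligned}

where y and z denote noisy observations of u^\star and \nabla u^\star, and x is sampled from \Omega.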

Citations

Minimax Optimal Kernel Operator Learning via Multilevel Training

This paper establishes the information-theoretic lower bound in terms of the Sobolev Hilbert-Schmidt norm and shows that a regularization scheme that learns the spectral components below the bias contour and ignores the ones above the variance contour can achieve the optimal learning rate.
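As a rough illustration of that spectral-cutoff idea (a finite-dimensional toy with assumed names, not the estimator analyzed in the paper), one can regress with a truncated eigendecomposition of the kernel matrix, keeping only components whose empirical eigenvalues exceed a threshold:

import numpy as np

def spectral_cutoff_regression(K, y, threshold):
    """Toy spectral-cutoff (truncated eigendecomposition) regressor.

    K: (n, n) kernel Gram matrix; y: (n,) noisy observations.
    Components with empirical eigenvalue below `threshold` are dropped,
    mimicking "fit what sits above the variance level, ignore the rest".
    Returns coefficients alpha so that f_hat(x) = sum_i alpha_i k(x, x_i).
    """
    n = K.shape[0]
    evals, evecs = np.linalg.eigh(K / n)      # empirical spectrum of the integral operator
    keep = evals > threshold                  # hard spectral cutoff
    inv = np.zeros_like(evals)
    inv[keep] = 1.0 / evals[keep]             # pseudo-inverse on the retained modes
    return evecs @ (inv * (evecs.T @ y)) / n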

References

Showing 1-10 of 87 references

A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations

It is proved that the convergence rates of the generalization errors are independent of the dimension d, under the a priori assumption that the exact solutions of the PDEs lie in a suitable low-complexity space called the spectral Barron space.
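In one common convention (definitions differ slightly between papers), the spectral Barron space referred to here consists of functions whose Fourier transform is integrable against a polynomial weight,

\|u\|_{\mathcal{B}^s} \;=\; \int_{\mathbb{R}^d} (1+|\xi|)^s\, |\hat u(\xi)|\, d\xi \;<\; \infty,

an L^1-type Fourier condition under which two-layer networks attain approximation rates free of the curse of dimensionality.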

Sobolev Training for the Neural Network Solutions of PDEs

This paper develops a loss function that guides a neural network to reduce the error in the corresponding Sobolev space, and provides empirical evidence that the proposed loss function, together with iterative sampling techniques, performs better in solving high-dimensional PDEs.

Sobolev Training for Neural Networks

Sobolev Training for neural networks is introduced, a method for incorporating target derivatives in addition to the target values while training; it results in models with higher accuracy and stronger generalisation across three distinct domains.
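A minimal sketch of this idea, assuming derivative targets dy_dx are available alongside the values y (hypothetical names; PyTorch autograd supplies the model's derivatives):

import torch

def sobolev_training_loss(model, x, y, dy_dx, weight=1.0):
    """L^2 value mismatch plus an H^1-type derivative mismatch.

    x: (n, d) inputs; y: (n, 1) target values; dy_dx: (n, d) target derivatives.
    `weight` balances the derivative term against the value term.
    """
    x = x.requires_grad_(True)
    pred = model(x)
    # model derivative via autograd; create_graph=True keeps it differentiable
    grad_pred = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    value_term = torch.mean((pred - y) ** 2)
    deriv_term = torch.mean((grad_pred - dy_dx) ** 2)
    return value_term + weight * deriv_term

In the PDE setting of the entry above, the derivative targets would instead come from the equation or from additional observations.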

Sobolev Norm Learning Rates for Regularized Least-Squares Algorithms

This paper combines the well-known integral operator techniques with an embedding property, which results in new finite-sample bounds with respect to stronger norms in the special case of Sobolev reproducing kernel Hilbert spaces used as hypothesis spaces.
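One standard instance of such a rate (schematic; the actual theorems are phrased through interpolation spaces and embedding indices): if the target lies in H^s on a d-dimensional domain and the error is measured in the weaker Sobolev norm H^k with 0 \le k \le s, the attainable rate is

\mathbb{E}\,\|\hat f_n - f^\star\|_{H^k} \;\asymp\; n^{-\frac{s-k}{2s+d}},

so measuring error in a stronger norm (larger k) slows the rate.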

Learning from Examples as an Inverse Problem

A natural extension of the analysis of Tikhonov regularization to the continuous (population) case, together with a study of the interplay between the discrete and continuous problems, draws a clear connection between the consistency approach in learning theory and the stability convergence property of ill-posed inverse problems.
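Concretely, the discrete and continuous Tikhonov problems being compared are of the form (a standard formulation, not a quote from the paper)

\hat f_\lambda = \arg\min_{f \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^n (f(x_i)-y_i)^2 + \lambda \|f\|_{\mathcal{H}}^2,
\qquad
f_\lambda = \arg\min_{f \in \mathcal{H}} \mathbb{E}\big[(f(X)-Y)^2\big] + \lambda \|f\|_{\mathcal{H}}^2,

and consistency of \hat f_\lambda as n grows mirrors the stability of the regularized inverse as \lambda \to 0.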

Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality

Following recent work showing that deep-model accuracy improves with growing training sets according to a power law, this paper supplies computational experiments that show a similar dimension-dependent power law for deep PDE solvers.

Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems

Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning

This paper provides a statistical analysis to overcome the issues of Laplacian regularization, and unveils a large class of spectral filtering methods that exhibit desirable behaviors.
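A toy version of Laplacian-regularized semi-supervised regression, with a Laplacian power standing in for the spectral filters mentioned above (an illustrative choice, not the paper's estimator; names are hypothetical):

import numpy as np

def laplacian_regularized_fit(L, y_labeled, labeled_idx, lam, power=1):
    """Minimize sum_{i labeled} (f_i - y_i)^2 + lam * f^T L^power f.

    L: (n, n) graph Laplacian over labeled and unlabeled points;
    y_labeled: observed values at positions labeled_idx.
    Returns the estimated f on all n nodes.
    """
    n = L.shape[0]
    S = np.zeros((n, n))
    S[labeled_idx, labeled_idx] = 1.0          # selects the labeled coordinates
    y_full = np.zeros(n)
    y_full[labeled_idx] = y_labeled
    A = S + lam * np.linalg.matrix_power(L, power)
    return np.linalg.solve(A, y_full)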

Repulsive Curves

A reformulation of gradient descent based on a Sobolev-Slobodeckij inner product enables rapid progress toward local minima independent of curve resolution, and a hierarchical multigrid scheme significantly reduces the per-step cost of optimization.
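A one-dimensional periodic toy of that Sobolev preconditioning (assumed setup: a loss gradient sampled on a uniform grid; the paper itself works with a fractional Sobolev-Slobodeckij inner product on curves):

import numpy as np

def sobolev_gradient(grad_l2, s=1.0, length=1.0):
    """Turn an L^2 gradient into an H^s gradient on a periodic 1-D grid.

    Solves (I - d^2/dx^2)^s g = grad_l2 in Fourier space; the multiplier
    (1 + k^2)^(-s) damps high frequencies, so step sizes stay useful
    as the grid (curve resolution) is refined.
    """
    n = grad_l2.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    multiplier = (1.0 + k**2) ** (-s)
    return np.real(np.fft.ifft(multiplier * np.fft.fft(grad_l2)))

# one preconditioned descent step (eta is a step size, names hypothetical):
# curve = curve - eta * sobolev_gradient(dE_dcurve, s=1.0)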

Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces

...