Corpus ID: 237091156

Uniform Function Estimators in Reproducing Kernel Hilbert Spaces

@inproceedings{Dommel2021UniformFE,
  title={Uniform Function Estimators in Reproducing Kernel Hilbert Spaces},
  author={Paul R. Dommel and Alois Pichler},
  year={2021}
}
This paper addresses the regression problem of reconstructing functions that are observed with superimposed errors at random locations. We address the problem in reproducing kernel Hilbert spaces. It is demonstrated that the estimator, which is often derived by employing Gaussian random fields, converges in the mean norm of the reproducing kernel Hilbert space to the conditional expectation; this implies local and uniform convergence of the function estimator. By preselecting the kernel…
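To make the estimator concrete: the regularized RKHS estimator coincides in form with the posterior mean of a Gaussian random field whose covariance is the reproducing kernel (kriging). The Python sketch below is illustrative only; the Gaussian kernel, the length scale, and the regularization parameter lam are assumptions, not choices made in the paper.

    import numpy as np

    def gaussian_kernel(a, b, length_scale=0.5):
        # Gaussian (RBF) kernel; the paper's framework allows general reproducing kernels.
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length_scale ** 2))

    def kernel_estimator(x_obs, y_obs, x_eval, lam=1e-2):
        # Regularized RKHS estimator; identical in form to the kriging
        # posterior mean of a Gaussian random field with this covariance.
        n = len(x_obs)
        K = gaussian_kernel(x_obs, x_obs)
        alpha = np.linalg.solve(K + n * lam * np.eye(n), y_obs)
        return gaussian_kernel(x_eval, x_obs) @ alpha

    # Noisy observations of an unknown function at random locations
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 50)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)
    f_hat = kernel_estimator(x, y, np.linspace(0.0, 1.0, 200))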


References

Showing 1–10 of 27 references

Optimal Rates for the Regularized Least-Squares Algorithm

A complete minimax analysis of the problem is described, showing that the convergence rates obtained by regularized least-squares estimators are indeed optimal over a suitable class of priors defined by the considered kernel.
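In standard notation (a textbook formulation, not quoted from the paper), the regularized least-squares estimator whose rates are analyzed is

    \hat f_\lambda
      = \operatorname*{arg\,min}_{f \in \mathcal H}
        \frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^2
        + \lambda \lVert f \rVert_{\mathcal H}^{2},
    \qquad
    \hat f_\lambda = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i),
    \quad
    \alpha = (K + n \lambda I)^{-1} y,

where K_{ij} = k(x_i, x_j) is the kernel (Gram) matrix.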

Sharp analysis of low-rank kernel matrix approximations

This paper shows that, in the context of kernel ridge regression with approximations based on a random subset of columns of the original kernel matrix, the rank p may be chosen linear in the degrees of freedom associated with the problem, a quantity classically used in the statistical analysis of such methods.
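A minimal Python sketch of such a column-subset (Nyström) approximation; the uniform sampling and the helper names are assumptions for illustration. The degrees of freedom mentioned above is the effective dimension computed by the second function.

    import numpy as np

    def nystrom(K, p, seed=0):
        # Rank-p approximation from p randomly sampled columns:
        # K ~ C @ pinv(W) @ C.T, where C holds the sampled columns and
        # W is the p x p block at the sampled rows and columns.
        rng = np.random.default_rng(seed)
        idx = rng.choice(K.shape[0], size=p, replace=False)
        C = K[:, idx]
        W = K[np.ix_(idx, idx)]
        return C @ np.linalg.pinv(W) @ C.T

    def degrees_of_freedom(K, lam):
        # Effective dimension tr(K (K + n*lam*I)^{-1}); the paper's result
        # is that a rank p linear in this quantity suffices.
        n = K.shape[0]
        return np.trace(K @ np.linalg.inv(K + n * lam * np.eye(n)))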

Divide and Conquer Kernel Ridge Regression

The main theorem establishes that despite the computational speed-up, statistical optimality is retained: if m is not too large, the partition-based estimate achieves optimal rates of convergence for the full sample size N.
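A sketch of the partition-and-average scheme in Python, assuming a Gaussian kernel and a plain kernel ridge solver on each part; the function names, kernel, and parameters are illustrative, not taken from the paper.

    import numpy as np

    def krr_predict(x_tr, y_tr, x_eval, lam=1e-2, ls=0.5):
        # Kernel ridge regression fitted on a single partition (Gaussian kernel).
        K = np.exp(-(x_tr[:, None] - x_tr[None, :]) ** 2 / (2 * ls ** 2))
        alpha = np.linalg.solve(K + len(x_tr) * lam * np.eye(len(x_tr)), y_tr)
        K_eval = np.exp(-(x_eval[:, None] - x_tr[None, :]) ** 2 / (2 * ls ** 2))
        return K_eval @ alpha

    def dc_krr(x, y, x_eval, m, lam=1e-2):
        # Split the N samples into m disjoint parts, fit KRR on each part,
        # and average the m local predictions.
        rng = np.random.default_rng(0)
        parts = np.array_split(rng.permutation(len(x)), m)
        preds = [krr_predict(x[p], y[p], x_eval, lam) for p in parts]
        return np.mean(preds, axis=0)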

Kernels, Associated Structures and Generalizations

This paper surveys results in the mathematical literature on positive definite kernels and their associated structures, and presents Schwartz's general framework of Hilbertian subspaces, which is used to introduce kernels that are distributions.
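As background, the textbook definitions underlying these structures: a kernel k is positive definite, and its associated reproducing kernel Hilbert space \mathcal H_k is characterized by the reproducing property,

    \sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j \, k(x_i, x_j) \ge 0
    \quad \text{for all } c_1, \dots, c_n \in \mathbb{R},\ x_1, \dots, x_n,
    \qquad
    f(x) = \langle f, k(\cdot, x) \rangle_{\mathcal H_k}
    \quad \text{for all } f \in \mathcal H_k.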

On the eigenspectrum of the gram matrix and the generalization error of kernel-PCA

The differences between the two spectra are bounded, and a performance bound on kernel principal component analysis (PCA) is provided, showing that good performance can be expected even in very high-dimensional feature spaces, provided the sample eigenvalues decay sufficiently quickly.
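To illustrate the quantity being compared: the sample eigenvalues are those of the (centered) Gram matrix, scaled by 1/n. A minimal Python sketch; the Gaussian kernel and the centering step are assumptions for illustration.

    import numpy as np

    def sample_eigenvalues(X, length_scale=1.0):
        # Eigenvalues of the centered Gram matrix, divided by n; how fast
        # they decay drives the kernel-PCA performance bound discussed above.
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        K = np.exp(-sq / (2 * length_scale ** 2))
        n = len(X)
        H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        return np.linalg.eigvalsh(H @ K @ H)[::-1] / n  # descending order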

A Generalized Representer Theorem

The result shows that a wide range of problems have optimal solutions that live in the finite-dimensional span of the training examples mapped into feature space, thus enabling kernel algorithms to be carried out independently of the (potentially infinite) dimensionality of the feature space.
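In its standard form the conclusion reads: for an arbitrary loss L and a strictly increasing regularizer g of the RKHS norm, every minimizer over f \in \mathcal H of

    L \bigl( (x_1, y_1, f(x_1)), \dots, (x_n, y_n, f(x_n)) \bigr)
      + g \bigl( \lVert f \rVert_{\mathcal H} \bigr)

admits a representation

    f^{*} = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i),

i.e. it lies in the span of the kernel sections at the training points.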

Stochastic Analysis for Gaussian Random Processes and Fields: With Applications

Table of contents (truncated): Covariances and Associated Reproducing Kernel Hilbert Spaces; Covariances and Negative Definite Functions; Reproducing Kernel Hilbert Space; Gaussian Random Fields; Gaussian Random Variable; Gaussian …

Gradient Flows: In Metric Spaces and in the Space of Probability Measures

Table of contents (truncated): Notation; Gradient Flow in Metric Spaces; Curves and Gradients in Metric Spaces; Existence of Curves of Maximal Slope and their Variational Approximation; Proofs of the Convergence …

Learning Theory: An Approximation Theory Viewpoint

This book presents a framework for learning based on regularization, including support vector machines for classification, together with examples of regularized classifiers.

Introduction to Nonparametric Estimation

A. Tsybakov. Springer Series in Statistics, 2009.
The main idea is to introduce the fundamental concepts of the theory while keeping the exposition suitable for a first approach to the field; many important and useful results on optimal and adaptive estimation are provided.