# Sampling based approximation of linear functionals in reproducing kernel Hilbert spaces

```bibtex
@article{Santin2021SamplingBA,
  title   = {Sampling based approximation of linear functionals in reproducing kernel Hilbert spaces},
  author  = {Gabriele Santin and Toni Karvonen and Bernard Haasdonk},
  journal = {BIT Numerical Mathematics},
  year    = {2021},
  volume  = {62},
  pages   = {279--310}
}
```

In this paper we analyze a greedy procedure to approximate a linear functional defined on a reproducing kernel Hilbert space from nodal values. The procedure computes a quadrature rule that can be applied to general functionals. For a large class of functionals, which includes integration functionals and other interesting cases but does not include differentiation, we prove convergence results for the approximation by means of quasi-uniform and greedy points, which generalize in various ways…
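To make the setting concrete: given nodes $x_1,\dots,x_n$, the weights of the RKHS-optimal quadrature rule for a functional $L$ solve $Kw = z$, where $K$ is the kernel Gram matrix and $z_i = L(k(\cdot, x_i))$. The following is a minimal sketch (not the paper's algorithm, and with a hypothetical choice of kernel, shape parameter, and nodes) for the integration functional $L(f) = \int_0^1 f$ with a Gaussian kernel, for which $z$ has a closed form in terms of the error function:

```python
import numpy as np
from scipy.special import erf

EPS = 5.0  # Gaussian shape parameter (illustrative choice)

def kernel(x, y):
    return np.exp(-EPS**2 * (x - y) ** 2)

def kernel_mean(x):
    # Closed form of z(x) = ∫_0^1 exp(-EPS^2 (t - x)^2) dt
    return np.sqrt(np.pi) / (2 * EPS) * (erf(EPS * (1 - x)) + erf(EPS * x))

nodes = np.linspace(0.0, 1.0, 12)               # hypothetical quasi-uniform nodes
K = kernel(nodes[:, None], nodes[None, :])      # Gram matrix
z = kernel_mean(nodes)                          # L applied to the kernel translates
w = np.linalg.solve(K, z)                       # optimal quadrature weights

f = lambda x: np.sin(2 * np.pi * x) + x**2      # test integrand; true integral is 1/3
approx = w @ f(nodes)
```

The same structure applies to any functional for which $L(k(\cdot, x_i))$ can be evaluated; only `kernel_mean` changes.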

## 4 Citations

### A Framework and Benchmark for Deep Batch Active Learning for Regression

- Computer Science, ArXiv
- 2022

An open-source benchmark with 15 large tabular data sets is introduced and used to compare different batch mode deep active learning (BMDAL) methods; a combination of the novel components yields new state-of-the-art results in terms of RMSE while remaining computationally efficient.

### Arbitrary multi-resolution multi-wavelet-based polynomial chaos expansion for data-driven uncertainty quantification

- Computer Science, Reliab. Eng. Syst. Saf.
- 2022

### pyNIROM - A suite of python modules for non-intrusive reduced order modeling of time-dependent problems

- Mathematics, Softw. Impacts
- 2021

### Analysis of target data-dependent greedy kernel algorithms: Convergence rates for $f$-, $f \cdot P$- and $f/P$-greedy

- Computer Science, ArXiv
- 2021

For the first time, convergence rates for target-data-adaptive interpolation that are faster than those given by uniform points are obtained, without the need for any special assumptions on the target function.

## References

Showing 1–10 of 52 references.

### Greedy sparse linear approximations of functionals from nodal data

- Computer Science, Numerical Algorithms
- 2013

This paper shows how to select good nodes adaptively by a computationally cheap greedy method, keeping the error optimal (in the sense defined there) at each incremental step of the node selection.

### A Vectorial Kernel Orthogonal Greedy Algorithm

- Computer Science, Mathematics
- 2013

This work shows that the approximation gain is bounded globally and that, in the multivariate case, the limit functions correspond to a directional Hermite interpolation. It proves algebraic convergence similar to [14], improved by a dimension-dependent factor, and introduces a new a-posteriori error bound.

### Convergence rate of the data-independent P-greedy algorithm in kernel-based approximation

- Computer Science, Mathematics
- 2016

This convergence rate proves that, for kernels of Sobolev spaces, the points selected by the algorithm are asymptotically uniformly distributed, as conjectured in the paper in which the algorithm was introduced.
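The P-greedy rule referenced here selects, at each step, the point maximizing the power function $P_X(x)^2 = k(x,x) - k_X(x)^\top K_X^{-1} k_X(x)$ over a candidate set. A minimal, naive numpy sketch (recomputing the power function from scratch each step, with a hypothetical Gaussian kernel and candidate grid; efficient implementations use Newton-basis updates):

```python
import numpy as np

EPS = 3.0  # Gaussian shape parameter (illustrative choice)

def gauss_kernel(x, y):
    return np.exp(-EPS**2 * (x[:, None] - y[None, :]) ** 2)

def p_greedy(candidates, n_points):
    """Select n_points from candidates by maximizing the power function."""
    selected = []
    for _ in range(n_points):
        if not selected:
            # With no points selected, P(x)^2 = k(x, x) = 1 for the Gaussian kernel.
            power2 = np.ones_like(candidates)
        else:
            X = np.array(selected)
            K = gauss_kernel(X, X)
            kx = gauss_kernel(candidates, X)  # shape (m, n)
            # P(x_i)^2 = 1 - k_X(x_i)^T K^{-1} k_X(x_i) for each candidate x_i
            power2 = 1.0 - np.einsum("ij,ij->i", kx, np.linalg.solve(K, kx.T).T)
        selected.append(candidates[np.argmax(power2)])
    return np.array(selected)

pts = p_greedy(np.linspace(0.0, 1.0, 201), 8)
```

Since the power function vanishes at already-selected points, no point is picked twice; the selected points spread out over the domain, consistent with the asymptotic uniformity result above.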

### A novel class of stabilized greedy kernel approximation algorithms: Convergence, stability & uniform point distribution

- Computer Science, Mathematics, J. Approx. Theory
- 2021

### Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning

- Mathematics, Adv. Comput. Math.
- 2010

The case of infinitely smooth functions is investigated in order to derive error estimates with exponential convergence orders.

### Mercer’s Theorem on General Domains: On the Interaction between Measures, Kernels, and RKHSs

- Mathematics
- 2012

Given a compact metric space X and a strictly positive Borel measure ν on X, Mercer’s classical theorem states that the spectral decomposition of a positive self-adjoint integral operator…

### Approximate Interpolation with Applications to Selecting Smoothing Parameters

- Mathematics, Computer Science, Numerische Mathematik
- 2005

It is shown that, under mild assumptions, a small error on the discrete data set automatically leads to a small error on a larger region. This is applied to spline smoothing to show that a specific, a priori choice of the smoothing parameter is possible and leads to the same approximation order as the classical interpolant.

### Parametric Integration by Magic Point Empirical Interpolation

- Mathematics
- 2015

We derive analyticity criteria for explicit error bounds and an exponential rate of convergence of the magic point empirical interpolation method introduced by Barrault et al. (2004). Furthermore, we…

### Approximation and learning by greedy algorithms

- Computer Science
- 2008

This work improves on the existing theory of convergence rates for both the orthogonal greedy algorithm and the relaxed greedy algorithm, as well as for the forward stepwise projection algorithm, and proves convergence results for a variety of function classes and not simply those that are related to the convex hull of the dictionary.
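The orthogonal greedy algorithm discussed in this reference is, in the finite-dimensional dictionary setting, orthogonal matching pursuit: pick the dictionary element most correlated with the current residual, then re-project the target onto the span of all selected elements. A minimal numpy sketch (the dictionary and target below are hypothetical illustrations, not from the cited work):

```python
import numpy as np

def orthogonal_greedy(D, y, n_steps):
    """Orthogonal greedy algorithm over the columns of D (assumed unit norm):
    greedily select the column most correlated with the residual, then
    recompute the best least-squares fit of y on all selected columns."""
    residual = y.copy()
    idx = []
    coef = np.array([])
    for _ in range(n_steps):
        scores = np.abs(D.T @ residual)
        scores[idx] = -np.inf  # never reselect a column
        idx.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    return idx, coef

# Usage: recover a 2-sparse combination from a random normalized dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)
y = 5.0 * D[:, 3] + 4.0 * D[:, 40]
idx, coef = orthogonal_greedy(D, y, 2)
```

The re-projection step is what distinguishes the orthogonal variant from the pure and relaxed greedy algorithms treated in the same reference.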