# Sampling based approximation of linear functionals in reproducing kernel Hilbert spaces

@article{Santin2021SamplingBA,
  title   = {Sampling based approximation of linear functionals in reproducing kernel Hilbert spaces},
  author  = {Gabriele Santin and Toni Karvonen and Bernard Haasdonk},
  journal = {BIT Numerical Mathematics},
  year    = {2021},
  volume  = {62},
  pages   = {279--310}
}
• Published 1 April 2020 • Computer Science, Mathematics • BIT Numerical Mathematics
In this paper we analyze a greedy procedure to approximate a linear functional defined on a reproducing kernel Hilbert space by nodal values. This procedure computes a quadrature rule which can be applied to general functionals. For a large class of functionals, which includes integration functionals and other interesting cases but does not include differentiation, we prove convergence results for the approximation by means of quasi-uniform and greedy points which generalize in various ways…
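The construction described in the abstract, representing a functional by nodal values, can be illustrated with a minimal sketch. Assuming the Brownian-motion kernel k(x, y) = min(x, y) on [0, 1] and the integration functional L(f) = ∫₀¹ f(x) dx (an illustrative choice, not the paper's setting in full generality), the RKHS-optimal weights for a fixed node set solve a linear system with the kernel Gram matrix:

```python
import numpy as np

# Hedged sketch: approximate L(f) = ∫_0^1 f(x) dx in the RKHS of the
# Brownian-motion kernel k(x, y) = min(x, y) by a rule sum_j w_j f(x_j).
# For fixed nodes the RKHS-optimal weights solve K w = l, with
# K_ij = k(x_i, x_j) and l_i = L(k(., x_i)).
def quadrature_weights(nodes):
    K = np.minimum.outer(nodes, nodes)   # Gram matrix of the min-kernel
    l = nodes - nodes**2 / 2             # ∫_0^1 min(x, x_i) dx, closed form
    return np.linalg.solve(K, l)

nodes = np.linspace(0.1, 0.9, 5)         # illustrative grid, not greedily chosen
w = quadrature_weights(nodes)
approx = w @ nodes**2                    # approximates ∫_0^1 x^2 dx = 1/3
```

With these five nodes the rule integrates x² with absolute error below 0.01; the paper's contribution concerns choosing the nodes greedily rather than on a fixed grid, while keeping the computation cheap.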
## Citations (4)

### A Framework and Benchmark for Deep Batch Active Learning for Regression

• Computer Science • ArXiv • 2022
An open-source benchmark with 15 large tabular data sets is introduced and used to compare different BMDAL methods; a combination of the novel components yields new state-of-the-art results in terms of RMSE while remaining computationally efficient.

### Analysis of target data-dependent greedy kernel algorithms: Convergence rates for $f$-, $f \cdot P$- and $f/P$-greedy

• Computer Science • ArXiv • 2021
For the first time, new convergence rates are obtained for target-data-adaptive interpolation that are faster than the ones given by uniform points, without the need for any special assumptions on the target function.

## References

Showing 1–10 of 52 references

### A Newton basis for kernel spaces

• Computer Science, Mathematics • J. Approx. Theory • 2009

### Greedy sparse linear approximations of functionals from nodal data

This paper shows how to select good nodes adaptively by a computationally cheap greedy method, keeping the error optimal in the above sense for each incremental step of the node selection.

### A Vectorial Kernel Orthogonal Greedy Algorithm

• Computer Science, Mathematics • 2013
This work shows that the approximation gain is bounded globally and that, in the multivariate case, the limit functions correspond to a directional Hermite interpolation. It proves algebraic convergence similar to [14], improved by a dimension-dependent factor, and introduces a new a-posteriori error bound.

### Convergence rate of the data-independent P-greedy algorithm in kernel-based approximation

• Computer Science, Mathematics • 2016
This convergence result proves that, for kernels of Sobolev spaces, the points selected by the algorithm are asymptotically uniformly distributed, as conjectured in the paper where the algorithm was introduced.
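The P-greedy rule analyzed in this reference admits a short sketch: at each step, select the candidate point where the power function of the current node set is largest. The Gaussian kernel, the shape parameter, and the candidate grid below are assumptions made for illustration only:

```python
import numpy as np

# Hedged sketch of P-greedy selection: repeatedly pick the point maximizing
# the squared power function P_X(x)^2 = k(x, x) - k_X(x)^T K_X^{-1} k_X(x).
def gauss(x, y, eps=3.0):
    return np.exp(-(eps * (np.asarray(x)[:, None] - np.asarray(y)[None, :])) ** 2)

def p_greedy(candidates, n_points):
    selected = []
    for _ in range(n_points):
        if not selected:
            power2 = np.ones_like(candidates)      # k(x, x) = 1 everywhere
        else:
            K = gauss(selected, selected)          # Gram matrix of chosen nodes
            Kc = gauss(candidates, selected)       # cross-kernel matrix
            power2 = 1.0 - np.einsum("ij,ij->i", Kc, np.linalg.solve(K, Kc.T).T)
        selected.append(candidates[np.argmax(power2)])
    return np.array(selected)

pts = p_greedy(np.linspace(0.0, 1.0, 201), 6)
```

Because the power function vanishes at already-selected points, each new point lands in the largest remaining gap, which is the mechanism behind the asymptotic uniformity proved in this reference.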

### Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning

• Mathematics • 2010
The case of infinitely smooth functions is investigated, in order to derive error estimates with exponential convergence orders.

### Mercer’s Theorem on General Domains: On the Interaction between Measures, Kernels, and RKHSs

• Mathematics • 2012
Given a compact metric space X and a strictly positive Borel measure ν on X, Mercer’s classical theorem states that the spectral decomposition of a positive self-adjoint integral operator…

### Approximate Interpolation with Applications to Selecting Smoothing Parameters

• Mathematics, Computer Science • Numerische Mathematik • 2005
It is shown that a small error on the discrete data set leads, under mild assumptions, automatically to a small error on a larger region. This is applied to spline smoothing to show that a specific, a priori choice of the smoothing parameter is possible and leads to the same approximation order as the classical interpolant.

### Parametric Integration by Magic Point Empirical Interpolation

• Mathematics • 2015
We derive analyticity criteria for explicit error bounds and an exponential rate of convergence of the magic point empirical interpolation method introduced by Barrault et al. (2004). Furthermore, we…

### Approximation and learning by greedy algorithms

• Computer Science • 2008
This work improves on the existing theory of convergence rates for both the orthogonal greedy algorithm and the relaxed greedy algorithm, as well as for the forward stepwise projection algorithm, and proves convergence results for a variety of function classes and not simply those that are related to the convex hull of the dictionary.