A large number of learning algorithms, for example spectral clustering, kernel Principal Components Analysis, and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus, for the analysis of such algorithms, it is an important problem to be able to assess …
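As a concrete illustration of this idea, here is a minimal NumPy sketch (not taken from the paper) that estimates eigenvalues and eigenvectors of a kernel operator from a sample by diagonalizing the normalized kernel matrix K/n; the Gaussian kernel and the bandwidth `sigma` are arbitrary choices made for the example.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gaussian similarity matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(X**2, axis=1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * sigma**2))

def empirical_spectrum(X, sigma=1.0):
    """Eigenvalues/eigenvectors of K/n approximate those of the kernel integral operator."""
    n = X.shape[0]
    K = gaussian_kernel(X, sigma)
    evals, evecs = np.linalg.eigh(K / n)   # symmetric matrix, so eigh is appropriate
    order = np.argsort(evals)[::-1]        # sort in decreasing order
    return evals[order], evecs[:, order]

# Example: spectrum of a two-cluster data set
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
evals, evecs = empirical_spectrum(X, sigma=1.0)
print(evals[:5])  # leading eigenvalues; top eigenvectors approximate the eigenfunctions
```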
In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of …
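For reference, a small sketch of the kind of convex losses this refers to, written as functions of the margin y·f(x); the specific losses shown (square, hinge, logistic) are standard examples and not necessarily the ones analyzed in the letter.

```python
import numpy as np

# Common convex losses as functions of the margin m = y * f(x).
def square_loss(m):
    return (1.0 - m) ** 2

def hinge_loss(m):
    return np.maximum(0.0, 1.0 - m)

def logistic_loss(m):
    return np.log1p(np.exp(-m))

margins = np.linspace(-2, 2, 5)
for name, loss in [("square", square_loss), ("hinge", hinge_loss), ("logistic", logistic_loss)]:
    print(name, np.round(loss(margins), 3))
```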
Within the framework of statistical learning theory we …
We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same …
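A minimal sketch of this spectral-regularization viewpoint, assuming the standard kernel formulation in which the coefficient vector is obtained by applying a filter to the spectrum of K/n, i.e. c = g_λ(K/n)(y/n); the two filters shown (Tikhonov and spectral cut-off) are standard examples, and the function name and toy data are illustrative only.

```python
import numpy as np

def spectral_filter_regression(K, y, lam, filter_name="tikhonov"):
    """Coefficients c of f(x) = sum_i c_i k(x, x_i), computed by filtering the
    spectrum of the kernel matrix: c = V g_lam(S) V^T (y/n), where K/n = V S V^T."""
    n = K.shape[0]
    s, V = np.linalg.eigh(K / n)
    s = np.clip(s, 0.0, None)                    # guard against tiny negative eigenvalues
    if filter_name == "tikhonov":
        g = 1.0 / (s + lam)                      # g_lam(s) = 1 / (s + lam)
    elif filter_name == "cutoff":                # spectral cut-off / truncated eigendecomposition
        g = np.where(s > lam, 1.0 / np.where(s > lam, s, 1.0), 0.0)
    else:
        raise ValueError(filter_name)
    return V @ (g * (V.T @ (y / n)))

# Usage with a linear kernel on toy data; Tikhonov here coincides with kernel ridge regression
X = np.random.randn(80, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(80)
K = X @ X.T
c = spectral_filter_regression(K, y, lam=0.1)
print("predictions:", np.round((K @ c)[:3], 2), " targets:", np.round(y[:3], 2))
```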
Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy between certain learning algorithms and regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning …
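For concreteness, here is a short sketch of Tikhonov regularization in its simplest least-squares form (ridge regression); this is a generic implementation under the usual quadratic penalty, not the specific scheme analyzed in any one of these papers.

```python
import numpy as np

def tikhonov_ls(X, y, lam):
    """Tikhonov-regularized least squares (ridge regression):
    minimizes (1/n) ||X w - y||^2 + lam ||w||^2, solved in closed form."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

# Toy usage: the penalty shrinks the estimate toward zero, stabilizing the inversion
X = np.random.randn(100, 5)
w_true = np.array([2.0, 0.0, -1.0, 0.5, 0.0])
y = X @ w_true + 0.1 * np.random.randn(100)
print(np.round(tikhonov_ls(X, y, lam=0.01), 2))
```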
We investigate the problem of model selection for learning algorithms that depend on a continuous parameter. We propose a model selection procedure based on a worst-case analysis and a data-independent choice of the parameter. For the regularized least-squares algorithm we bound the generalization error of the solution by a quantity depending on a few known constants …
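A hypothetical illustration of what a data-independent parameter choice looks like in practice: the rule λ_n = C·n^(-1/2) below is a placeholder (the constant and exponent are not the ones derived in the paper), combined with standard regularized least squares in the kernel setting.

```python
import numpy as np

def a_priori_lambda(n, C=1.0, exponent=0.5):
    """Data-independent rule lam_n = C * n**(-exponent); placeholder constants."""
    return C * n ** (-exponent)

def krls(K, y, lam):
    """Regularized least squares in the kernel setting: c = (K + n*lam*I)^{-1} y."""
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

# Usage: the parameter depends only on the sample size, never on the observed data values
X = np.random.randn(60, 2)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(60)
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))  # Gaussian kernel
lam = a_priori_lambda(n=60)
c = krls(K, y, lam)
print("lambda:", lam, " first coefficients:", np.round(c[:3], 3))
```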
In this paper we show that a large class of regularization methods designed for solving ill-posed inverse problems gives rise to novel learning algorithms. All these algorithms are consistent kernel methods which can be easily implemented. The intuition behind our approach is that, by looking at regularization from a filter function perspective, filtering …
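As one example of the filter-function perspective, the sketch below implements Landweber iteration, i.e. gradient descent on the empirical least-squares risk, where the number of iterations plays the role of the regularization parameter; the step size and the toy data are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def landweber(K, y, n_iter=100, step=None):
    """Landweber iteration: c_{t+1} = c_t + (step/n) * (y - K c_t).
    Early stopping (small n_iter) acts as regularization."""
    n = K.shape[0]
    if step is None:
        step = 1.0 / np.linalg.eigvalsh(K / n)[-1]   # step below 2 / largest eigenvalue of K/n
    c = np.zeros(n)
    for _ in range(n_iter):
        c = c + (step / n) * (y - K @ c)
    return c

# Usage on toy data with a linear kernel; more iterations means less regularization
X = np.random.randn(80, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(80)
K = X @ X.T
for t in (5, 50, 500):
    c = landweber(K, y, n_iter=t)
    print(t, "iterations, training error:", round(float(np.mean((K @ c - y) ** 2)), 4))
```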
The choice of the regularization parameter is a fundamental problem in learning theory, since the performance of most supervised algorithms crucially depends on the choice of one or more such parameters. In particular, a key theoretical issue concerns the amount of prior knowledge needed to choose the regularization parameter in order to obtain good learning …
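By contrast with data-independent rules, a common data-driven strategy is hold-out validation; the sketch below is a generic illustration (the split fraction, kernel, and grid of λ values are arbitrary choices), not the procedure proposed in the paper.

```python
import numpy as np

def holdout_select_lambda(K, y, lambdas, train_frac=0.7, seed=0):
    """Pick the regularization parameter by hold-out validation: fit regularized
    least squares on one split and measure squared error on the other."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.permutation(n)
    tr, va = idx[: int(train_frac * n)], idx[int(train_frac * n):]
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        c = np.linalg.solve(K[np.ix_(tr, tr)] + len(tr) * lam * np.eye(len(tr)), y[tr])
        err = np.mean((K[np.ix_(va, tr)] @ c - y[va]) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam, best_err

# Usage with a Gaussian kernel on toy data
X = np.random.randn(120, 2)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(120)
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
K = np.exp(-sq / 2.0)
lam, err = holdout_select_lambda(K, y, lambdas=np.logspace(-6, 0, 7))
print("selected lambda:", lam, " validation error:", round(float(err), 4))
```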