
- Lorenzo Rosasco, Mikhail Belkin, Ernesto De Vito
- Journal of Machine Learning Research
- 2010

A large number of learning algorithms, for example, spectral clustering, kernel Principal Components Analysis and many manifold methods are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus for the analysis of algorithms, it is an important problem to be able to assess…
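The empirical counterpart of such an operator is the kernel matrix scaled by the sample size, whose eigenvalues can be computed directly. A minimal sketch of that estimation step, assuming a Gaussian kernel (the function name and the `gamma` parameter are illustrative, not from the paper):

```python
import numpy as np

def empirical_kernel_eigenvalues(X, gamma=1.0):
    """Eigenvalues of the empirical operator (1/n) K, where
    K_ij = exp(-gamma * ||x_i - x_j||^2) is a Gaussian kernel matrix."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)
    # K is symmetric positive semidefinite, so eigvalsh applies;
    # sort the spectrum in descending order.
    return np.sort(np.linalg.eigvalsh(K / X.shape[0]))[::-1]

rng = np.random.default_rng(0)
vals = empirical_kernel_eigenvalues(rng.normal(size=(200, 2)))
```

Since the Gaussian kernel has `K_ii = 1`, the eigenvalues of `K/n` always sum to 1, which gives a quick sanity check on the computation.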

- Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, Alessandro Verri
- Neural Computation
- 2004

In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of…

- Andrea Caponnetto, Ernesto De Vito
- Foundations of Computational Mathematics
- 2007

- Christine De Mol, Ernesto De Vito, Lorenzo Rosasco
- Journal of Complexity
- 2009

Within the framework of statistical learning theory we…

- L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, Alessandro Verri
- Neural Computation
- 2008

We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same…
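The common pattern behind these methods is to apply a filter function to the spectrum of the kernel matrix instead of inverting it directly. A minimal sketch under that reading, showing the Tikhonov and truncated-SVD filters (function names and the use of `lam` as both a regularization weight and a spectral cutoff are illustrative choices, not the paper's notation):

```python
import numpy as np

def spectral_fit(K, y, lam, filt="tikhonov"):
    """Regularized coefficients c = g_lam(K) y, where the filter g_lam
    acts on the eigenvalues of the kernel matrix K = U diag(s) U^T."""
    s, U = np.linalg.eigh(K)
    if filt == "tikhonov":
        g = 1.0 / (s + lam)                  # Tikhonov filter: g(s) = 1/(s + lam)
    else:
        g = np.where(s > lam, 1.0 / s, 0.0)  # truncated SVD: drop eigenvalues <= lam
    return U @ (g * (U.T @ y))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = X @ X.T                                  # linear-kernel Gram matrix (PSD)
y = rng.normal(size=20)
c = spectral_fit(K, y, 0.1)
```

With the Tikhonov filter this reduces exactly to solving `(K + lam * I) c = y`; swapping the filter function is all it takes to obtain a different regularization scheme.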

- Ernesto De Vito, Lorenzo Rosasco, Andrea Caponnetto, Umberto De Giovannini, Francesca Odone
- Journal of Machine Learning Research
- 2005

Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning…
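In the learning setting, Tikhonov regularization corresponds to kernel ridge regression: a linear system in the kernel matrix, followed by a kernel expansion at test points. A minimal sketch, assuming a Gaussian kernel and the conventional `n * lam` scaling of the regularizer (names and parameters here are illustrative):

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gram matrix K_ij = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_test, lam, gamma=1.0):
    """Tikhonov regularization in an RKHS (kernel ridge regression):
    solve (K + n*lam*I) c = y, then predict f(x) = sum_i c_i k(x, x_i)."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    c = np.linalg.solve(K + n * lam * np.eye(n), y)
    return gaussian_kernel(X_test, X, gamma) @ c

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
pred = krr_fit_predict(X, y, X, lam=1e-6)
```

As `lam` shrinks the fitted function interpolates the training targets; the regularization parameter trades that fit against the smoothness of the solution.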

- Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco
- Foundations of Computational Mathematics
- 2005

We investigate the problem of model selection for learning algorithms depending on a continuous parameter. We propose a model selection procedure based on a worst-case analysis and a data-independent choice of the parameter. For the regularized least-squares algorithm we bound the generalization error of the solution by a quantity depending on a few known constants…

- Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory (CSAIL)
- 2005

We develop a theoretical analysis of generalization performances of regularized least-squares on…

- Ernesto De Vito, Lorenzo Rosasco, Alessandro Verri
- 2006

In this paper we show that a large class of regularization methods designed for solving ill-posed inverse problems gives rise to novel learning algorithms. All these algorithms are consistent kernel methods which can be easily implemented. The intuition behind our approach is that, by looking at regularization from a filter function perspective, filtering…

- Ernesto De Vito, Lorenzo Rosasco, Andrea Caponnetto, Michele Piana, Alessandro Verri
- Journal of Machine Learning Research
- 2004

In regularized kernel methods, the solution of a learning problem is found by minimizing functionals consisting of the sum of a data term and a complexity term. In this paper we investigate some properties of a more general form of the above functionals in which the data term corresponds to the expected risk. First, we prove a quantitative version of the…