
- Andrea Caponnetto, Ernesto De Vito
- Foundations of Computational Mathematics
- 2007

- Lorenzo Rosasco, Mikhail Belkin, Ernesto De Vito
- Journal of Machine Learning Research
- 2010

A large number of learning algorithms, for example spectral clustering, kernel Principal Component Analysis, and many manifold methods, are based on estimating eigenvalues and eigenfunctions of operators defined by a similarity function or a kernel, given empirical data. Thus for the analysis of algorithms, it is an important problem to be able to assess…
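
The empirical eigen-estimation step this abstract refers to can be sketched as follows; the Gaussian kernel, sample size, and data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    # Pairwise similarities K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))      # empirical sample (illustrative)
K = gaussian_kernel_matrix(X)

# The eigenvalues of (1/n) K approximate the top eigenvalues of the
# integral operator defined by the kernel and the data distribution.
eigvals = np.linalg.eigvalsh(K / len(X))[::-1]   # sorted descending
print(eigvals[:5])
```

Assessing how close these empirical eigenvalues are to those of the underlying operator is exactly the kind of question the paper analyzes.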

- C. Carmeli, E. De Vito, A. Toigo
- 2008

We characterize the reproducing kernel Hilbert spaces whose elements are p-integrable functions in terms of the boundedness of the integral operator whose kernel is the reproducing kernel. Moreover, for p = 2 we show that the spectral decomposition of this integral operator gives a complete description of the reproducing kernel.
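
A finite-sample analogue of the p = 2 case above: the spectral decomposition of the kernel matrix (the empirical counterpart of the integral operator) recovers the kernel itself. The Laplacian kernel and the sample are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 1))
K = np.exp(-np.abs(X - X.T))        # Laplacian kernel Gram matrix (illustrative)

lam, U = np.linalg.eigh(K)          # K = U diag(lam) U^T
K_rebuilt = (U * lam) @ U.T         # Mercer-style expansion: sum_i lam_i u_i u_i^T

print(np.max(np.abs(K - K_rebuilt)))   # reconstruction error, near machine precision
```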

- Lorenzo Rosasco, Ernesto De Vito, Andrea Caponnetto, Michele Piana, Alessandro Verri
- Neural Computation
- 2004

In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of…
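
The convexity assumption mentioned above is indeed satisfied by the standard losses; a small numerical check (the specific losses and test points are illustrative, not the paper's notation):

```python
import numpy as np

# Common convex loss functions V(y, f) as functions of the prediction f
losses = {
    "square":   lambda y, f: (y - f) ** 2,
    "hinge":    lambda y, f: np.maximum(0.0, 1 - y * f),
    "logistic": lambda y, f: np.log1p(np.exp(-y * f)),
}

# Midpoint convexity check in f at label y = +1
f1, f2 = -2.0, 3.0
for name, V in losses.items():
    mid = V(1.0, (f1 + f2) / 2)
    avg = (V(1.0, f1) + V(1.0, f2)) / 2
    print(name, bool(mid <= avg + 1e-12))
```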

- Christine De Mol, Ernesto De Vito, Lorenzo Rosasco
- J. Complexity
- 2009

Within the framework of statistical learning theory we…

- L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, Alessandro Verri
- Neural Computation
- 2008

We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same…
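
A minimal sketch of the filter-function view behind spectral regularization: a filter g_lam is applied to the eigenvalues of the normalized kernel matrix, with Tikhonov and truncated SVD as two members of the class. The Gaussian kernel, data, and regularization parameter are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def spectral_filter_fit(K, y, lam, flt="tikhonov"):
    """Coefficients c of f(x) = sum_i c_i k(x, x_i), obtained by applying
    a filter function g_lam to the eigenvalues of K / n."""
    n = len(y)
    sig, U = np.linalg.eigh(K / n)
    if flt == "tikhonov":
        g = 1.0 / (sig + lam)                  # Tikhonov: g(s) = 1 / (s + lam)
    elif flt == "tsvd":                        # truncated SVD: keep s > lam
        g = np.where(sig > lam, 1.0 / np.maximum(sig, 1e-12), 0.0)
    else:
        raise ValueError(flt)
    return U @ (g * (U.T @ y)) / n

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=80)
K = np.exp(-(X - X.T) ** 2 / 0.5)             # Gaussian kernel (illustrative)

c = spectral_filter_fit(K, y, lam=1e-3)
print(np.mean((K @ c - y) ** 2))              # training error of the regularized fit
```

Swapping the filter while keeping everything else fixed is what makes these algorithms "easily implemented": only the function of the eigenvalues changes.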

- Ernesto De Vito, Lorenzo Rosasco, Andrea Caponnetto, Umberto De Giovannini, Francesca Odone
- Journal of Machine Learning Research
- 2005

Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy between certain learning algorithms and regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning…
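
The Tikhonov scheme mentioned above reduces, in the kernel setting, to a single regularized linear solve; this sketch uses an illustrative kernel, data, and regularization parameter of my choosing:

```python
import numpy as np

def tikhonov_regression(K, y, lam):
    # Regularized least squares in a RKHS: solve (K + n*lam*I) c = y,
    # giving the estimator f(x) = sum_i c_i k(x, x_i). This is Tikhonov
    # regularization applied to the learning problem viewed as an
    # ill-posed inverse problem.
    n = len(y)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(60, 1))
y = X[:, 0] ** 2 + 0.05 * rng.normal(size=60)
K = np.minimum(X, X.T) + 1.0          # illustrative PSD kernel: min(s, t) + 1

c = tikhonov_regression(K, y, lam=1e-4)
print(np.mean((K @ c - y) ** 2))      # training error
```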

- Ernesto De Vito, Lorenzo Rosasco, Alessandro Toigo
- NIPS
- 2010

In this paper we consider the problem of learning from data the support of a probability distribution when the distribution does not have a density (with respect to some reference measure). We propose a new class of regularized spectral estimators based on a new notion of reproducing kernel Hilbert space, which we call "completely regular". Completely…

- Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco
- Foundations of Computational Mathematics
- 2005

We investigate the problem of model selection for learning algorithms depending on a continuous parameter. We propose a model selection procedure based on a worst-case analysis and a data-independent choice of the parameter. For the regularized least-squares algorithm, we bound the generalization error of the solution by a quantity depending on a few known constants…

In this paper we show that a large class of regularization methods designed for solving ill-posed inverse problems gives rise to novel learning algorithms. All these algorithms are consistent kernel methods which can be easily implemented. The intuition behind our approach is that, by looking at regularization from a filter function perspective, filtering…