In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.

We show that a notion of regularization, defined along the lines of what is usually done for ill-posed inverse problems, allows us to derive learning algorithms that are consistent and achieve fast convergence rates.
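The prototypical spectral regularizer from inverse-problem theory is Tikhonov regularization, which in the kernel setting reduces to kernel ridge regression. The following is a minimal illustrative sketch (not the paper's algorithm); the bandwidth and regularization values are arbitrary choices for the toy data.

```python
import numpy as np

def tikhonov_solve(K, y, lam):
    """Tikhonov-regularized solution: coefficients c of (K + lam*n*I) c = y.

    The penalty lam stabilizes the otherwise ill-conditioned inversion of K,
    mirroring regularization of an ill-posed inverse problem."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)

# Gaussian kernel matrix; sigma = 0.5 is an illustrative choice
sigma = 0.5
K = np.exp(-((X - X.T) ** 2) / (2 * sigma**2))
c = tikhonov_solve(K, y, lam=1e-3)
pred = K @ c  # fitted values on the training points
```

Larger `lam` shrinks the solution more aggressively, trading data fit for stability; the consistency and rate results concern how `lam` should decay with the sample size.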

We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered.
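Plain Nyström subsampling replaces the full n × n kernel matrix with a rank-m approximation built from m uniformly sampled landmark columns. A hedged sketch of the basic construction (function names are ours, not the paper's):

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_approx(X, m, sigma=1.0, seed=0):
    """Rank-m Nystrom approximation K ~ C W^+ C^T from m random landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    C = gauss_kernel(X, X[idx], sigma)   # n x m cross-kernel block
    W = C[idx]                           # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T, idx

X = np.random.default_rng(1).standard_normal((200, 3))
K_nys, idx = nystrom_approx(X, m=50)
```

Only the n × m block `C` is ever formed, so storage and cost scale with m rather than n; the learning bounds concern how small m can be while preserving statistical accuracy.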

In this paper we study a family of gradient descent algorithms to approximate the regression function from reproducing kernel Hilbert spaces (RKHSs), the family being characterized by a…
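As an illustrative sketch (not the paper's exact family): gradient descent on the empirical squared loss in an RKHS keeps the iterate in the span of the kernel sections, so it reduces to an iteration on coefficients, with early stopping acting as the regularizer.

```python
import numpy as np

def kernel_gd(K, y, eta=1.0, steps=100):
    """Gradient descent on the empirical squared loss in an RKHS.

    The iterate f_t = sum_i c_i k(x_i, .) stays in the span of the kernel
    sections, so the update is c <- c - (eta/n) (K c - y)."""
    c = np.zeros(len(y))
    for _ in range(steps):
        c -= (eta / len(y)) * (K @ c - y)
    return c

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 1))
y = np.sign(X[:, 0]) * 1.0
K = np.exp(-((X - X.T) ** 2) / (2 * 0.3**2))  # Gaussian kernel, toy bandwidth
c_few = kernel_gd(K, y, steps=5)     # few steps: heavily regularized fit
c_many = kernel_gd(K, y, steps=500)  # many steps: closer data fit
```

The number of steps plays the role of the regularization parameter: more iterations shrink the training residual but eventually overfit, which is why stopping rules matter for such families.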

We use a technique based on concentration inequalities for Hilbert spaces to provide new, much simplified proofs of a number of results in spectral approximation.

In this lecture we introduce a class of learning algorithms, collectively called manifold regularization algorithms, suited for predicting/classifying data embedded in high-dimensional spaces. We…
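The common ingredient of these algorithms is a graph Laplacian built from the data, which turns "smoothness along the data manifold" into the quadratic penalty f^T L f. A hedged illustration of that penalty alone (not a full manifold regularization algorithm):

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from Gaussian edge weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W

X = np.linspace(0, 1, 20)[:, None]
L = graph_laplacian(X, sigma=0.1)
f_smooth = X[:, 0]                  # varies slowly between neighbors
f_rough = (-1.0) ** np.arange(20)   # alternates sign between neighbors
# f^T L f = (1/2) sum_ij W_ij (f_i - f_j)^2 is far smaller for f_smooth
```

Adding `f^T L f` (weighted by a second regularization parameter) to a standard RKHS objective is what biases the learned function to vary slowly across densely connected regions of the graph, including over unlabeled points.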

In this paper, we propose a new approach based on the idea that the importance of a variable, while learning a non-linear functional relation, can be captured by the corresponding partial derivative.
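A minimal sketch of this idea (our own toy version, with central finite differences standing in for whatever derivative estimator the paper uses): score each variable by the average magnitude of the learned function's partial derivative over the data.

```python
import numpy as np

def derivative_importance(f, X, eps=1e-4):
    """Score variable j by the mean |df/dx_j| over the sample, estimated
    with central finite differences on the learned predictor f."""
    n, d = X.shape
    scores = np.zeros(d)
    for j in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        scores[j] = np.abs((f(Xp) - f(Xm)) / (2 * eps)).mean()
    return scores

# Toy "learned" function that truly depends only on variables 0 and 1
f = lambda X: np.sin(X[:, 0]) + 2 * X[:, 1]
X = np.random.default_rng(2).uniform(-1, 1, (100, 4))
scores = derivative_importance(f, X)  # large for variables 0 and 1, zero for 2 and 3
```

Variables on which the function does not depend get an exactly zero derivative everywhere, so the score cleanly separates relevant from irrelevant inputs; in the learning setting the same quantity is computed on the estimated function rather than the truth.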

The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning.