Holographic Embeddings of Knowledge Graphs
Holographic embeddings (HolE) are proposed to learn compositional vector-space representations of entire knowledge graphs; HolE outperforms state-of-the-art methods for link prediction on knowledge-graph and relational-learning benchmark datasets.
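HolE composes the subject and object embeddings of a triple via circular correlation, which can be computed efficiently with the FFT. The sketch below illustrates this scoring function; the function names and dimensions are illustrative, not from the paper's code.

```python
import numpy as np

def circular_correlation(a, b):
    # [a * b]_k = sum_i a_i * b_{(i+k) mod d}, computed via the FFT
    # identity: a * b = IFFT(conj(FFT(a)) . FFT(b)).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(r, s, o):
    # HolE scores a triple (s, r, o) as sigma(r^T (s * o));
    # here we return the raw score before the sigmoid.
    return r @ circular_correlation(s, o)
```

Because circular correlation is non-commutative, the score distinguishes `(s, r, o)` from `(o, r, s)`, unlike a plain inner product of embeddings.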
Kernels for Vector-Valued Functions: a Review
- Mauricio A Álvarez, L. Rosasco, Neil D. Lawrence
- Computer Science, Found. Trends Mach. Learn.
- 30 June 2011
This monograph reviews different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
Less is More: Nyström Computational Regularization
A simple incremental variant of Nyström kernel regularized least squares is suggested, in which the subsampling level implements a form of computational regularization: it controls regularization and computation at the same time.
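One common form of Nyström kernel regularized least squares restricts the solution to a random subset of m landmark points, so that m jointly controls the statistical and computational cost. The sketch below is a minimal plain-solve version under assumed names and a Gaussian kernel, not the paper's incremental algorithm.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_krls(X, y, m, lam=1e-4, sigma=1.0, rng=None):
    # Subsample m landmark points; the subsampling level m acts as a
    # computational regularizer alongside the penalty lam.
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)
    Xm = X[idx]
    Knm = gaussian_kernel(X, Xm, sigma)   # n x m cross-kernel
    Kmm = gaussian_kernel(Xm, Xm, sigma)  # m x m landmark kernel
    # Solve (Knm^T Knm + n * lam * Kmm) alpha = Knm^T y.
    A = Knm.T @ Knm + len(X) * lam * Kmm
    alpha = np.linalg.solve(A, Knm.T @ y)
    return lambda Xt: gaussian_kernel(Xt, Xm, sigma) @ alpha
```

Training cost drops from O(n^3) for exact KRLS to O(n m^2 + m^3), which is the computational side of the trade-off the paper analyzes.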
On regularization algorithms in learning theory
On Learning with Integral Operators
A technique based on a concentration inequality for Hilbert spaces is used to provide new, much simplified proofs for a number of results in the spectral approximation of the graph Laplacian operator, extending and strengthening results from von Luxburg et al. (2008).
Generalization Properties of Learning with Random Features
The results shed light on the statistical-computational trade-offs in large-scale kernelized learning, showing the potential effectiveness of random features in reducing computational complexity while retaining optimal generalization properties.
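The random features analyzed in this line of work are typically random Fourier features in the style of Rahimi and Rechtm, which approximate a shift-invariant kernel by an explicit finite-dimensional map. A minimal sketch for the Gaussian kernel, with assumed parameter names:

```python
import numpy as np

def random_fourier_features(X, D, sigma=1.0, rng=None):
    # Approximate the Gaussian kernel k(x, y) = exp(-||x-y||^2 / (2 sigma^2))
    # by z(x)^T z(y), with z(x) = sqrt(2/D) * cos(W x + b),
    # W ~ N(0, sigma^{-2} I), b ~ Uniform[0, 2*pi].
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

Linear regression on these D-dimensional features costs O(n D^2) instead of the O(n^3) of exact kernel methods; the paper's point is how small D can be while keeping optimal rates.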
On Early Stopping in Gradient Descent Learning
A family of gradient descent algorithms for approximating the regression function from reproducing kernel Hilbert spaces (RKHSs) is studied, the family being characterized by a polynomially decreasing sequence of step sizes (or learning rates).
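In this setting, gradient descent on the empirical least-squares risk in an RKHS reduces to an iteration on the coefficient vector, and the number of iterations, rather than an explicit penalty, acts as the regularization parameter. A minimal constant-step sketch, with assumed names (the paper studies decreasing step sizes):

```python
import numpy as np

def kernel_gd(K, y, t, step=None):
    # Gradient descent for the RKHS least-squares problem:
    # alpha_{k+1} = alpha_k + (step / n) * (y - K alpha_k).
    # Stopping at iteration t plays the role of regularization.
    n = len(y)
    if step is None:
        step = n / np.linalg.norm(K, 2)  # keeps (step/n)*||K|| <= 1
    alpha = np.zeros(n)
    for _ in range(t):
        alpha += (step / n) * (y - K @ alpha)
    return alpha
```

Running the iteration to convergence interpolates the (noisy) data; stopping early keeps the estimator in a low-complexity ball, which is the early-stopping regularization effect analyzed in the paper.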
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
- T. Poggio, H. Mhaskar, L. Rosasco, B. Miranda, Q. Liao
- Computer Science, Int. J. Autom. Comput.
- 2 November 2016
An emerging body of theoretical results on deep learning is reviewed, including the conditions under which it can be exponentially better than shallow learning, together with new results, open problems and conjectures.
This lecture introduces manifold regularization in the framework of semi-supervised learning, a generalization of the supervised learning setting in which the training set may consist of unlabeled as well as labeled examples.
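A standard instantiation of manifold regularization is Laplacian-regularized least squares (LapRLS), which fits the labeled examples while penalizing functions that vary sharply across the graph built on labeled and unlabeled points together. The sketch below follows the usual closed-form solution; the names and the choice of supplying the Laplacian directly are assumptions for illustration.

```python
import numpy as np

def laprls(K, y, labeled, lam_a=1e-2, lam_i=1e-2, L=None):
    # Laplacian-regularized least squares: all n points (labeled and
    # unlabeled) enter through the kernel matrix K and graph Laplacian L,
    # but only the labeled subset enters the data-fit term.
    n = K.shape[0]
    J = np.zeros((n, n))
    J[labeled, labeled] = 1.0          # diagonal selector of labeled points
    yfull = np.zeros(n)
    yfull[labeled] = y
    l = len(labeled)
    # Solve (J K + l*lam_a*I + (l*lam_i/n^2) L K) alpha = J y.
    A = J @ K + l * lam_a * np.eye(n) + (l * lam_i / n ** 2) * (L @ K)
    return np.linalg.solve(A, yfull)
```

With `lam_i = 0` (or an all-labeled training set and a zero Laplacian) the scheme collapses to ordinary kernel regularized least squares, which makes the supervised setting a special case of this semi-supervised formulation.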
Nonparametric sparsity and regularization
- L. Rosasco, S. Villa, S. Mosci, M. Santoro, A. Verri
- Computer Science, J. Mach. Learn. Res.
- 13 August 2012
This work proposes a new notion of nonparametric sparsity and a corresponding least-squares regularization scheme, and shows that the resulting learning algorithm corresponds to a minimization problem that can be provably solved by an iterative procedure.