Holographic Embeddings of Knowledge Graphs
TLDR: We propose holographic embeddings (HOLE) to learn compositional vector space representations of entire knowledge graphs.
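As a rough illustration of the compositional representation behind HOLE, the sketch below scores a knowledge-graph triple by the circular correlation of subject and object embeddings (computed via the FFT), matched against the relation embedding. The embedding dimension, random vectors, and sigmoid link are illustrative assumptions; the paper's training objective is not shown.

```python
import numpy as np

def circular_correlation(a, b):
    # (a * b)_k = sum_i a_i * b_{(i + k) mod d}, computed via the FFT.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(e_s, e_o, r):
    # HOLE-style score: match the relation embedding against the circular
    # correlation of subject and object embeddings, squashed to (0, 1).
    return 1.0 / (1.0 + np.exp(-r @ circular_correlation(e_s, e_o)))

rng = np.random.default_rng(0)
d = 16                              # illustrative embedding dimension
e_s, e_o, r = rng.normal(size=(3, d))
print(hole_score(e_s, e_o, r))      # plausibility score for the triple (s, r, o)
```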
Kernels for Vector-Valued Functions: a Review
TLDR: In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
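One of the simplest constructions covered by such reviews is the separable multi-output kernel, where a scalar kernel on inputs is combined with a matrix encoding output correlations. In the sketch below, the Gaussian input kernel and the fixed matrix B are assumptions for illustration; learning B from data is a separate topic.

```python
import numpy as np

def separable_multioutput_gram(X, Z, B, gamma=1.0):
    # Separable matrix-valued kernel K(x, x') = k(x, x') * B, where k is a
    # scalar Gaussian kernel and B (T x T) encodes correlations between the
    # T outputs.  The full Gram matrix is the Kronecker product k ⊗ B.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * d2)                        # n x m scalar Gram matrix
    return np.kron(k, B)                           # (n*T) x (m*T) block matrix

X = np.random.default_rng(0).normal(size=(5, 2))
B = np.array([[1.0, 0.5], [0.5, 1.0]])             # two correlated outputs
print(separable_multioutput_gram(X, X, B).shape)   # (10, 10)
```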
On regularization algorithms in learning theory
TLDR: We show that a notion of regularization defined according to what is usually done for ill-posed inverse problems allows us to derive learning algorithms that are consistent and achieve fast convergence rates.
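A minimal sketch of this inverse-problems view, assuming a precomputed kernel matrix: regularization amounts to applying a spectral filter to the eigenvalues of the normalized kernel matrix. The Tikhonov filter used as the default below recovers kernel ridge regression; other admissible filters (spectral cut-off, Landweber iteration, ...) plug into the same template.

```python
import numpy as np

def spectral_filter_estimator(K, y, lam=1e-3, filt=None):
    # Learning as a regularized inverse problem: diagonalize the normalized
    # kernel matrix and apply a filter function to its eigenvalues.  With the
    # Tikhonov filter g(s) = 1 / (s + lam) this reproduces kernel ridge
    # regression, alpha = (K + n * lam * I)^{-1} y.
    n = len(y)
    if filt is None:
        filt = lambda s: 1.0 / (s + lam)
    s, U = np.linalg.eigh(K / n)
    alpha = U @ (filt(s) * (U.T @ y)) / n
    return alpha                        # f(x) = sum_i alpha_i k(x_i, x)
```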
Less is More: Nyström Computational Regularization
TLDR: We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered.
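A minimal sketch of Nyström-type subsampling for kernel ridge regression, under the assumptions of a Gaussian kernel, uniformly sampled landmarks, and a toy 1-D regression problem; the point is that only an n x m slice of the kernel matrix is ever formed.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_krr(X, y, m=50, lam=1e-3, gamma=1.0, seed=0):
    # Restrict the kernel ridge regression solution to the span of m randomly
    # sampled landmarks: solve (K_nm^T K_nm + n*lam*K_mm) a = K_nm^T y.
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), size=m, replace=False)]
    K_nm = gaussian_kernel(X, Z, gamma)              # n x m
    K_mm = gaussian_kernel(Z, Z, gamma)              # m x m
    A = K_nm.T @ K_nm + len(X) * lam * K_mm
    a = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y)
    return lambda Xnew: gaussian_kernel(Xnew, Z, gamma) @ a

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
f = nystrom_krr(X, y)
print(np.mean((f(X) - y) ** 2))                      # training error on the toy data
```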
On Early Stopping in Gradient Descent Learning
In this paper we study a family of gradient descent algorithms to approximate the regression function from reproducing kernel Hilbert spaces (RKHSs), the family being characterized by a …
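A minimal sketch of the setting, assuming a precomputed kernel matrix and a constant step size: gradient descent on the empirical least-squares risk, where the number of iterations, rather than an explicit penalty, controls regularization. The specific step-size schedules analyzed in the paper are not reproduced here.

```python
import numpy as np

def early_stopping_gd(K, y, n_iter=50, step=None):
    # Gradient descent on the empirical least-squares risk over an RKHS, with
    # f = sum_j alpha_j k(., x_j); stopping after n_iter iterations plays the
    # role of the regularization parameter (no explicit penalty term).
    n = len(y)
    if step is None:
        step = n / np.linalg.eigvalsh(K).max()   # keeps the iteration stable
    alpha = np.zeros(n)
    for _ in range(n_iter):
        residual = K @ alpha - y                 # f_t(x_i) - y_i on the training set
        alpha -= (step / n) * residual
    return alpha                                 # f(x) = sum_i alpha_i k(x_i, x)
```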
Generalization Properties of Learning with Random Features
TLDR: We study the generalization properties of ridge regression with random features in the statistical learning framework.
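A minimal sketch of ridge regression with random Fourier features approximating a Gaussian kernel; the number of features D, the bandwidth gamma, and the closed-form ridge solve are illustrative choices, not the paper's prescriptions.

```python
import numpy as np

def random_fourier_features(X, D=300, gamma=1.0, seed=0):
    # Random Fourier features for the Gaussian kernel exp(-gamma * ||x - x'||^2):
    # z(x)^T z(x') approximates k(x, x') in expectation.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def ridge_fit(Z, y, lam=1e-3):
    # Plain ridge regression on the explicit feature map (a D x D linear system
    # instead of an n x n kernel system).
    D = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * len(y) * np.eye(D), Z.T @ y)
```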
On Learning with Integral Operators
TLDR: We use a technique based on concentration inequalities for Hilbert spaces to provide new, much simplified proofs for a number of results on spectral approximation.
Manifold Regularization
In this lecture we introduce a class of learning algorithms, collectively called manifold regularization algorithms, suited for predicting/classifying data embedded in high-dimensional spaces. …
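One concrete instance of this class is Laplacian-regularized least squares, sketched below under the assumptions that the kernel matrix K over labeled and unlabeled points and a graph Laplacian L are already built, with the labeled points listed first; graph construction and parameter choices are omitted.

```python
import numpy as np

def laplacian_rls(K, y_labeled, L, gamma_A=1e-2, gamma_I=1e-1):
    # Laplacian-regularized least squares: fit the labeled points while also
    # penalizing variation of f along a graph Laplacian L built on all
    # (labeled + unlabeled) points.  K is the (l+u) x (l+u) kernel matrix,
    # with the l labeled points first.
    n, l = K.shape[0], len(y_labeled)
    J = np.zeros((n, n)); J[:l, :l] = np.eye(l)      # selects the labeled points
    Y = np.zeros(n); Y[:l] = y_labeled
    A = J @ K + gamma_A * l * np.eye(n) + gamma_I * (l / n**2) * (L @ K)
    alpha = np.linalg.solve(A, Y)
    return alpha                                     # f(x) = sum_i alpha_i k(x_i, x)
```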
Nonparametric sparsity and regularization
TLDR: In this paper, we propose a new approach based on the idea that the importance of a variable, while learning a non-linear functional relation, can be captured by the corresponding partial derivative.
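To illustrate the underlying idea (not the regularizer derived in the paper), the sketch below scores the relevance of each input variable by the empirical norm of the corresponding partial derivative of an already-learned predictor, estimated here by central finite differences.

```python
import numpy as np

def variable_importance(f, X, eps=1e-4):
    # Relevance of variable j measured by the empirical L2 norm of the partial
    # derivative df/dx_j of a learned function f (vectorized over the rows of
    # X), estimated at the training points by central finite differences.
    n, d = X.shape
    scores = np.zeros(d)
    for j in range(d):
        E = np.zeros_like(X); E[:, j] = eps
        grad_j = (f(X + E) - f(X - E)) / (2.0 * eps)
        scores[j] = np.sqrt(np.mean(grad_j ** 2))
    return scores      # small scores suggest the variable is irrelevant
```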
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review
TLDR: The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning.