Publications
Learning with Local and Global Consistency
We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised …
  • Citations: 3,403
  • Highly influential citations: 531
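For concreteness, a minimal NumPy sketch of the consistency propagation this paper introduces, whose iteration F(t+1) = alpha S F(t) + (1 - alpha) Y has the closed form F* = (1 - alpha)(I - alpha S)^{-1} Y. The function name, the RBF affinity, and the dict-of-labels interface are illustrative choices, not the paper's:

    import numpy as np

    def local_global_consistency(X, labels, n_classes, alpha=0.99, sigma=1.0):
        """X: (n, d) points; labels: {index: class} for the labeled subset."""
        # Affinity matrix with an RBF kernel and zero diagonal.
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-sq / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
        d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
        S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        # Y holds the known labels one-hot; unlabeled rows stay zero.
        Y = np.zeros((X.shape[0], n_classes))
        for i, c in labels.items():
            Y[i, c] = 1.0
        # Closed form of the iteration F(t+1) = alpha*S*F(t) + (1-alpha)*Y.
        F = np.linalg.solve(np.eye(X.shape[0]) - alpha * S, (1 - alpha) * Y)
        return F.argmax(axis=1)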
Stability and Generalization
We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we …
  • Citations: 1,148
  • Highly influential citations: 189
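As a flavour of the results, a sketch of the bound that follows from uniform stability, in its standard statement: if an algorithm has uniform stability beta with respect to a loss bounded by M, then with probability at least 1 - delta over a sample of size m,

    R \le R_{\mathrm{emp}} + 2\beta + (4m\beta + M)\sqrt{\frac{\ln(1/\delta)}{2m}}.

The bound is non-trivial whenever beta decays faster than 1/sqrt(m), e.g. beta = O(1/m) for regularized kernel algorithms.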
Measuring Statistical Dependence with Hilbert-Schmidt Norms
We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm …
  • Citations: 859
  • Highly influential citations: 185
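The biased empirical estimate of this criterion (HSIC) is a short computation on centered Gram matrices, tr(K H L H) / (n - 1)^2. A minimal NumPy sketch, with the Gaussian kernel and function names being my choices:

    import numpy as np

    def rbf_gram(X, sigma=1.0):
        # Gram matrix of a Gaussian (RBF) kernel.
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))

    def hsic(X, Y, sigma=1.0):
        """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
        n = X.shape[0]
        K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
        H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

Values near zero indicate (approximate) independence of the X and Y samples; larger values indicate dependence.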
Choosing Multiple Parameters for Support Vector Machines
The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of …
  • Citations: 2,122
  • Highly influential citations: 131
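One such estimate is the radius-margin bound. Schematically, with theta the kernel parameters, R the radius of the smallest ball enclosing the data in feature space, w the SVM weight vector, and eta a step size (my notation), tuning proceeds by gradient descent on

    T(\theta) = R^2 \,\|w\|^2, \qquad \theta \leftarrow \theta - \eta \,\frac{\partial T}{\partial \theta},

which makes it practical to tune many kernel parameters at once rather than grid-searching over them.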
The Tradeoffs of Large Scale Learning
This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of …
  • Citations: 1,254
  • Highly influential citations: 96
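The framework rests on a three-way decomposition of the excess error:

    \mathcal{E} = \mathcal{E}_{\mathrm{app}} + \mathcal{E}_{\mathrm{est}} + \mathcal{E}_{\mathrm{opt}},

the approximation error (from restricting to a model family), the estimation error (from the finite sample), and the optimization error (from stopping the optimizer before convergence). Large-scale learning is the regime where, under a fixed time budget, it can pay to optimize less accurately on more data, which is why methods like stochastic gradient descent dominate there.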
Local Rademacher complexities
We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical …
  • Citations: 525
  • Highly influential citations: 86
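Schematically, and suppressing constants, the resulting bounds have the shape

    P f \;\lesssim\; P_n f + r^* + \frac{\log(1/\delta)}{n},

where r^* is the fixed point of a sub-root function upper-bounding the local Rademacher complexity \mathbb{E}\, R_n\{ f \in \mathcal{F} : P f^2 \le r \}. This is a schematic reading, not the paper's exact statement; the localization to small-variance functions is what yields fast rates.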
Ranking on Data Manifolds
The Google search engine has enjoyed huge success with its web page ranking algorithm, which exploits global, rather than local, hyperlink structure of the web using random walks. Here we propose a …
  • Citations: 687
  • Highly influential citations: 77
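The ranking rule is the same propagation as in "Learning with Local and Global Consistency" above, with the one-hot label matrix replaced by a query indicator vector. A sketch, assuming S is the symmetrically normalized affinity matrix built as in that earlier snippet:

    import numpy as np

    def manifold_rank(S, query_idx, alpha=0.99):
        """Rank all points by f* = (I - alpha S)^{-1} y, y = query indicator."""
        n = S.shape[0]
        y = np.zeros(n)
        y[query_idx] = 1.0
        f = np.linalg.solve(np.eye(n) - alpha * S, y)
        return np.argsort(-f)  # indices sorted from most to least relevant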
Wasserstein Auto-Encoders
We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model …
  • Citations: 322
  • Highly influential citations: 70
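The penalized objective has the form

    D_{\mathrm{WAE}}(P_X, P_G) \;=\; \inf_{Q(Z \mid X)} \; \mathbb{E}_{P_X}\,\mathbb{E}_{Q(Z \mid X)}\bigl[c\bigl(X, G(Z)\bigr)\bigr] \;+\; \lambda\, \mathcal{D}_Z\bigl(Q_Z, P_Z\bigr),

where c is a reconstruction cost, G the decoder, Q_Z the aggregated posterior, P_Z the prior, and \mathcal{D}_Z a divergence between the two (instantiated in the paper with a GAN-based and an MMD-based penalty). Matching Q_Z to P_Z in aggregate, rather than per-sample as in a VAE, is what lets WAE keep sharp reconstructions.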
Are GANs Created Equal? A Large-Scale Study
Generative adversarial networks (GANs) are a powerful subclass of generative models. Despite a very rich research activity leading to numerous interesting GAN algorithms, it is still very hard to …
  • Citations: 411
  • Highly influential citations: 58
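One of the metrics the study relies on for comparison is the Fréchet Inception Distance (FID), computed between Gaussian fits (mean mu, covariance Sigma) to Inception features of real (r) and generated (g) samples:

    \mathrm{FID} = \|\mu_r - \mu_g\|_2^2 + \operatorname{Tr}\!\bigl(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\bigr).

Lower is better; comparing models at equal computational budget under such a metric is central to the study's conclusion that no single GAN variant consistently dominates.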
A Bennett concentration inequality and its application to suprema of empirical processes
We introduce new concentration inequalities for functions on product spaces. They yield a Bennett-type deviation bound for suprema of empirical processes indexed by upper-bounded functions.
  • Citations: 248
  • Highly influential citations: 56
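For reference, the classical Bennett inequality that such bounds extend to suprema of empirical processes: for independent zero-mean random variables X_i \le b with v = \sum_i \mathbb{E}[X_i^2],

    \Pr\Bigl(\sum_{i=1}^n X_i \ge t\Bigr) \le \exp\Bigl(-\frac{v}{b^2}\, h\!\Bigl(\frac{bt}{v}\Bigr)\Bigr), \qquad h(u) = (1+u)\log(1+u) - u.

Since h(u) \approx u^2/2 for small u and h(u) \approx u \log u for large u, the bound interpolates between Gaussian and Poisson-type tails.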