Corpus ID: 235359268

Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning

Vivien A. Cabannes, Loucas Pillaud-Vivien, Francis R. Bach, Alessandro Rudi
As annotations can be scarce in large-scale practical problems, leveraging unlabelled examples is one of the most important aspects of machine learning, and the aim of semi-supervised learning. To benefit from access to unlabelled data, it is natural to diffuse knowledge of labelled data smoothly to unlabelled ones, which motivates the use of Laplacian regularization. Yet current implementations of Laplacian regularization suffer from several drawbacks, notably the well-known…
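As a concrete illustration of the approach the abstract describes (a generic graph-based sketch, not the paper's own estimator), Laplacian-regularized least squares diffuses the few labels over a similarity graph. The data, bandwidth, and regularization weight below are assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semi-supervised setup: two Gaussian blobs, one label per blob.
n = 200
X = np.vstack([rng.normal(-2, 0.5, (n // 2, 2)),
               rng.normal(+2, 0.5, (n // 2, 2))])
y = np.zeros(n)
labeled = np.array([0, n - 1])          # indices of the two labelled points
y[0], y[n - 1] = -1.0, +1.0

# Gaussian similarity graph and its unnormalized Laplacian L = D - W.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Laplacian-regularized least squares on the graph:
#   min_f  sum_{i labelled} (f_i - y_i)^2 + lam * f^T L f
# whose minimizer solves the linear system (J + lam * L) f = J y,
# with J the diagonal 0/1 mask of labelled points.
J = np.zeros((n, n))
J[labeled, labeled] = 1.0
lam = 1e-3
f = np.linalg.solve(J + lam * L, J @ y)

# The diffused labels recover the two blobs.
pred = np.sign(f)
acc = (pred == np.sign(np.r_[-np.ones(n // 2), np.ones(n // 2)])).mean()
print(acc)
```

The quadratic penalty f^T L f = (1/2) Σ_ij W_ij (f_i − f_j)² is what enforces smoothness along the graph; the drawbacks mentioned in the abstract concern how this penalty behaves in high dimension.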


Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent

This work explains the implicit acceleration of using a Sobolev norm as the training objective, showing that the optimal number of epochs for the Deep Ritz Method (DRM) grows larger than that of physics-informed neural networks (PINNs) as both the data size and the hardness of the task increase, although both DRM and PINNs can achieve statistical optimality.

A Fast Algorithm for Ranking Users by their Influence in Online Social Platforms

A novel scalable algorithm is introduced for fast approximation of the ψ-score, which summarizes both structural and behavioral information for the nodes; it runs as fast as PageRank and is validated on several real-world datasets.
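For reference, the PageRank baseline mentioned above can be sketched as a standard power iteration (this is generic PageRank, not the paper's ψ-score algorithm; the tiny graph and damping factor are illustrative assumptions):

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10, max_iter=200):
    """PageRank by power iteration; A[i, j] = 1 means an edge i -> j."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                  # avoid division by zero for dangling nodes
    P = (A / out).T                      # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)              # start from the uniform distribution
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * (P @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Tiny example: node 2 is pointed to by both other nodes, so it ranks highest.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
r = pagerank(A)
print(r, r.argmax())
```

Each iteration is one sparse matrix-vector product, which is what makes PageRank-style scores cheap on large social graphs.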

Active Labeling: Streaming Stochastic Gradients

After formalizing the “active labeling” problem, this paper provides a streaming technique that provably minimizes the ratio of generalization error to number of samples, illustrated in depth for robust regression.

On the Estimation of Derivatives Using Plug-in KRR Estimators

A simple plug-in kernel ridge regression (KRR) estimator is proposed for nonparametric regression with random design; it is broadly applicable for multi-dimensional support and arbitrary mixed partial derivatives, and achieves the optimal rate of convergence with the same choice of tuning parameter for any order of derivative.
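A minimal sketch of the plug-in idea, assuming a Gaussian kernel and a 1-D toy problem (the bandwidth, ridge parameter, and target function are assumptions of this example, not the paper's choices): fit KRR once, then differentiate the kernel rather than the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-design training data: noisy samples of sin on [0, 2*pi].
n, h, lam = 200, 0.4, 1e-6
X = rng.uniform(0, 2 * np.pi, n)
y = np.sin(X) + 0.01 * rng.normal(size=n)

def k(a, b):
    """Gaussian kernel matrix between point sets a and b, bandwidth h."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))

# Kernel ridge regression: alpha = (K + n*lam*I)^{-1} y.
K = k(X, X)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

# Plug-in derivative estimate: differentiate the kernel, reuse alpha.
#   f'(x) = sum_i alpha_i * d/dx k(x, X_i)
xs = np.linspace(1.0, 5.0, 50)           # interior points, away from the boundary
dK = -(xs[:, None] - X[None, :]) / h ** 2 * k(xs, X)
df = dK @ alpha

err = np.max(np.abs(df - np.cos(xs)))    # true derivative of sin is cos
print(err)
```

The same `alpha` serves every derivative order: only the kernel factor `dK` changes, which is why one tuning parameter can cover all orders.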

A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings

A large class of loss functions is identified and studied that implicitly defines a suitable geometry on the problem, which is the key to developing an algorithmic framework amenable to sharp statistical analysis and efficient computation.

Asymptotic behavior of ℓp-based Laplacian regularization in semi-supervised learning

A theoretical study of ℓp-based Laplacian regularization under a d-dimensional geometric random graph model shows that the effect of the underlying density vanishes monotonically with p, yielding a function estimate f̂ that is both smooth and non-degenerate while remaining maximally sensitive to P.

Graph Laplacians and their Convergence on Random Neighborhood Graphs

This paper determines the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero, and shows that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants.
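The three Laplacians in question are typically the unnormalized, random-walk, and symmetric normalized variants. A sketch on a regular 1-D grid (a simplification for clarity — the paper treats random samples on a submanifold) illustrating the pointwise behavior of the random-walk version:

```python
import numpy as np

# Regular grid on [0, 1] with Gaussian neighborhood weights, bandwidth h.
n, h = 1001, 0.05
x = np.linspace(0.0, 1.0, n)
W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * h ** 2))
np.fill_diagonal(W, 0.0)
d = W.sum(1)

# Three graph Laplacians commonly compared in this literature:
L_un = np.diag(d) - W                             # unnormalized
L_rw = np.eye(n) - W / d[:, None]                 # random walk
L_sym = np.eye(n) - W / np.sqrt(np.outer(d, d))   # symmetric normalized

# L_un and L_rw annihilate constants; L_sym annihilates sqrt(d) instead.
print(np.abs(L_un @ np.ones(n)).max(), np.abs(L_sym @ np.sqrt(d)).max())

# For uniform density, (L_rw f)_i / h^2 -> -f''(x_i) / 2 pointwise as h -> 0
# (up to kernel-dependent constants). Check on f(x) = x^2, where f'' = 2,
# at an interior point far from the boundary:
f = x ** 2
est = (L_rw @ f)[n // 2] / h ** 2                 # evaluated at x = 0.5
print(est)                                        # close to -1.0
```

The density-dependent drift terms that distinguish the three variants vanish here precisely because the grid mimics a uniform measure, matching the paper's statement for that case.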

Asymptotic behavior of ℓp-based Laplacian regularization in semi-supervised learning

  • Conference on Learning Theory, 2016

Statistical Estimation of the Poincaré constant and Application to Sampling Multimodal Distributions

This paper shows both theoretically and experimentally that, given sufficiently many samples of a measure, one can estimate its Poincaré constant, and derives an algorithm that captures a low-dimensional representation of the data by finding directions that are difficult to sample.

Disambiguation of weak supervision with exponential convergence rates

This paper focuses on partial labelling, an instance of weak supervision where, for a given input, one observes a set of potential targets, and proposes an empirical disambiguation algorithm to recover full supervision from weak supervision.

Fast rates in structured prediction

This paper illustrates fast rates for predictors based on nearest neighbors, generalizing rates known for binary classification to any discrete problem within the framework of structured prediction, and considers kernel ridge regression, where known rates in n^(−1/4) are improved to arbitrarily fast rates.

Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class

It is found that while some calibrated surrogates can indeed fail to provide H-consistency when minimized over a natural-looking but naïvely chosen scoring function class F, the situation can potentially be remedied by minimizing them over a more carefully chosen class of scoring functions F.

Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS

We prove that the reproducing kernel Hilbert spaces (RKHS) of a deep neural tangent kernel and the Laplace kernel include the same set of functions when both kernels are restricted to the sphere.