Corpus ID: 244527493

Uniform Convergence Rates for Lipschitz Learning on Graphs

@article{Bungert2021UniformCR,
  title={Uniform Convergence Rates for Lipschitz Learning on Graphs},
  author={Leon Bungert and Jeff Calder and Tim Roith},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.12370}
}
Lipschitz learning is a graph-based semi-supervised learning method where one extends labels from a labeled to an unlabeled data set by solving the infinity Laplace equation on a weighted graph. In this work we prove uniform convergence rates for solutions of the graph infinity Laplace equation as the number of vertices grows to infinity. Their continuum limits are absolutely minimizing Lipschitz extensions with respect to the geodesic metric of the domain where the graph vertices are sampled from… 
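
To fix ideas, here is a minimal sketch (not the authors' algorithm, and restricted to an unweighted graph) of Lipschitz learning: at every unlabeled vertex one enforces the graph infinity Laplace equation $\max_{j \sim i}(u_j - u_i) + \min_{j \sim i}(u_j - u_i) = 0$, here via a naive midrange fixed-point iteration; all names are illustrative.

    import numpy as np

    def lipschitz_learning(adj, labels, n_iter=10000, tol=1e-8):
        """Extend labels to all vertices by solving the unweighted graph
        infinity Laplace equation with a midrange fixed-point iteration."""
        n = len(adj)                  # adj[i]: list of neighbours of vertex i
        u = np.zeros(n)
        for i, y in labels.items():   # labels: {vertex index: label value}
            u[i] = y
        unlabeled = [i for i in range(n) if i not in labels]
        for _ in range(n_iter):
            change = 0.0
            for i in unlabeled:
                nb = u[adj[i]]
                # max_j (u_j - u_i) + min_j (u_j - u_i) = 0  <=>  u_i = (max + min) / 2
                new = 0.5 * (nb.max() + nb.min())
                change = max(change, abs(new - u[i]))
                u[i] = new
            if change < tol:
                break
        return u

    # Toy example: path graph 0-1-2-3 with its endpoints labeled.
    adj = [[1], [0, 2], [1, 3], [2]]
    print(lipschitz_learning(adj, {0: 0.0, 3: 1.0}))  # ~ [0, 1/3, 2/3, 1]

On a path the infinity-harmonic extension is linear interpolation, which the toy example recovers; weighted graphs replace the midrange update with a weighted analogue, and the "Algorithms for Lipschitz Learning on Graphs" reference below gives much faster exact methods.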

Citations

Continuum Limit of Lipschitz Learning on Graphs

A sequence of functionals is defined which approximates the largest local Lipschitz constant of a graph function; $\Gamma$-convergence in the $L^\infty$-topology to the supremum norm of the gradient is proved as the graph becomes denser, and compactness of the functionals, which implies convergence of minimizers, is shown.
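
As an illustration of the objects involved (the paper's precise normalization may differ), such a functional and its $\Gamma$-limit can be written as

\[
E_{n,\varepsilon}(u) \;=\; \max_{i \neq j,\ |x_i - x_j| \le \varepsilon} \frac{|u(x_i) - u(x_j)|}{|x_i - x_j|}
\;\xrightarrow{\ \Gamma,\ L^\infty\ }\;
\|\nabla u\|_{L^\infty}
\qquad \text{as } n \to \infty,\ \varepsilon \to 0,
\]

i.e., the largest local difference quotient over edges of the $\varepsilon$-graph converges to the supremum norm of the gradient.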

Hamilton-Jacobi equations on graphs with applications to semi-supervised learning and data depth

It is shown that the continuum limit of the $p$-eikonal equation on a random geometric graph recovers a geodesic, density-weighted distance in the continuum, and that the $p \to \infty$ limit recovers shortest-path distances.

Ratio convergence rates for Euclidean first-passage percolation: Applications to the graph infinity Laplacian

In this paper we prove the first quantitative convergence rates for the graph infinity Laplace equation for length scales at the connectivity threshold. In the graph-based semi-supervised learning…

Large data limit of the MBO scheme for data clustering: convergence of the dynamics

We prove that the dynamics of the MBO scheme for data clustering converge to a viscosity solution to mean curvature flow. The main ingredients are (i) a new abstract convergence result based on…

References

Rates of Convergence for Laplacian Semi-Supervised Learning with Low Labeling Rates

In the well-posed setting, quantitative error estimates of $O(\varepsilon\beta^{-1/2})$, up to logarithmic factors, are proved for the difference between the solutions of the discrete problem and the continuum PDE.
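
Taking the bound at face value, with $\varepsilon$ the graph length scale and $\beta$ the labeling rate (as the title suggests), the error vanishes exactly when the labeling rate dominates the square of the length scale:

\[
\varepsilon \, \beta^{-1/2} \to 0
\quad\Longleftrightarrow\quad
\beta \gg \varepsilon^2 .
\]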

Continuum Limit of Lipschitz Learning on Graphs

A sequence of functionals is defined which approximates the largest local Lipschitz constant of a graph function; $\Gamma$-convergence in the $L^\infty$-topology to the supremum norm of the gradient is proved as the graph becomes denser, and compactness of the functionals, which implies convergence of minimizers, is shown.

Lipschitz regularity of graph Laplacians on random data clouds

This paper proves high-probability interior and global Lipschitz estimates for solutions of graph Poisson equations and, as an application, obtains high-probability convergence rates for graph Laplacian eigenvectors towards eigenfunctions of the corresponding weighted Laplace-Beltrami operators.

A continuum limit for the PageRank algorithm

A new framework for rigorously studying continuum limits of learning algorithms on directed graphs is proposed; the numerical scheme is proved to be consistent and stable, and explicit rates of convergence of the discrete solution to the solution of the continuum-limit PDE are computed.

Asymptotic behavior of ℓp-based Laplacian regularization in semi-supervised learning

A theoretical study of $\ell^p$-based Laplacian regularization under a $d$-dimensional geometric random graph model, which shows that the effect of the underlying density vanishes monotonically with $p$, yielding a function estimate $\hat{f}$ that is both smooth and non-degenerate while remaining maximally sensitive to $P$.
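
For orientation, $\ell^p$-based Laplacian regularization minimizes, in its most common form (normalizations vary between papers), the graph $p$-Dirichlet energy subject to the label constraints:

\[
\min_{u} \ \sum_{i,j} w_{ij} \, |u(x_i) - u(x_j)|^{p}
\qquad \text{subject to } u(x_i) = y_i \text{ on the labeled set,}
\]

which interpolates between standard Laplacian regularization ($p = 2$) and Lipschitz learning ($p \to \infty$).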

Algorithms for Lipschitz Learning on Graphs

This work develops fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices using the absolutely minimal Lipschitz extension.
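
A sketch of the defining property, in illustrative notation (exact graph-theoretic formulations vary): a function $u$ agreeing with the labels is an absolutely minimal Lipschitz extension if its Lipschitz constant cannot be lowered on any subgraph avoiding the labeled set $L$,

\[
\operatorname{Lip}_{W}(u) \;=\; \min \big\{ \operatorname{Lip}_{W}(v) \;:\; v = u \text{ on } \partial W \big\}
\qquad \text{for all } W \subseteq V \setminus L,
\]

where $\operatorname{Lip}_{W}(u) = \max_{x \in W,\, y \sim x} w_{xy} \, |u(x) - u(y)|$ and $\partial W$ is the vertex boundary of $W$.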

Properly-Weighted Graph Laplacian for Semi-supervised Learning

A way to correctly set the weights in Laplacian regularization so that the estimator remains well posed and stable in the large-sample limit and it is proved that the semi-supervised learning algorithm converges to the smooth solution of a continuum variational problem that attains the labeled values continuously.

Improved spectral convergence rates for graph Laplacians on ε-graphs and k-NN graphs

The results show that the eigenvalues and eigenvectors of the graph Laplacian converge to those of the Laplace-Beltrami operator at a rate of $O(n^{-1/(m+4)})$, up to log factors, where $m$ is the manifold dimension and $n$ is the number of vertices in the graph.
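
As a quick sanity check on the scaling, instantiating the rate on a two-dimensional manifold gives

\[
m = 2: \quad O\big(n^{-1/(m+4)}\big) = O\big(n^{-1/6}\big),
\]

so halving the spectral error requires roughly $2^{6} = 64$ times as many sample points.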

Analysis of $p$-Laplacian Regularization in Semi-Supervised Learning

A new model is introduced which is as simple as the original model but overcomes this restriction, and it is proved that the minimizers of the discrete functionals in the random setting converge uniformly to the desired continuum limit.