Corpus ID: 182952528

Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

@inproceedings{Vempala2019RapidCO,
  title={Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices},
  author={Santosh S. Vempala and Andre Wibisono},
  booktitle={Neural Information Processing Systems},
  year={2019}
}
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming $\nu$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order $q > 1$ assuming the limit of ULA satisfies either the log-Sobolev or Poincaré…
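For concreteness, the ULA iterate referred to above is the Euler-Maruyama discretization of the Langevin diffusion $dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t$. Below is a minimal NumPy sketch of that iteration; the Gaussian target, step size, and iteration count are illustrative placeholders, not values from the paper.

```python
import numpy as np

def ula(grad_f, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - h * grad_f(x_k) + sqrt(2h) * xi_k,  xi_k ~ N(0, I_n),
    targeting nu = exp(-f) up to discretization bias."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
        iterates.append(x.copy())
    return np.array(iterates)

# Illustrative target: standard Gaussian on R^2, i.e. f(x) = ||x||^2 / 2, grad_f(x) = x.
chain = ula(grad_f=lambda x: x, x0=np.zeros(2), step_size=0.05, n_steps=1000)
```

The step size governs the trade-off the paper's bounds quantify: smaller steps reduce the bias of ULA's stationary distribution relative to $\nu$, while the log-Sobolev inequality drives the exponential contraction toward that biased limit.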


Proximal Langevin Algorithm: Rapid Convergence Under Isoperimetry

A convergence guarantee for PLA in Kullback-Leibler (KL) divergence when $\nu$ satisfies a log-Sobolev inequality (LSI) and $f$ has bounded second and third derivatives is proved.

A Non-Asymptotic Analysis for Stein Variational Gradient Descent

A descent lemma is obtained establishing that the SVGD algorithm decreases the objective at each iteration, and provably converges, with less restrictive assumptions on the step size than required in earlier analyses.
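For context, the SVGD update analyzed in that work is the standard Stein variational gradient descent step of Liu and Wang; a minimal NumPy sketch with an RBF kernel follows, where the bandwidth and step size are illustrative choices rather than the ones assumed in the cited analysis.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step_size, bandwidth):
    """One SVGD update with RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)):
    x_i <- x_i + eps * (1/n) * sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]      # diffs[i, j] = x_i - x_j
    kernel = np.exp(-np.sum(diffs**2, axis=-1) / (2.0 * bandwidth**2))
    scores = np.array([grad_log_p(x) for x in particles])      # grad log p(x_j)
    attract = kernel @ scores / n                               # kernel-weighted score term
    repulse = (kernel[..., None] * diffs).sum(axis=1) / (n * bandwidth**2)  # kernel-gradient term
    return particles + step_size * (attract + repulse)
```

The first term pulls particles toward high-density regions via the kernel-weighted score, while the second, repulsive term spreads particles apart and prevents collapse onto a single mode.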

Improved bounds for discretization of Langevin diffusions: Near-optimal rates without convexity

An improved analysis of the Euler-Maruyama discretization of the Langevin diffusion is given that does not require global contractivity, yields polynomial dependence on the time horizon, and simultaneously improves all methods based on Dalalyan's approach.

A Brief Note on the Convergence of Langevin Monte Carlo in Chi-Square Divergence

Under an opaque uniform warmness condition on the LMC iterates, it is established that $\widetilde{\mathcal{O}}(\epsilon^{-1})$ steps are sufficient for LMC to reach the neighborhood of the target in Chi-square divergence.

Fast Convergence of Langevin Dynamics on Manifold: Geodesics meet Log-Sobolev

From a technical point of view, it is shown that the KL divergence decreases at a geometric rate whenever the distribution $e^{-f}$ satisfies a log-Sobolev inequality on $M$.

Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials

D. Nguyen · Brazilian Journal of Probability and Statistics · 2022
The problem of sampling via Euler discretization is studied, where the potential function is assumed to be a mixture of weakly smooth potentials satisfying a weak dissipativity condition, and convergence guarantees are proved under a Poincaré inequality or non-strong convexity outside a ball.

Unadjusted Langevin algorithm for non-convex weakly smooth potentials

A new mixture weakly smooth condition is introduced, under which it is proved that ULA for the smoothed potential converges given an additional log-Sobolev inequality, and convergence guarantees under isoperimetry and non-strong convexity at infinity are established.

Convergence of the Riemannian Langevin Algorithm

We study the Riemannian Langevin Algorithm for the problem of sampling from a distribution with density $\nu$ with respect to the natural measure on a manifold with metric $g$. We assume that the target…

Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev

This work provides the first Rényi divergence convergence guarantees for LMC which allow for weak smoothness and do not require convexity or dissipativity conditions, and introduces techniques for bounding error terms under a certain change of measure, which is a new feature in Rényi analysis.

Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition

This paper provides a descent lemma establishing that the SVGD algorithm decreases the KL divergence at each iteration and proves a complexity bound for SVGD in the population limit in terms of the Stein Fisher information.
...

References

SHOWING 1-10 OF 72 REFERENCES

Proximal Langevin Algorithm: Rapid Convergence Under Isoperimetry

A convergence guarantee for PLA in Kullback-Leibler (KL) divergence when $\nu$ satisfies a log-Sobolev inequality (LSI) and $f$ has bounded second and third derivatives is proved.

Fast Convergence of Langevin Dynamics on Manifold: Geodesics meet Log-Sobolev

From a technical point of view, it is shown that the KL divergence decreases at a geometric rate whenever the distribution $e^{-f}$ satisfies a log-Sobolev inequality on $M$.

Sharp Convergence Rates for Langevin Dynamics in the Nonconvex Setting

Both overdamped and underdamped Langevin MCMC are studied and upper bounds on the number of steps required to obtain a sample from a distribution that is within $\epsilon$ of $p^*$ in $1$-Wasserstein distance are established.
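For reference, the two dynamics compared in that entry are usually written as follows (one standard unit-mass parametrization; the friction coefficient $\gamma$ and scaling constants vary across papers):

```latex
\begin{align*}
  &\text{Overdamped:}  && dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t, \\
  &\text{Underdamped:} && dX_t = V_t\,dt, \qquad
     dV_t = -\gamma V_t\,dt - \nabla f(X_t)\,dt + \sqrt{2\gamma}\,dB_t.
\end{align*}
```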

Analysis of Langevin Monte Carlo via Convex Optimization

It is shown that the Unadjusted Langevin Algorithm can be formulated as a first-order optimization algorithm of an objective functional defined on the Wasserstein space of order $2$, and a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution is given.
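The optimization viewpoint referenced here rests on the classical Jordan-Kinderlehrer-Otto observation that the Fokker-Planck flow of the Langevin diffusion is the gradient flow of the KL objective with respect to the Wasserstein-2 metric; schematically:

```latex
\begin{align*}
  \partial_t \rho_t &= \nabla \cdot (\rho_t \nabla f) + \Delta \rho_t
    && \text{(Fokker-Planck equation of the Langevin diffusion)} \\
  H_\nu(\rho) &= \int_{\mathbb{R}^n} \rho \log \frac{\rho}{\nu} \, dx
    && \text{(KL objective whose $W_2$ gradient flow this is)}
\end{align*}
```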

Convergence of Langevin MCMC in KL-divergence

By considering the Langevin diffusion as a gradient flow in the space of probability distributions, an elegant analysis is obtained that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.

Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis

The present work provides a nonasymptotic analysis in the context of non-convex learning problems, giving finite-time guarantees for SGLD to find approximate minimizers of both empirical and population risks.
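For context, SGLD is the ULA update with the full gradient replaced by an unbiased mini-batch estimate. A minimal sketch is below; it assumes, purely for illustration, that $f$ decomposes as a sum of per-example terms, which is not a detail taken from this reference.

```python
import numpy as np

def sgld_step(x, data, grad_f_i, step_size, batch_size, rng):
    """One SGLD update: x <- x - h * g + sqrt(2h) * xi, where g is an unbiased
    mini-batch estimate of grad f(x), assuming f(x) = sum_i f_i(x; data_i)."""
    idx = rng.choice(len(data), size=batch_size, replace=False)
    # Rescale the mini-batch sum so the estimator is unbiased for the full gradient.
    g = (len(data) / batch_size) * sum(grad_f_i(x, data[i]) for i in idx)
    noise = rng.standard_normal(x.shape)
    return x - step_size * g + np.sqrt(2.0 * step_size) * noise
```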

Mirror Langevin Monte Carlo: the Case Under Isoperimetry

The result reveals an intricate relationship between the underlying geometry and the target distribution, and suggests that care might need to be taken for the discretized algorithm to achieve vanishing bias with a diminishing stepsize when sampling from potentials under weaker smoothness/convexity regularity conditions.

Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation

We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo, a general (and practical) method for sampling Gibbs distributions. Our analysis shows that the rate of…

Exponential ergodicity of mirror-Langevin diffusions

A class of diffusions called Newton-Langevin diffusions are proposed and it is proved that they converge to stationarity exponentially fast with a rate which not only is dimension-free, but also has no dependence on the target distribution.

Rapid Mixing of Hamiltonian Monte Carlo on Strongly Log-Concave Distributions

We obtain several quantitative bounds on the mixing properties of the Hamiltonian Monte Carlo (HMC) algorithm for a strongly log-concave target distribution $\pi$ on $\mathbb{R}^{d}$, showing that…
...