Corpus ID: 218718580

Exponential ergodicity of mirror-Langevin diffusions

@article{Chewi2020ExponentialEO,
  title={Exponential ergodicity of mirror-Langevin diffusions},
  author={Sinho Chewi and Thibaut Le Gouic and Chen Lu and Tyler Maunu and Philippe Rigollet and Austin Stromme},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.09669}
}
Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020). As a special case of this framework, we propose a class of diffusions called Newton-Langevin diffusions and prove that they converge to stationarity exponentially fast with a rate which not only is dimension-free, but also has no dependence on the target distribution. We give an application of… 
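
For orientation, a sketch of the dynamics in question (notation assumed here: mirror map $\phi$, potential $V$ of the target $\pi \propto e^{-V}$, following the setup of Zhang et al. (2020)):
$$
Y_t = \nabla\phi(X_t), \qquad dY_t = -\nabla V(X_t)\,dt + \sqrt{2}\,\big[\nabla^2\phi(X_t)\big]^{1/2}\,dB_t,
$$
and, on this reading of the abstract, the Newton-Langevin diffusions correspond to the special choice $\phi = V$, so that the local geometry $\nabla^2 V$ preconditions the dynamics in the spirit of Newton's method.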

Efficient constrained sampling via the mirror-Langevin algorithm

TLDR
A new discretization of the mirror-Langevin diffusion is proposed and a crisp proof of its convergence is given, which requires much weaker assumptions on the mirror map and the target distribution, and has vanishing bias as the step size tends to zero.
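
As an illustration only, here is a minimal Euler-type sketch of a single mirror-Langevin update; it is not necessarily the precise discretization proposed in that paper, and grad_V, grad_phi, hess_phi, and grad_phi_star are hypothetical user-supplied callables.

```python
import numpy as np
from numpy.linalg import cholesky

def mirror_langevin_step(x, grad_V, grad_phi, hess_phi, grad_phi_star, h, rng=None):
    """One Euler-type mirror-Langevin update (illustrative sketch only).

    The iterate is mapped to the dual space y = grad_phi(x), updated there with a
    Langevin-type step whose noise covariance is the Hessian of the mirror map,
    and mapped back through the conjugate map grad_phi_star.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.standard_normal(x.shape)
    noise = cholesky(hess_phi(x)) @ xi          # sample ~ N(0, hess_phi(x))
    y_new = grad_phi(x) - h * grad_V(x) + np.sqrt(2.0 * h) * noise
    return grad_phi_star(y_new)                 # map back to the primal space
```

In the Newton-Langevin special case from the headline paper, the mirror map phi would be taken equal to the potential V itself.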

Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets

TLDR
An upper bound on the Wasserstein-2 distance between the distribution of the penalized Langevin dynamics (PLD) at time $t$ and the target is established, and a new nonasymptotic guarantee of convergence of the penalized gradient flow for the corresponding optimization problem is inferred.

Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev

TLDR
This work provides the first Rényi divergence convergence guarantees for LMC which allow for weak smoothness and do not require convexity or dissipativity conditions, and introduces techniques for bounding error terms under a certain change of measure, which is a new feature in Rényi analysis.

Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence

TLDR
This framework covers a range of non-convex potentials that are first-order smooth and exhibit strong convexity outside of a compact region, and recovers the state-of-the-art rates in KL divergence, total variation, and 2-Wasserstein distance in the same setup.

Mirror Langevin Monte Carlo: the Case Under Isoperimetry

TLDR
The result reveals an intricate relationship between the underlying geometry and the target distribution, and suggests that care might need to be taken for the discretized algorithm to achieve vanishing bias with diminishing step size when sampling from potentials under weaker smoothness/convexity regularity conditions.

Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition

TLDR
This paper provides a descent lemma establishing that the SVGD algorithm decreases the KL divergence at each iteration and proves a complexity bound for SVGD in the population limit in terms of the Stein Fisher information.
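
For concreteness, a minimal sketch of the SVGD update that such a descent lemma concerns, using an RBF kernel; grad_log_p and the fixed bandwidth are illustrative assumptions, not details from the paper.

```python
import numpy as np

def svgd_step(X, grad_log_p, step, bandwidth=1.0):
    """One SVGD update over a particle set X of shape (n, d) (illustrative sketch).

    Uses an RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)).
    grad_log_p maps an (n, d) array of particles to the (n, d) array of scores.
    """
    n, d = X.shape
    diffs = X[:, None, :] - X[None, :, :]               # (n, n, d) pairwise differences
    sq_dists = np.sum(diffs ** 2, axis=-1)               # (n, n) squared distances
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))       # kernel matrix
    grad_K = -diffs / bandwidth ** 2 * K[:, :, None]     # grad of k(x_j, x_i) w.r.t. x_j
    phi = (K @ grad_log_p(X) + grad_K.sum(axis=0)) / n   # kernelized Stein direction
    return X + step * phi
```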

SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence

TLDR
A new perspective is introduced that views SVGD as the (kernelized) gradient flow of the chi-squared divergence, which exhibits a strong form of uniform exponential ergodicity under conditions as weak as a Poincaré inequality, and an alternative to SVGD is proposed, called Laplacian Adjusted Wasserstein Gradient Descent (LAWGD), that can be implemented from the spectral decomposition of the Laplacian operator associated with the target density.

Sampling with Mirrored Stein Operators

TLDR
A new family of particle evolution samplers suitable for constrained domains and non-Euclidean geometries is introduced and it is demonstrated that they yield accurate approximations to distributions on the simplex, deliver valid confidence intervals in post-selection inference, and converge more rapidly than prior methods in large-scale unconstrained posterior inference.

Accelerated Diffusion-Based Sampling by the Non-Reversible Dynamics with Skew-Symmetric Matrices

TLDR
This study theoretically and numerically clarifies issues that are important to practitioners, including selection criteria for skew-symmetric matrices, quantitative evaluations of the acceleration, and the large memory cost of storing skew-symmetric matrices, by analyzing how the skew-symmetric matrix perturbs the Hessian matrix of the potential function.
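
For context, the standard way a skew-symmetric matrix enters such non-reversible Langevin dynamics (a generic form, stated here for orientation rather than taken verbatim from the paper) is as a perturbation of the drift that leaves the target $\pi \propto e^{-f}$ invariant:
$$
dX_t = -(I + J)\,\nabla f(X_t)\,dt + \sqrt{2}\,dB_t, \qquad J^\top = -J,
$$
where the skew-symmetric part adds a rotation-like component to the drift without changing the stationary distribution, and suitable choices of $J$ can accelerate convergence.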

Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

TLDR
A convergence guarantee in Kullback-Leibler (KL) divergence is proved assuming $\nu$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded.
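
For reference, the unadjusted Langevin algorithm (ULA) studied there is the iteration $x_{k+1} = x_k - h\nabla f(x_k) + \sqrt{2h}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$; a minimal sketch, in which grad_f, h, and n_steps are user-supplied:

```python
import numpy as np

def ula(x0, grad_f, h, n_steps, rng=None):
    """Unadjusted Langevin algorithm: x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * xi_k.

    Targets (approximately) the density proportional to exp(-f); the discretization
    bias is controlled by the step size h.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    return x
```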

References

SHOWING 1-10 OF 112 REFERENCES

Wasserstein Control of Mirror Langevin Monte Carlo

TLDR
A non-asymptotic upper bound is established on the sampling error of the resulting Hessian Riemannian Langevin Monte Carlo algorithm, which can cope with a wide variety of Hessian metrics related to highly non-flat geometries.

On sampling from a log-concave density using kinetic Langevin diffusions

TLDR
It is proved that the kinetic Langevin diffusion enjoys a geometric mixing property, with a mixing rate that is, in the overdamped regime, optimal in terms of its dependence on the condition number.
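
For context, the kinetic (underdamped) Langevin diffusion augments the position with a velocity variable; a generic form with friction parameter $\gamma > 0$ (notation assumed here) is
$$
dX_t = V_t\,dt, \qquad dV_t = -\gamma V_t\,dt - \nabla f(X_t)\,dt + \sqrt{2\gamma}\,dB_t,
$$
whose stationary distribution has $X$-marginal proportional to $e^{-f}$.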

Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis

TLDR
The present work provides a nonasymptotic analysis in the context of non-convex learning problems, giving finite-time guarantees for SGLD to find approximate minimizers of both empirical and population risks.
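
As a reminder of the algorithm being analyzed, SGLD is ULA with the exact gradient replaced by a stochastic (e.g. minibatch) estimate; a minimal sketch, in which stoch_grad and the inverse temperature beta are illustrative placeholders:

```python
import numpy as np

def sgld_step(x, stoch_grad, h, beta=1.0, rng=None):
    """One SGLD update: ULA with the exact gradient replaced by a stochastic estimate.

    stoch_grad(x) should return an unbiased estimate of grad f(x), e.g. from a minibatch.
    beta is the inverse temperature; beta = 1 targets exp(-f).
    """
    rng = np.random.default_rng() if rng is None else rng
    return x - h * stoch_grad(x) + np.sqrt(2.0 * h / beta) * rng.standard_normal(x.shape)
```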

Analysis of Langevin Monte Carlo via Convex Optimization

TLDR
It is shown that the Unadjusted Langevin Algorithm can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order $2$, and a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution is given.
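
The objective functional in question is the KL divergence to the target $\pi \propto e^{-V}$, decomposed into a potential-energy term and a (negative) entropy term (a standard formulation, recalled here for orientation):
$$
\mathcal{F}(\mu) = \mathrm{KL}(\mu \,\|\, \pi) = \int V\,d\mu + \int \log\frac{d\mu}{dx}\,d\mu + \mathrm{const},
$$
so one unadjusted Langevin step can be read as a gradient step on the potential term followed by a heat-flow (Gaussian smoothing) step on the entropy term.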

Mirrored Langevin Dynamics

TLDR
It is proved that discretized A-MLD implies the existence of a first-order sampling algorithm that sharpens the state-of-the-art $\tilde{O}(\epsilon^{-4}d^6)$ rate for first-order sampling of Dirichlet posteriors.

Improved bounds for discretization of Langevin diffusions: Near-optimal rates without convexity

TLDR
An improved analysis of the Euler-Maruyama discretization of the Langevin diffusion is presented that does not require global contractivity, yields polynomial dependence on the time horizon, and simultaneously improves upon methods based on Dalalyan's approach.

Non-asymptotic convergence analysis for the Unadjusted Langevin Algorithm

TLDR
For both constant and decreasing step sizes in the Euler discretization, non-asymptotic bounds for the convergence to the target distribution $\pi$ in total variation distance are obtained.

Quantitative bounds of convergence for geometrically ergodic Markov chain in the Wasserstein distance with application to the Metropolis Adjusted Langevin Algorithm

TLDR
The proposed rate of convergence leads to useful insights for the analysis of MCMC algorithms, and suggests ways to construct samplers with good mixing rates even when the dimension of the underlying sampling space is large.
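
For reference, a minimal sketch of one Metropolis-Adjusted Langevin (MALA) step: a Langevin proposal corrected by a Metropolis-Hastings accept/reject; log_pi and grad_log_pi are assumed user-supplied callables.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng=None):
    """One MALA step: Langevin proposal plus Metropolis-Hastings correction (sketch).

    log_pi      -- callable, log target density (up to an additive constant)
    grad_log_pi -- callable, gradient of log_pi
    h           -- step size of the Langevin proposal
    """
    rng = np.random.default_rng() if rng is None else rng

    def log_q(y, x_from):
        # Log density of the proposal y ~ N(x_from + h * grad_log_pi(x_from), 2h * I),
        # up to additive constants that cancel in the acceptance ratio.
        mean = x_from + h * grad_log_pi(x_from)
        return -np.sum((y - mean) ** 2) / (4.0 * h)

    prop = x + h * grad_log_pi(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    log_alpha = log_pi(prop) - log_pi(x) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x
```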

Proximal Langevin Algorithm: Rapid Convergence Under Isoperimetry

TLDR
A convergence guarantee for the proximal Langevin algorithm (PLA) in Kullback-Leibler (KL) divergence is proved when $\nu$ satisfies a log-Sobolev inequality (LSI) and $f$ has bounded second and third derivatives.
...