Corpus ID: 235390877

Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions

Yin Tat Lee, Ruoqi Shen, Kevin Tian
We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of Ω̃(κd) on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results [DCWY18, CDWY19, LST20a] up to logarithmic factors and answering an open… 
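For reference, a minimal textbook MALA step is sketched below. This is the generic algorithm the lower bound concerns, not code from the paper; `log_p` and `grad_log_p` are assumed callables returning the target's log-density and its gradient.

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, step, rng):
    """One Metropolis-adjusted Langevin step (generic sketch, not the paper's construction)."""
    # Langevin proposal: drift along the gradient of log p, plus Gaussian noise.
    mean_fwd = x + step * grad_log_p(x)
    prop = mean_fwd + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    # Metropolis correction: compare forward and reverse proposal densities (log scale).
    mean_bwd = prop + step * grad_log_p(prop)
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (4.0 * step)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * step)
    log_alpha = log_p(prop) - log_p(x) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return prop, True   # accept the proposal
    return x, False         # reject and stay put
```

The lower bound in the paper concerns how small the step size must be taken, and hence how many such steps are needed to mix, as the dimension d and condition number κ of a well-conditioned target grow.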

Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients
This work provides a non-asymptotic upper bound on the mixing time of Metropolized HMC with explicit choices of step size and number of leapfrog steps, and provides a general framework for sharpening mixing time bounds for Markov chains initialized at a substantial distance from the target distribution over continuous spaces.
Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm
The upper bound proof introduces a new technique based on a projection characterization of the Metropolis adjustment which reduces the study of MALA to the well-studied discretization analysis of the Langevin SDE and bypasses direct computation of the acceptance probability.
Log-concave sampling: Metropolis-Hastings algorithms are fast!
A non-asymptotic upper bound on the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) is proved, and the gains of MALA over ULA for weakly log-concave densities are demonstrated.
Dimensionally Tight Bounds for Second-Order Hamiltonian Monte Carlo
This work resolves a conjecture that Hamiltonian Monte Carlo can be run in a dimensionally tight number of gradient evaluations when sampling from strongly log-concave target distributions that satisfy a weak third-order regularity property associated with the input data, and suggests that leapfrog HMC performs better than its competitors when this condition is satisfied.
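The leapfrog integrator analyzed in these HMC results can be sketched generically as follows (an illustrative sketch, not code from any of the cited papers; `grad_log_p` is an assumed callable for the gradient of the target's log-density, so the potential is U(q) = -log p(q)):

```python
import numpy as np

def leapfrog(q, p, grad_log_p, step, n_steps):
    """Leapfrog integration of Hamiltonian dynamics with potential U(q) = -log p(q)."""
    q, p = q.astype(float), p.astype(float)
    p = p + 0.5 * step * grad_log_p(q)       # initial momentum half-step
    for _ in range(n_steps - 1):
        q = q + step * p                     # full position step
        p = p + step * grad_log_p(q)         # full momentum step
    q = q + step * p                         # last position step
    p = p + 0.5 * step * grad_log_p(q)       # final momentum half-step
    return q, p
```

The integrator is time-reversible and volume-preserving, which is what makes an exact Metropolis accept/reject correction possible despite the discretization error.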
Theoretical guarantees for approximate sampling from smooth and log‐concave densities
Sampling from various kinds of distributions is an issue of paramount importance in statistics since it is often the key ingredient for constructing estimators, test procedures or confidence intervals.
The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
The No-U-Turn Sampler (NUTS), an extension to HMC that eliminates the need to set a number of steps L, and derives a method for adapting the step size parameter ε on the fly based on primal-dual averaging.
High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm
For a broad class of $d$-dimensional distributions arising from generalized linear models, it is proved that the resulting third-order algorithm produces samples from a distribution that is at most $\varepsilon > 0$ in Wasserstein distance from the target distribution in $O\left(\frac{d^{1/3}}{\varepsilon^{2/3}}\right)$ steps.
Mixing of Hamiltonian Monte Carlo on strongly log-concave distributions 2: Numerical integrators
We obtain quantitative bounds on the mixing properties of the Hamiltonian Monte Carlo (HMC) algorithm with target distribution in d-dimensional Euclidean space, showing that HMC mixes quickly.
The geometry of logconcave functions and sampling algorithms
These bounds match previous bounds for the special case when the distribution to sample from is the uniform distribution over a convex body, with no assumptions on the local smoothness of the density function.
On sampling from a log-concave density using kinetic Langevin diffusions
It is proved that the kinetic Langevin diffusion enjoys a geometric mixing property, with a mixing rate whose dependence on the condition number is, in the overdamped regime, optimal.
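The kinetic (underdamped) Langevin diffusion studied above couples a position with a momentum-like velocity. A minimal Euler-Maruyama discretization is sketched below for illustration; the cited analysis uses sharper discretizations, and `grad_log_p` is an assumed callable for the gradient of the target's log-density.

```python
import numpy as np

def kinetic_langevin_step(x, v, grad_log_p, step, gamma, rng):
    """One Euler-Maruyama step of the kinetic (underdamped) Langevin diffusion
        dX_t = V_t dt,
        dV_t = (grad log p(X_t) - gamma V_t) dt + sqrt(2 gamma) dW_t,
    where gamma > 0 is the friction parameter (illustrative sketch only)."""
    noise = rng.standard_normal(v.shape)
    x_new = x + step * v
    v_new = (v + step * (grad_log_p(x) - gamma * v)
             + np.sqrt(2.0 * gamma * step) * noise)
    return x_new, v_new
```

The friction gamma interpolates between Hamiltonian-like dynamics (small gamma) and the overdamped Langevin diffusion (large gamma), which is the regime where the optimality claim above applies.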