Quantitative bounds for Markov chain convergence: Wasserstein and total variation distances

Neal Madras and Deniz Sezer
We present a framework for obtaining explicit bounds on the rate of convergence to equilibrium of a Markov chain on a general state space, with respect to both total variation and Wasserstein distances. For Wasserstein bounds, our main tool is Steinsaltz's convergence theorem for locally contractive random dynamical systems. We describe practical methods for finding Steinsaltz's "drift functions" that prove local contractivity. We then use the idea of "one-shot coupling" to derive criteria that… 
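The contraction-then-coupling strategy sketched in the abstract can be illustrated with a minimal toy example. Everything below is an illustrative assumption, not the paper's construction: a linear AR(1) chain with contraction factor `a`, synchronous coupling (shared noise) for the contraction phase, and the elementary Gaussian bound $\mathrm{TV}(N(\mu_1,1), N(\mu_2,1)) \le |\mu_1-\mu_2|/\sqrt{2\pi}$ standing in for the final one-shot coupling step.

```python
import math
import random

def step(x, z, a=0.5):
    """One step of a toy AR(1) chain X' = a*X + Z with contraction factor a < 1."""
    return a * x + z

def contraction_phase(n_steps=20, a=0.5, seed=0):
    """Run two copies with shared noise (synchronous coupling).
    Their distance contracts deterministically: X' - Y' = a*(X - Y)."""
    rng = random.Random(seed)
    x, y = 10.0, -10.0  # two copies started far apart
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)  # shared innovation
        x, y = step(x, z, a), step(y, z, a)
    return abs(x - y)

d = contraction_phase()  # ~ 0.5**20 * 20, independent of the noise
# One-shot step (illustrative): for unit-Gaussian innovations,
# TV(N(a*x, 1), N(a*y, 1)) <= a*|x - y| / sqrt(2*pi), so a single maximal
# coupling of the final innovation turns the small Wasserstein-type distance
# into a total variation bound.
tv_bound = 0.5 * d / math.sqrt(2 * math.pi)
```

The point of the two phases is visible in the numbers: the contraction phase makes $|X_n - Y_n|$ geometrically small without ever making the chains equal, and the one final coupled step converts that small distance into a small total variation bound.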


Perturbation theory for Markov chains via Wasserstein distance
This work proves powerful and flexible bounds on the distance between the $n$th-step distributions of two Markov chains when one of them satisfies a Wasserstein ergodicity condition, and provides estimates for geometrically ergodic Markov chains under weak assumptions.
Subgeometric rates of convergence in Wasserstein distance for Markov chains
In this paper, we provide sufficient conditions for the existence of the invariant distribution and for subgeometric rates of convergence in Wasserstein distance for general state-space Markov chains.
Central limit theorems for Markov chains based on their convergence rates in Wasserstein distance
This work provides two CLTs that directly depend on (sub-geometric) convergence rates in Wasserstein distance that hold for Lipschitz functions under certain moment conditions and applies these CLTs to four sets of Markov chain examples.
Geometric convergence bounds for Markov chains in Wasserstein distance based on generalized drift and contraction conditions
  • Qian Qin and J. Hobert, Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 2022
Let $\{X_n\}_{n=0}^\infty$ denote an ergodic Markov chain on a general state space that has stationary distribution $\pi$. This article concerns upper bounds on the $L_1$-Wasserstein distance between…
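In one dimension the $L_1$-Wasserstein distance between two equal-size empirical samples reduces to the mean absolute difference of the sorted samples, which gives a quick way to watch a chain's $n$-step law approach stationarity. The AR(1) chain below is an illustrative stand-in of my own, not the article's setting:

```python
import random

def wasserstein_1d(xs, ys):
    """Empirical L1-Wasserstein distance between two equal-size 1-D samples:
    the mean absolute difference of the sorted samples."""
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def ar1_after(n_steps, x0, rng, a=0.5):
    """Draw one point from the n-step distribution of X' = a*X + N(0,1)."""
    x = x0
    for _ in range(n_steps):
        x = a * x + rng.gauss(0.0, 1.0)
    return x

rng = random.Random(1)
# The stationary law of this chain is N(0, 1/(1 - a**2)); sample it directly.
stationary = [rng.gauss(0.0, (1.0 - 0.25) ** -0.5) for _ in range(5000)]
after_2 = [ar1_after(2, 50.0, rng) for _ in range(5000)]
after_20 = [ar1_after(20, 50.0, rng) for _ in range(5000)]
# wasserstein_1d(after_2, stationary) is large (the mean is still near 12.5),
# while wasserstein_1d(after_20, stationary) is near zero: geometric decay.
```

Starting far from stationarity ($x_0 = 50$), two steps leave a large Wasserstein gap while twenty steps make it negligible, matching the geometric rates this line of work quantifies.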
Perturbation theory and uniform ergodicity for discrete-time Markov chains
We study perturbation theory and uniform ergodicity for discrete-time Markov chains on general state spaces in terms of the uniform moments of the first hitting times on some set. The methods we…
Optimal transportation and stationary measures for iterated function systems
In this paper we show how ideas, methods and results from optimal transportation can be used to study various aspects of the stationary measures of iterated function systems equipped with a…
Convergence rate bounds for iterative random functions using one-shot coupling
One-shot coupling is a method of bounding the convergence rate between two copies of a Markov chain in total variation distance. The method is divided into two parts: the contraction phase, when the…
Convergence rates for a hierarchical Gibbs sampler
We establish results for the rate of convergence in total variation of a particular Gibbs sampler to its equilibrium distribution. This sampler is for a Bayesian inference model for a gamma random…
Mixing Time Guarantees for Unadjusted Hamiltonian Monte Carlo
We provide quantitative upper bounds on the total variation mixing time of the Markov chain corresponding to the unadjusted Hamiltonian Monte Carlo (uHMC) algorithm. For two general classes…
Computable upper bounds on the distance to stationarity for Jovanovski and Madras's Gibbs sampler
An upper bound on the Wasserstein distance to stationarity is developed for a class of Markov chains on $\mathbb{R}^2$, using a Gibbs sampler Markov chain introduced and analyzed by Jovanovski and Madras (2014).
Convergence in the Wasserstein Metric for Markov Chain Monte Carlo Algorithms with Applications to Image Restoration
This paper shows how the time to stationarity of a Markov chain can be assessed using the Wasserstein metric rather than the usual choice of total variation distance, yielding a precise $O(N \log N)$ bound on the convergence time of the stochastic Ising model.
Minorization Conditions and Convergence Rates for Markov Chain Monte Carlo
General methods are provided for analyzing the convergence of discrete-time, general state-space Markov chains, such as those used in stochastic simulation algorithms including the Gibbs…
General state space Markov chains and MCMC algorithms
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the…
Geometric Ergodicity and Hybrid Markov Chains
Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a…
Geometric Ergodicity of Gibbs and Block Gibbs Samplers for a Hierarchical Random Effects Model
We consider fixed scan Gibbs and block Gibbs samplers for a Bayesian hierarchical random effects model with proper conjugate priors. A drift condition given in Meyn and Tweedie (1993, Chapter 15) is…
Perfect Simulation for Image Restoration
The coupling method has been an enormously useful tool for studying the mixing time of Markov chains and as the basis of perfect sampling algorithms such as Coupling From the Past. Several methods…
Mathematical Foundations of the Markov Chain Monte Carlo Method
The Markov chain Monte Carlo method, which exploits the idea that information about a set of combinatorial objects may be obtained by performing an appropriately defined random walk, is used for estimating various quantities of physical interest.
Explicit stationary distributions for compositions of random functions and products of random matrices
If $(Y_n)_{n=1}^\infty$ is a sequence of i.i.d. random variables on $E=(0,+\infty)$ and if $f$ is positive on $E$, this paper studies explicit examples of stationary distributions for the Markov chain $(W_n)_{n=0}^\infty$ defined…
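As a toy analogue of such explicit stationary distributions (my own illustrative example, not the paper's): the iterated function system $W_{n+1} = \sqrt{W_n}\,Y_n$ on $E=(0,+\infty)$, with $Y_n = e^{Z_n}$ and $Z_n \sim N(0,\sigma^2)$ i.i.d., becomes an AR(1) recursion in $\log W$, so the stationary law is explicit: $\log W \sim N(0,\ \sigma^2/(1 - 1/4))$.

```python
import math
import random
import statistics

def iterate_ifs(w, rng, sigma=0.3, n_steps=200):
    """Run the IFS W' = sqrt(W) * exp(Z), Z ~ N(0, sigma^2), on (0, +inf).
    In log-space: log W' = 0.5 * log W + Z, an AR(1) recursion, so the
    stationary law of log W is N(0, sigma**2 / (1 - 0.25))."""
    for _ in range(n_steps):
        w = math.sqrt(w) * math.exp(rng.gauss(0.0, sigma))
    return w

rng = random.Random(2)
logs = [math.log(iterate_ifs(1.0, rng)) for _ in range(2000)]
# Sample mean of log W should be near 0 and its sample variance near
# 0.3**2 / 0.75 = 0.12, matching the explicit stationary law.
```

Converting a multiplicative recursion to an additive one in log-space is the standard trick that makes the stationary measure computable here; the paper's examples concern more general compositions of random functions.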
From Markov Chains to Non-Equilibrium Particle Systems
This volume presents a representative work of Chinese probabilists on probability theory and its applications in physics. Interesting results of jump Markov processes are discussed, as well as Markov…
Markov Chains and Stochastic Stability
This second edition reflects the same discipline and style that marked out the original and helped it to become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.