Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms

@article{craiu2015stability,
  title={Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms},
  author={Radu V. Craiu and Lawrence F. Gray and Krzysztof G. {\L}atuszy{\'n}ski and Neal Madras and Gareth O. Roberts and Jeffrey S. Rosenthal},
  journal={Annals of Applied Probability},
  year={2015}
}
We consider whether ergodic Markov chains with bounded step size remain bounded in probability when their transitions are modified by an adversary on a bounded subset. We provide counterexamples to show that the answer is no in general, and prove theorems to show that the answer is yes under various additional assumptions. We then use our results to prove convergence of various adaptive Markov chain Monte Carlo algorithms.
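The setting of the abstract can be illustrated with a toy sketch (this is not the paper's construction; all names and numerical choices are illustrative): a random-walk Metropolis chain targeting a standard normal distribution, whose proposal scale is overridden by an "adversary" whenever the chain sits in the bounded set [-1, 1]. Because the sketch keeps the full Metropolis–Hastings correction for the state-dependent (hence asymmetric) proposal, the tampered chain still targets N(0, 1) and remains bounded in probability.

```python
import math
import random

def scale(x, adversary_scale):
    # The "adversary" tampers with the proposal scale only on the
    # bounded subset [-1, 1]; elsewhere the chain is left alone.
    return adversary_scale if abs(x) <= 1.0 else 1.0

def adversarial_rwm(n_steps, adversary_scale=5.0, seed=0):
    """Random-walk Metropolis for an N(0,1) target with an adversarially
    modified, state-dependent proposal scale (illustrative sketch)."""
    rng = random.Random(seed)
    log_target = lambda x: -0.5 * x * x   # N(0,1) up to an additive constant

    def log_q(a, b, s):
        # Log density (up to a constant) of proposing b from a with scale s.
        return -0.5 * ((b - a) / s) ** 2 - math.log(s)

    x, samples = 0.0, []
    for _ in range(n_steps):
        sx = scale(x, adversary_scale)
        y = x + rng.gauss(0.0, sx)
        sy = scale(y, adversary_scale)
        # Full Metropolis-Hastings ratio: the state-dependent scale makes
        # the proposal asymmetric, so the q-terms must be included.
        log_alpha = (log_target(y) + log_q(y, x, sy)
                     - log_target(x) - log_q(x, y, sx))
        if math.log(rng.random()) < log_alpha:
            x = y
        samples.append(x)
    return samples
```

Running the chain for a few thousand steps gives sample moments close to those of N(0, 1), despite the adversarial tampering on [-1, 1]; the paper's counterexamples show this good behavior is not automatic without further assumptions.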


Ergodicity of Combocontinuous Adaptive MCMC Algorithms
This paper proves convergence to stationarity of certain adaptive MCMC algorithms, under assumptions including easily verifiable upper and lower bounds on the transition densities.
Air Markov Chain Monte Carlo
It is argued that many of the known adaptive MCMC algorithms may be transformed into corresponding Air versions, and empirical evidence is provided that the performance of the Air versions stays virtually the same.
An adaptive multiple-try Metropolis algorithm
An adaptive multiple-try Metropolis algorithm is proposed that tackles difficult Markov chain Monte Carlo problems by combining the flexibility of multiple-proposal samplers with the user-friendliness and optimality of adaptive algorithms.
Convergence and Efficiency of Adaptive MCMC
An algorithm is presented which automatically stops adapting once it determines that further adaptation will not increase the convergence speed, thus improving on the previous ergodicity results of Craiu et al. (2015).
Sampling by Divergence Minimization
A Markov chain Monte Carlo method is introduced that is designed to sample from target distributions with irregular geometry, using an adaptive scheme that rapidly adapts to the geometry around the chain's current position as it explores the surrounding space, without the need for many preexisting samples.
Adaptive schemes for piecewise deterministic Monte Carlo algorithms
An adaptive scheme that iteratively learns all or part of the covariance matrix of the target and takes advantage of the obtained information to modify the underlying process with the aim of increasing the speed of convergence is proposed.
A framework for adaptive MCMC targeting multimodal distributions
A new Monte Carlo method for sampling from multimodal distributions is proposed, based on splitting the task in two: finding the modes of the target distribution, and sampling given the knowledge of their locations.
Adaptive multiple-try MCMC (MCMC adaptatifs à essais multiples)
The MCMC methods, along with their adaptive and multiple-try extensions, are thoroughly explored in order to firmly anchor the study of the proposed adaptive Multiple-Try Metropolis (aMTM) algorithm.
Adaptive Component-Wise Multiple-Try Metropolis Sampling
A component-wise multiple-try Metropolis (CMTM) algorithm is proposed that chooses from a set of candidate moves sampled from different distributions, and dynamically builds a better set of proposal distributions as the Markov chain runs.
Adaptive, Delayed-Acceptance MCMC for Targets With Expensive Likelihoods
The resulting adaptive, delayed-acceptance [pseudo-marginal] Metropolis–Hastings algorithm is justified both theoretically and empirically and applied to a discretely observed Markov jump process characterizing predator–prey interactions and an ODE system describing the dynamics of an autoregulatory gene network.

References

Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms
We consider basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions. We prove convergence in distribution and a weak law of large numbers.
On the containment condition for adaptive Markov chain Monte Carlo algorithms
This paper derives various sufficient conditions to ensure Containment, and connects the convergence rates of algorithms with the tail properties of the corresponding target distributions, and presents a Summable Adaptive Condition which, when satisfied, proves ergodicity more easily.
On the ergodicity properties of some adaptive MCMC algorithms
It is proved that under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem.
The Containment Condition and Adapfail Algorithms
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly, and shows that if the containment condition is not satisfied, then the algorithm will perform very poorly.
Limit theorems for some adaptive MCMC algorithms with subgeometric kernels
It is shown that a diminishing adaptation assumption together with a drift condition for positive recurrence is enough to imply ergodicity, and strengthening the drift condition to a polynomial drift condition yields a strong law of large numbers for possibly unbounded functions.
On adaptive Markov chain Monte Carlo algorithms
It is shown that, under certain conditions, the stochastic process generated is ergodic with the appropriate stationary distribution; this is used to analyse an adaptive version of the random walk Metropolis algorithm in which the scale parameter σ is sequentially adapted using a Robbins–Monro type algorithm in order to find the optimal scale parameter σ_opt.
General state space Markov chains and MCMC algorithms
This paper surveys various results about Markov chains on general (non-countable) state spaces, beginning with an introduction to Markov chain Monte Carlo (MCMC) algorithms.
On the ergodicity of the adaptive Metropolis algorithm on unbounded domains
This paper describes sufficient conditions to ensure the correct ergodicity of the Adaptive Metropolis (AM) algorithm of Haario, Saksman, and Tamminen (9), for target distributions with non-compact support.
Adversarial queuing theory
An adversarial theory of queuing is developed aimed at addressing some of the restrictions inherent in probabilistic analysis and queuing theory based on time-invariant stochastic generation.