• Corpus ID: 189898201

Peskun-Tierney ordering for Markov chain and process Monte Carlo: beyond the reversible scenario

Authors: Christophe Andrieu and Samuel Livingstone
Journal: arXiv: Probability
Historically, time-reversibility of the transitions or processes underpinning Markov chain Monte Carlo methods (MCMC) has played a key role in their development, while the self-adjointness of associated operators, together with the use of classical functional analysis techniques on Hilbert spaces, has led to powerful and practically successful tools to characterize and compare their performance. Similar results for algorithms relying on nonreversible Markov processes are scarce. We show that… 
On the Convergence Time of Some Non-Reversible Markov Chain Monte Carlo Methods
It is commonly admitted that non-reversible Markov chain Monte Carlo (MCMC) algorithms usually yield more accurate MCMC estimators than their reversible counterparts. In this note, we show that in
Nonreversible Jump Algorithms for Bayesian Nested Model Selection
By lifting the model indicator variable, this simple algorithmic modification yields a nonreversible version of the popular reversible jump algorithms, providing samplers that can empirically outperform their reversible counterparts at no extra computational cost.
Boost your favorite Markov Chain Monte Carlo sampler using Kac's theorem: the Kick-Kac teleportation algorithm
A novel class of non-reversible Markov chains is introduced, each chain being defined on an extended state space and having an invariant probability measure admitting π as a marginal distribution.
Nonreversible MCMC from conditional invertible transforms: a complete recipe with convergence guarantees
This paper develops general tools to ensure that a class of nonreversible Markov kernels, possibly relying on complex transforms, has the desired invariance property and leads to convergent algorithms.
Subgeometric hypocoercivity for piecewise-deterministic Markov process Monte Carlo methods
We extend the hypocoercivity framework for piecewise-deterministic Markov process (PDMP) Monte Carlo established in [Andrieu et al. (2018)] to heavy-tailed target distributions, which exhibit
An asymptotic Peskun ordering and its application to lifted samplers
A Peskun ordering between two samplers, implying a dominance of one over the other, is known among the Markov chain Monte Carlo community for being a remarkably strong result, but it is also known
Markov chain Monte Carlo algorithms with sequential proposals
Two novel methods are introduced in which the trajectories leading to proposals in HMC are automatically tuned to avoid doubling back; they compare favorably to the No-U-Turn sampler (NUTS).
The Boomerang Sampler
This paper introduces the Boomerang Sampler as a novel class of continuous-time non-reversible Markov chain Monte Carlo algorithms and demonstrates theoretically and empirically that it can outperform existing benchmark piecewise deterministic Markov processes such as the bouncy particle sampler and the Zig-Zag sampler.
MetFlow: A New Efficient Method for Bridging the Gap between Markov Chain Monte Carlo and Variational Inference
A new computationally efficient method to combine Variational Inference (VI) with Markov Chain Monte Carlo (MCMC) is proposed, which is amenable to the reparametrization trick and does not rely on computationally expensive reverse kernels.
Exact targeting of Gibbs distributions using velocity-jump processes
This work introduces and studies a new family of velocity jump Markov processes directly amenable to exact simulation with the following two properties: i) trajectories converge in law when a


Markov Chain Monte Carlo and Irreversibility
Hypocoercivity of piecewise deterministic Markov process-Monte Carlo
In this work, we establish $\mathrm{L}^2$-exponential convergence for a broad class of Piecewise Deterministic Markov Processes recently proposed in the context of Markov Process Monte Carlo methods
Lifting -- A nonreversible Markov chain Monte Carlo Algorithm
This work reviews nonreversible Markov chains, which violate detailed balance yet still relax to a given target stationary distribution, provides a pseudocode implementation, reviews related work, and discusses the applicability of such methods.
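As a concrete illustration of the lifting idea, the sketch below builds the transition matrix of a lifted random walk on a ring: the walker carries a direction variable, keeps moving in that direction with high probability, and occasionally flips it. The uniform distribution remains invariant even though detailed balance fails. This is a hypothetical toy example, not taken from the cited work, and the parameters `n` and `eps` are arbitrary choices for the demonstration.

```python
import numpy as np

n = 6          # sites on the ring Z_n
eps = 0.1      # probability of flipping the direction variable

# Lifted state space: pairs (site i, direction d) with d in {+1, -1},
# encoded as index 2*i + d_idx.  Move one step in the current direction
# with probability 1 - eps; flip the direction (staying put) with eps.
P = np.zeros((2 * n, 2 * n))
for i in range(n):
    for d_idx, d in enumerate((+1, -1)):
        s = 2 * i + d_idx
        P[s, 2 * ((i + d) % n) + d_idx] = 1 - eps   # keep going
        P[s, 2 * i + (1 - d_idx)] = eps             # flip direction

pi = np.full(2 * n, 1.0 / (2 * n))   # uniform target on the lifted space

# Uniform is invariant, but the probability flux pi(x) P(x, y) is not
# symmetric: the chain circulates around the ring and violates detailed balance.
flux = pi[:, None] * P
stationary = np.allclose(pi @ P, pi)
reversible = np.allclose(flux, flux.T)
```

Here `stationary` is `True` while `reversible` is `False`: exactly the combination the review describes.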
Limit theorems for the zig-zag process
The performance of the zig-zag sampler is studied, focusing on the one-dimensional case, to identify conditions under which a central limit theorem holds and to characterise the asymptotic variance.
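The one-dimensional zig-zag process is simple enough to simulate exactly, which gives a feel for the ergodic averages these limit theorems concern. The sketch below is an illustrative simulation under assumed settings, not the paper's experiments: it targets N(0, 1), where the switching rate max(0, v·x) admits closed-form event times, and estimates the mean and variance from exact integrals along the piecewise-linear trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D zig-zag targeting N(0, 1): U(x) = x^2 / 2, switching rate
# lambda(x, v) = max(0, v * x).  Writing y = v * x, the next event time
# solves int_0^t max(0, y + s) ds = E with E ~ Exp(1), which inverts to
# t = -y + sqrt(max(y, 0)^2 + 2 E).
x, v = 0.0, 1.0
T = 0.0        # total elapsed time
sum_x = 0.0    # int x(s) ds, accumulated segment by segment
sum_x2 = 0.0   # int x(s)^2 ds

for _ in range(200_000):
    y = v * x
    E = rng.exponential()
    t = -y + np.sqrt(max(y, 0.0) ** 2 + 2.0 * E)
    # exact integrals of x and x^2 along the linear segment x(s) = x + v s
    sum_x += x * t + v * t * t / 2.0
    sum_x2 += ((x + v * t) ** 3 - x ** 3) / (3.0 * v)
    x += v * t
    v = -v         # velocity flips at each event
    T += t

mean = sum_x / T           # should approach 0
var = sum_x2 / T - mean**2 # should approach 1
```

Because the trajectory is piecewise linear, the time averages above involve no discretization error; all Monte Carlo error comes from the finite horizon, which is what the central limit theorem in the paper quantifies.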
The Bouncy Particle Sampler: A Nonreversible Rejection-Free Markov Chain Monte Carlo Method
An alternative scheme recently introduced in the physics literature, in which the target distribution is explored using a continuous-time nonreversible piecewise-deterministic Markov process, is studied, and several computationally efficient implementations of this Markov chain Monte Carlo scheme are proposed.
We analyze the convergence to stationarity of a simple nonreversible Markov chain that serves as a model for several nonreversible Markov chain sampling methods that are used in practice. Our
Peskun ordering is a partial ordering defined on the space of transition matrices of discrete time Markov chains. If the Markov chains are reversible with respect to a common stationary distribution
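A minimal numerical check of this ordering on a hypothetical two-state example: with a proposal that deterministically suggests the other state, the Metropolis acceptance rule min(1, π(y)/π(x)) dominates the Barker rule π(y)/(π(x)+π(y)) entry-wise off the diagonal, so Peskun's theorem orders their asymptotic variances. All numbers below are illustrative choices.

```python
import numpy as np

pi = np.array([0.3, 0.7])   # common stationary distribution on {0, 1}

# Build a 2x2 Metropolis-Hastings kernel for a proposal that always
# suggests the other state, given an acceptance function a(pi_x, pi_y).
def kernel(accept):
    P = np.zeros((2, 2))
    for x in range(2):
        y = 1 - x
        a = accept(pi[x], pi[y])
        P[x, y] = a
        P[x, x] = 1.0 - a
    return P

P_met = kernel(lambda px, py: min(1.0, py / px))   # Metropolis rule
P_bar = kernel(lambda px, py: py / (px + py))      # Barker rule

# Both kernels are pi-reversible: the flux pi(x) P(x, y) is symmetric.
flux = lambda P: pi[:, None] * P
rev_met = np.allclose(flux(P_met), flux(P_met).T)
rev_bar = np.allclose(flux(P_bar), flux(P_bar).T)

# Peskun ordering: Metropolis dominates Barker off the diagonal, hence
# has no larger asymptotic variance for any square-integrable function.
off = ~np.eye(2, dtype=bool)
peskun = np.all(P_met[off] >= P_bar[off])
```

The off-diagonal comparison (1 vs 0.7, and 3/7 vs 0.3) is strict here, matching the classical result that Metropolis acceptance is Peskun-optimal among acceptance rules of this form.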
Generalized and hybrid Metropolis-Hastings overdamped Langevin algorithms
It has been shown that the nonreversible overdamped Langevin dynamics enjoy better convergence properties in terms of spectral gap and asymptotic variance than the reversible one. In this article we
Improving the Convergence of Reversible Samplers
Working with the generator of Markov processes, it is proved that for some of the most commonly used performance criteria, namely spectral gap, asymptotic variance, and large deviation functionals, sampling is improved under appropriate reversible and irreversible perturbations of an initially given reversible sampler.
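The irreversible perturbations discussed here can be illustrated on a Gaussian target: adding a skew-symmetric matrix J to the drift of overdamped Langevin dynamics leaves the stationary distribution unchanged while making the process nonreversible. The sketch below is an assumed toy setup, not the paper's construction; it checks invariance numerically with an Euler-Maruyama discretization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reversible overdamped Langevin for U(x) = |x|^2 / 2 has drift -x.
# Perturbing to drift -(I + J) x with J skew-symmetric (J^T = -J) keeps
# N(0, I) stationary: Sigma = I solves the stationary Lyapunov equation
# A Sigma + Sigma A^T = 2 I because (I + J) + (I + J)^T = 2 I.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = np.eye(2) + J
h = 0.01                   # Euler-Maruyama step size

# 1000 independent chains, 3000 steps each; keep everything after burn-in.
x = np.zeros((1000, 2))
kept = []
for k in range(3000):
    x = x - h * x @ A.T + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    if k >= 1000:
        kept.append(x.copy())

samples = np.concatenate(kept)     # (2_000_000, 2) approximate draws
cov = np.cov(samples.T)            # should be close to the identity
```

The empirical covariance stays near the identity (up to O(h) discretization bias), confirming that the skew perturbation changes the dynamics, and potentially the convergence speed, without changing the target.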
Improving Asymptotic Variance of MCMC Estimators: Non-reversible Chains are Better
I show how any reversible Markov chain on a finite state space that is irreducible, and hence suitable for estimating expectations with respect to its invariant distribution, can be used to construct