
Ordering, Slicing And Splitting Monte Carlo Markov Chains

  • Antonietta Mira
  • Published 1998
  • Mathematics, Computer Science
Markov chain Monte Carlo is a method of approximating the integral of a function f with respect to a distribution π. A Markov chain that has π as its stationary distribution is simulated, producing samples X1, X2, …. The integral is approximated by taking the average of f(Xn) over the sample path. The standard way to construct such Markov chains is the Metropolis–Hastings algorithm. The class P of all Markov chains having π as their unique stationary distribution is very large, so it is important…
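As a concrete illustration of the setup in this abstract, here is a minimal random-walk Metropolis–Hastings sketch; the target, proposal scale, and function f below are illustrative choices, not from the paper:

```python
import math
import random

def metropolis_hastings(log_pi, x0, scale, n_samples):
    """Random-walk Metropolis sampler targeting the density pi,
    given only its log (up to an additive constant)."""
    x = x0
    samples = []
    for _ in range(n_samples):
        # Symmetric Gaussian proposal, so the Hastings ratio
        # reduces to pi(y) / pi(x).
        y = x + random.gauss(0.0, scale)
        if math.log(random.random()) < log_pi(y) - log_pi(x):
            x = y  # accept; otherwise the chain stays at x
        samples.append(x)
    return samples

# Approximate the integral of f(x) = x^2 against pi = N(0, 1),
# whose true value is 1, by averaging f over the sample path.
random.seed(0)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 1.0, 50000)
estimate = sum(x * x for x in samples) / len(samples)
```

Any chain in the class P described above would yield a consistent estimate; the papers listed below are largely about choosing a member of P with small asymptotic variance.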


On the use of auxiliary variables in Markov chain Monte Carlo

We study the slice sampler, a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence Metropolis–Hastings algorithm it is always possible to…

Slice Sampling

Markov chain sampling methods that adapt to characteristics of the distribution being sampled can be constructed using the principle that one can sample from a distribution by sampling uniformly from…
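The uniform-sampling principle this abstract describes admits a very short sketch when the slice can be written in closed form; the Exp(1) target below is an illustrative choice, not from the paper:

```python
import math
import random

def slice_sample_exp(x0, n_samples):
    """Slice sampler for pi(x) = exp(-x) on x > 0: alternately draw a
    height u uniformly under the density at the current point, then a
    new point uniformly from the slice {x : pi(x) > u} = (0, -log u)."""
    x = x0
    samples = []
    for _ in range(n_samples):
        u = random.uniform(0.0, math.exp(-x))  # vertical step
        x = random.uniform(0.0, -math.log(u))  # horizontal step
        samples.append(x)
    return samples

# The Exp(1) distribution has mean 1.
random.seed(1)
samples = slice_sample_exp(1.0, 50000)
mean = sum(samples) / len(samples)
```

For general targets the slice is not available in closed form and one falls back on stepping-out and shrinkage procedures, which is where the adaptivity mentioned in the abstract comes in.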

Efficiency and Convergence Properties of Slice Samplers

The slice sampler (SS) is a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence Metropolis–Hastings algorithm (IMHA) it is always possible…

Limit theorems for sequential MCMC methods

An $\mathbb{L}_r$ -inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods are established and conditions under which errors can be controlled uniformly in time are provided.


Markov chain Monte Carlo methods (MCMC) are commonly used in Bayesian statistics. In the last twenty years, many results have been established for the calculation of the exact convergence rate of…

On extended state-space constructions for Monte Carlo methods

A generic importance-sampling framework is described which admits virtually all Monte Carlo methods, including SMC and MCMC methods, as special cases, and hierarchical combinations of different Monte Carlo schemes can be justified as repeated applications of this framework.


Acknowledgments. This work is part of my doctoral research done under the direction of Jeffrey S. Rosenthal. I thank Peter Rosenthal for helpful discussions about the operator theory issues.

Towards Automatic Reversible Jump Markov Chain Monte Carlo

The automatic sampler that is introduced in the penultimate chapter of the thesis builds upon the first steps taken by Green (2003) and uses adaptive techniques to perform self-tuning and calibration for many trans-dimensional statistical problems.

Delayed Rejection in Reversible Jump Metropolis–Hastings

In a Metropolis–Hastings algorithm, rejection of proposed moves is an intrinsic part of ensuring that the chain converges to the intended target distribution. However, persistent rejection, perhaps…

Delayed rejection Hamiltonian Monte Carlo for sampling multiscale distributions

A delayed rejection variant of Hamiltonian Monte Carlo that makes one or more subsequent proposals each using a step size geometrically smaller than the last if an initial HMC trajectory is rejected, providing increased robustness to step size misspecification.
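The delayed-rejection idea in the two entries above (a smaller second try after a rejection, with a corrected acceptance ratio so that the target is preserved) can be sketched with random-walk proposals standing in for the paper's HMC trajectories; the target and scales below are illustrative assumptions:

```python
import math
import random

def dr_metropolis(log_pi, x0, s1, s2, n_iter):
    """Two-stage delayed-rejection Metropolis: if the first proposal
    (scale s1) is rejected, a smaller-scale (s2) second proposal is
    tried, with a second-stage acceptance ratio that keeps pi invariant."""
    def log_q(scale, a, b):
        # log density (up to a constant) of a N(a, scale^2) proposal at b
        return -0.5 * ((b - a) / scale) ** 2

    def acc(log_ratio):
        return 1.0 if log_ratio >= 0.0 else math.exp(log_ratio)

    x = x0
    chain = []
    for _ in range(n_iter):
        y1 = x + random.gauss(0.0, s1)
        a1 = acc(log_pi(y1) - log_pi(x))
        if random.random() < a1:
            x = y1
        else:
            # Second stage: smaller move; the ratio is corrected by the
            # probability of rejecting y1 from each end point.
            y2 = x + random.gauss(0.0, s2)
            a1_rev = acc(log_pi(y1) - log_pi(y2))
            if a1_rev < 1.0:  # otherwise the numerator is zero: reject
                num = log_pi(y2) + log_q(s1, y2, y1) + math.log1p(-a1_rev)
                den = log_pi(x) + log_q(s1, x, y1) + math.log1p(-a1)
                if math.log(random.random()) < num - den:
                    x = y2
        chain.append(x)
    return chain

# Target N(0, 1): a first stage with scale 2.5 rejects often; the
# 0.5-scale second stage rescues many of those moves.
random.seed(4)
chain = dr_metropolis(lambda x: -0.5 * x * x, 0.0, 2.5, 0.5, 30000)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
```

The HMC variant replaces the Gaussian proposals with full trajectories and shrinks the leapfrog step size geometrically instead of the proposal scale.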



Optimum Monte-Carlo sampling using Markov chains

SUMMARY The sampling method proposed by Metropolis et al. (1953) requires the simulation of a Markov chain with a specified π as its stationary distribution. Hastings (1970) outlined a general…

Exact sampling with coupled Markov chains and applications to statistical mechanics

This work describes a simple variant of this method that determines on its own when to stop and that outputs samples in exact accordance with the desired distribution, and uses couplings which have also played a role in other sampling schemes.
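The "determines on its own when to stop" mechanism is coupling from the past; a minimal sketch for a small finite chain (the reflecting random walk below is an illustrative target, not from the paper):

```python
import random

def cftp_sample(n_states, step):
    """Coupling from the past (Propp-Wilson): run coupled copies of the
    chain from every state, starting further and further in the past but
    reusing the SAME randomness at each time step; once all copies have
    coalesced by time 0, the common value is an exact stationary draw."""
    us = {}  # randomness for each past time, reused across restarts
    t = 1
    while True:
        states = list(range(n_states))
        for s in range(-t, 0):
            u = us.setdefault(s, random.random())
            states = [step(x, u) for x in states]
        if len(set(states)) == 1:
            return states[0]
        t *= 2  # not coalesced: restart from twice as far in the past

def rw_step(x, u, n=5):
    """Reflecting random walk on {0, ..., n-1}; its transition matrix is
    symmetric, so its stationary distribution is uniform."""
    return min(x + 1, n - 1) if u < 0.5 else max(x - 1, 0)

random.seed(5)
draws = [cftp_sample(5, rw_step) for _ in range(3000)]
```

Reusing the stored randomness when restarting further back is essential: redrawing it would bias the output, which is the subtlety the interruptible algorithm below is designed to avoid in a different way.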

Practical Markov Chain Monte Carlo

The case is made for basing all inference on one long run of the Markov chain and estimating the Monte Carlo error by standard nonparametric methods well-known in the time-series and operations research literature.
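One standard nonparametric error estimate of the kind this abstract advocates is the method of batch means; a minimal sketch, with an AR(1) run standing in for real MCMC output:

```python
import math
import random

def batch_means_se(samples, n_batches=30):
    """Monte Carlo standard error of the mean of a correlated chain:
    split the run into contiguous batches and use the variance of the
    batch means, which absorbs the autocorrelation."""
    m = len(samples) // n_batches
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    overall = sum(means) / n_batches
    var_bm = sum((b - overall) ** 2 for b in means) / (n_batches - 1)
    return math.sqrt(var_bm / n_batches)

# One long AR(1) run standing in for the output of a Markov chain.
random.seed(2)
x, chain = 0.0, []
for _ in range(30000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    chain.append(x)
# Larger than the naive sqrt(var/n) formula, because the batch means
# account for the chain's autocorrelation.
se = batch_means_se(chain)
```

The batch size must be large relative to the chain's autocorrelation time for the batch means to be approximately independent.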

Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation

An overrelaxed Markov chain Monte Carlo algorithm based on order statistics that can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed.
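The order-statistics construction admits a short sketch whenever the full conditional can be sampled directly; the standard normal "conditional" below is an illustrative assumption, not from the paper:

```python
import random

def ordered_overrelax(x, draw_conditional, k=15):
    """One ordered-overrelaxation update: draw k fresh values from the
    full conditional, locate the current state's rank among all k + 1
    ordered values, and move to the value with the mirrored rank."""
    pool = sorted([draw_conditional() for _ in range(k)] + [x])
    r = pool.index(x)   # 0-based rank of the current state
    return pool[k - r]  # mirrored rank induces negative correlation

# The update leaves the conditional invariant, so with a N(0, 1)
# conditional the chain explores a standard normal while suppressing
# random-walk behaviour.
random.seed(6)
x, chain = 0.0, []
for _ in range(20000):
    x = ordered_overrelax(x, lambda: random.gauss(0.0, 1.0))
    chain.append(x)
```

The negative correlation between successive states is what suppresses the random walk: the chain tends to jump to the opposite side of the conditional at each update.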

An interruptible algorithm for perfect sampling via Markov chains

A new algorithm is presented which again uses the same Markov chains to produce perfect samples from π, but is based on a different idea (namely, acceptance/rejection sampling), and eliminates user-impatience bias.

Markov Chains for Exploring Posterior Distributions

Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are…
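The Gibbs sampler mentioned here alternates draws from full conditionals; a minimal two-component sketch for a bivariate normal (the correlation value is an illustrative assumption):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples):
    """Gibbs sampler for a bivariate standard normal with correlation
    rho: each coordinate is drawn from its full conditional,
    x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x."""
    x = y = 0.0
    s = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    out = []
    for _ in range(n_samples):
        x = random.gauss(rho * y, s)
        y = random.gauss(rho * x, s)
        out.append((x, y))
    return out

random.seed(7)
pairs = gibbs_bivariate_normal(0.8, 30000)
```

As rho approaches 1 the conditionals become very narrow and the sampler moves in small steps, which is the kind of slow mixing the strategies in this paper are designed to diagnose and mitigate.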


We analyze the convergence to stationarity of a simple nonreversible Markov chain that serves as a model for several nonreversible Markov chain sampling methods that are used in practice. Our…

Metropolized independent sampling with comparisons to rejection sampling and importance sampling

In this paper, a special Metropolis-Hastings algorithm, Metropolized independent sampling, proposed first in Hastings (1970), is studied in full detail and shown to be superior to rejection sampling in two respects: asymptotic efficiency and ease of computation.
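The Metropolized independent sampler is a small change to Metropolis–Hastings: proposals ignore the current state, and the acceptance ratio becomes a ratio of importance weights. A minimal sketch (the target and proposal below are illustrative assumptions, not from the paper):

```python
import math
import random

def metropolized_independence(log_pi, log_q, draw_q, x0, n_samples):
    """Independence sampler: proposals come from a fixed density q, and
    a proposal y replaces the current state x with probability
    min(1, w(y) / w(x)), where w = pi / q is the importance weight."""
    x = x0
    lw_x = log_pi(x) - log_q(x)  # log weight of the current state
    samples = []
    for _ in range(n_samples):
        y = draw_q()
        lw_y = log_pi(y) - log_q(y)
        if math.log(random.random()) < lw_y - lw_x:
            x, lw_x = y, lw_y
        samples.append(x)
    return samples

# Target N(0, 1) with a heavier-tailed N(0, 2^2) proposal; bounded
# weights pi/q are what give the sampler its good convergence.
random.seed(3)
samples = metropolized_independence(
    lambda v: -0.5 * v * v,
    lambda v: -0.5 * (v / 2.0) ** 2,
    lambda: random.gauss(0.0, 2.0),
    0.0,
    40000,
)
mean = sum(samples) / len(samples)
```

Unlike rejection sampling, no envelope constant is needed; unlike importance sampling, the output is an unweighted sample path.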

Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications

Two applications in Bayesian image analysis are considered: a binary classification problem in which partial decoupling outperforms Swendsen-Wang and single-site Metropolis methods, and a positron emission tomography reconstruction that uses the gray level prior of Geman and McClure.

Rates of convergence of the Hastings and Metropolis algorithms

Recent results in Markov chain theory are applied to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and it is shown that geometric convergence essentially occurs if and only if $\pi$ has geometric tails.