The Multiple-Try Method and Local Optimization in Metropolis Sampling

@article{Liu2000TheMM,
  title={The Multiple-Try Method and Local Optimization in Metropolis Sampling},
  author={Jun S. Liu and Faming Liang and Wing Hung Wong},
  journal={Journal of the American Statistical Association},
  year={2000},
  volume={95},
  pages={121--134}
}
Abstract This article describes a new Metropolis-like transition rule, the multiple-try Metropolis, for Markov chain Monte Carlo (MCMC) simulations. By using this transition rule together with adaptive direction sampling, we propose a novel method for incorporating local optimization steps into an MCMC sampler in continuous state space. Numerical studies show that the new method performs significantly better than the traditional Metropolis-Hastings (M-H) sampler. With minor tailoring in using…
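
A minimal sketch of one multiple-try Metropolis transition, assuming a symmetric Gaussian random-walk proposal and the weight choice w(y, x) = π(y) (the λ(x, y) = 1/T(x, y) special case of the paper's weight function); the target log-density, number of trials k, and step size below are illustrative placeholders rather than the paper's settings, and the local-optimization / adaptive-direction-sampling component is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):
    # Stand-in target: unnormalized log-density of a standard 2-D Gaussian.
    return -0.5 * np.sum(x ** 2)

def mtm_step(x, k=5, step=1.0):
    """One multiple-try Metropolis step with a symmetric Gaussian proposal T.

    Uses weights w(y, x) = pi(y), which corresponds to choosing
    lambda(x, y) = 1 / T(x, y) when T is symmetric.
    """
    d = x.shape[0]
    # 1. Draw k trial points y_1, ..., y_k from T(x, .).
    ys = x + step * rng.standard_normal((k, d))
    logw_y = np.array([log_pi(y) for y in ys])

    # 2. Select one trial with probability proportional to its weight.
    w = np.exp(logw_y - logw_y.max())
    j = rng.choice(k, p=w / w.sum())
    y = ys[j]

    # 3. Draw k-1 reference points from T(y, .) and append the current x.
    refs = y + step * rng.standard_normal((k - 1, d))
    logw_ref = np.append([log_pi(z) for z in refs], log_pi(x))

    # 4. Accept y with probability min(1, sum_i w(y_i, x) / sum_i w(x*_i, y)).
    log_ratio = np.logaddexp.reduce(logw_y) - np.logaddexp.reduce(logw_ref)
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return y
    return x

x = np.zeros(2)
chain = []
for _ in range(2000):
    x = mtm_step(x)
    chain.append(x)
```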

An adaptive multiple-try Metropolis algorithm

TLDR
An adaptive multiple-try Metropolis algorithm designed to tackle problems of Markov chain Monte Carlo methods by combining the flexibility of multiple-proposal samplers with the user-friendliness and optimality of adaptive algorithms is proposed.

ON MULTIPLE TRY SCHEMES AND THE PARTICLE METROPOLIS-HASTINGS ALGORITHM

TLDR
In this note, similarities and differences among the MTM schemes and the PMH method are investigated.

Acceleration of the Multiple-Try Metropolis algorithm using antithetic and stratified sampling

TLDR
This work proposes a modification of the Multiple-Try Metropolis algorithm which allows for the use of correlated proposals, particularly antithetic and stratified proposals, and explores the use of quasi-Monte Carlo (QMC) methods to generate highly stratified samples.
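
A sketch, under stated assumptions, of how antithetic trial sets could be generated for use inside an MTM step: each Gaussian increment is reused with its sign flipped so the trials are negatively correlated. The helper name and interface are hypothetical, and the adjusted acceptance weights that the paper derives for correlated trials are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def antithetic_trials(x, k, step):
    """Hypothetical helper: k trial points around x in antithetic pairs.

    Each increment eps is used twice, as x + eps and x - eps, so the
    resulting trial set is negatively correlated by construction.
    """
    half = (k + 1) // 2
    eps = step * rng.standard_normal((half, x.shape[0]))
    trials = np.vstack([x + eps, x - eps])
    return trials[:k]

print(antithetic_trials(np.zeros(2), k=4, step=1.0))
```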

Interacting multiple try algorithms with different proposal distributions

TLDR
A new class of interacting Markov chain Monte Carlo algorithms, the interacting multiple-try Metropolis (IMTM), is designed to increase the efficiency of a modified multiple-try Metropolis (MTM) sampler; its interaction mechanism allows the IMTM to explore the state space more efficiently than competing algorithms.

Metropolis Sampling

TLDR
This document describes in detail all the elements involved in the MH algorithm and its most relevant variants, providing a quick but exhaustive overview of current Metropolis-based sampling methods.

Convergence Rate of Multiple-try Metropolis Independent sampler

The Multiple-try Metropolis (MTM) method is an interesting extension of the classical Metropolis-Hastings algorithm. However, theoretical understanding of its convergence behavior as well as whether

Antithetic Acceleration of the Multiple-Try Metropolis

TLDR
A modification of the Multiple-Try Metropolis algorithm which allows the use of correlated proposals, particularly antithetic proposals, is proposed; the stratification induced by Latin Hypercube sampling can also be particularly efficient.
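
A similar sketch of Latin Hypercube stratified trial generation, assuming a Gaussian proposal: each coordinate's k uniforms are placed one per bin, shuffled, and mapped through the inverse normal CDF. The helper is hypothetical, and using stratified trials inside MTM requires the modified acceptance rule from the paper, which is not shown here.

```python
import numpy as np
from scipy.stats import norm  # inverse normal CDF (norm.ppf)

rng = np.random.default_rng(2)

def lhs_trials(x, k, step):
    """Hypothetical helper: k trial points around x with Latin Hypercube
    stratification applied independently to each coordinate."""
    d = x.shape[0]
    # One uniform per bin and per coordinate, with the bin order shuffled
    # independently in every coordinate.
    bins = rng.permuted(np.tile(np.arange(k), (d, 1)), axis=1).T
    u = (bins + rng.uniform(size=(k, d))) / k
    return x + step * norm.ppf(u)

print(lhs_trials(np.zeros(2), k=5, step=1.0))
```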

Rapidly Mixing Multiple-try Metropolis Algorithms for Model Selection Problems

TLDR
It is proved that MTM can achieve a mixing time bound smaller than that of MH by a factor of the number of trials under a general setting applicable to high-dimensional model selection problems.

Adaptive Component-Wise Multiple-Try Metropolis Sampling

TLDR
A component-wise multiple-try Metropolis (CMTM) algorithm is proposed that chooses from a set of candidate moves sampled from different distributions and dynamically builds a better set of proposal distributions as the Markov chain runs.
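
A rough sketch of a component-wise multiple-try update that draws one trial per candidate proposal scale; the reference set uses the same per-slot scales around the selected trial with the current value substituted in the chosen slot, so the weights reduce to π under the symmetric-proposal, λ = 1/T convention used above. The candidate scales and the selection-count bookkeeping are illustrative, not the CMTM paper's actual adaptation rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative candidate proposal scales for a single coordinate; the CMTM
# paper adapts such a set dynamically, which is only mimicked by a crude
# selection counter here.
scales = np.array([0.1, 1.0, 10.0])
selection_counts = np.ones_like(scales)

def cmtm_coordinate_update(x, i, log_pi):
    """Update coordinate i of x using one trial from each candidate scale."""
    m = len(scales)
    trials = np.repeat(x[None, :], m, axis=0)
    trials[:, i] += scales * rng.standard_normal(m)
    logw = np.array([log_pi(t) for t in trials])

    # Select a trial with probability proportional to pi(trial).
    w = np.exp(logw - logw.max())
    j = rng.choice(m, p=w / w.sum())
    y = trials[j]

    # Reference set: slot-wise draws around y, with x in the selected slot.
    refs = np.repeat(y[None, :], m, axis=0)
    refs[:, i] += scales * rng.standard_normal(m)
    refs[j] = x
    logw_ref = np.array([log_pi(t) for t in refs])

    log_ratio = np.logaddexp.reduce(logw) - np.logaddexp.reduce(logw_ref)
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        selection_counts[j] += 1  # crude statistic a real adaptation could use
        return y
    return x

x = np.zeros(2)
for _ in range(1000):
    for i in range(2):
        x = cmtm_coordinate_update(x, i, lambda z: -0.5 * np.sum(z ** 2))
```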
...

References

SHOWING 1-10 OF 27 REFERENCES

Adaptive Markov Chain Monte Carlo through Regeneration

Abstract Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution π. This is done by calculating averages over the sample path of a

Performance of the Gibbs, Hit-and-Run, and Metropolis Samplers

Abstract We consider the performance of three Monte Carlo Markov-chain samplers: the Gibbs sampler, which cycles through coordinate directions; the Hit-and-Run (HR) sampler; and the Metropolis sampler, which

Facilitating the Gibbs Sampler: The Gibbs Stopper and the Griddy-Gibbs Sampler

TLDR
An importance sampling device is proposed for converting the output of the Gibbs sampler to a sample from the exact posterior, and an approach for implementing the Gibbs sampler in nonconjugate situations is presented.

Monte Carlo Sampling Methods Using Markov Chains and Their Applications

SUMMARY A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and
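
For context, the acceptance probability of this generalized (Hastings) sampler, which the multiple-try weights above build on, can be computed on the log scale as in the sketch below; the argument names are hypothetical, and this is not code from any of the papers.

```python
import numpy as np

def mh_accept_prob(log_pi_x, log_pi_y, log_q_x_to_y, log_q_y_to_x):
    """min(1, pi(y) q(y, x) / (pi(x) q(x, y))) computed on the log scale."""
    log_ratio = (log_pi_y + log_q_y_to_x) - (log_pi_x + log_q_x_to_y)
    return float(np.exp(min(0.0, log_ratio)))
```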

Inference from Iterative Simulation Using Multiple Sequences

TLDR
The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.

Predictive updating methods with application to Bayesian classification

TLDR
A unified treatment of switching regression models driven by a general binary process is presented, and a Bayesian testing procedure is developed that can be generalized to accommodate other Bayesian-like procedures.

A cavity-biased (T, V, μ) Monte Carlo method for the computer simulation of fluids

A modified sampling technique is proposed for use in Monte Carlo calculations in the grand canonical ensemble. The new method, called the cavity-biased (T, V, μ) Monte Carlo procedure, attempts

Estimation of Finite Mixture Distributions Through Bayesian Sampling

SUMMARY A formal Bayesian analysis of a mixture model usually leads to intractable calculations, since the posterior distribution takes into account all the partitions of the sample. We present

Multigrid Monte Carlo method. Conceptual foundations.

Goodman and Sokal, Physical Review D: Particles and Fields, 1989
TLDR
A stochastic generalization of the multigrid method, called multigrid Monte Carlo (MGMC), that reduces critical slowing down in Monte Carlo computations of lattice field theories, applicable to nonlinear σ models and to lattice gauge theories with or without bosonic matter fields.

Rotational insertion bias: a novel method for simulating dense phases of structured particles, with particular application to water

A novel method has been developed to bias the insertion of structured particles into dense phases during grand canonical and Gibbs ensemble Monte Carlo simulations. The method biases the orientation