Corpus ID: 220128263

On the convergence of the Metropolis algorithm with fixed-order updates for multivariate binary probability distributions

@article{Brgge2021OnTC,
  title={On the convergence of the Metropolis algorithm with fixed-order updates for multivariate binary probability distributions},
  author={Kai Br{\"u}gge and Asja Fischer and C. Igel},
  journal={ArXiv},
  year={2021},
  volume={abs/2006.14999}
}
The Metropolis algorithm is arguably the most fundamental Markov chain Monte Carlo (MCMC) method. But the algorithm is not guaranteed to converge to the desired distribution in the case of multivariate binary distributions (e.g., Ising models or stochastic neural networks such as Boltzmann machines) if the variables (sites or neurons) are updated in a fixed order, a setting commonly used in practice. The reason is that the corresponding Markov chain may not be irreducible. We propose a modified… 
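The failure mode described in the abstract concerns single-site Metropolis updates applied in a fixed scan order. As an illustration only (not the paper's proposed modification), the following sketch implements one deterministic sweep of fixed-order Metropolis updates for a small binary distribution p(x) ∝ exp(−E(x)) with a Boltzmann-machine-style energy; all function and variable names are our own.

```python
import math
import random

def energy(x, W, b):
    # Boltzmann-machine-style energy: E(x) = -0.5 * x'Wx - b'x
    n = len(x)
    e = -sum(b[i] * x[i] for i in range(n))
    e -= 0.5 * sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return e

def fixed_order_sweep(x, W, b, rng):
    # One sweep with sites visited in the fixed order 0..n-1 (the setting
    # the paper analyzes). Each proposal flips site i; acceptance follows
    # the standard Metropolis rule min(1, exp(-delta_E)).
    for i in range(len(x)):
        proposal = list(x)
        proposal[i] = 1 - proposal[i]
        delta = energy(proposal, W, b) - energy(x, W, b)
        if delta <= 0 or rng.random() < math.exp(-delta):
            x = proposal
    return x

rng = random.Random(0)
W = [[0.0, 1.0], [1.0, 0.0]]  # symmetric coupling, zero diagonal
b = [0.0, 0.0]
x = [0, 0]
for _ in range(100):
    x = fixed_order_sweep(x, W, b, rng)
print(x)
```

The paper's point is that a chain built from such deterministic-order updates need not be irreducible, so empirical averages from this sampler are not guaranteed to converge to expectations under p.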

References

SHOWING 1-10 OF 29 REFERENCES

The flip-the-state transition operator for restricted Boltzmann machines

Proposes a Metropolis-type MCMC algorithm relying on a transition operator that maximizes the probability of state changes and induces an irreducible, aperiodic, and hence properly converging Markov chain, also for the commonly used periodic update schemes.
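For a single binary variable, a Metropolis-type kernel that maximizes the probability of changing state moves to the other state with probability min(1, p_other / p_current), where p_other and p_current are the conditional probabilities of the two states. The sketch below assumes this rule corresponds to the flip-the-state idea summarized above (see the cited paper for the actual operator) and checks empirically that it leaves the target conditional invariant:

```python
import random

# Hedged sketch: single binary variable x_i with conditional probabilities
# p0 = p(x_i = 0 | rest) and p1 = 1 - p0. The maximal-flip kernel proposes
# the opposite state and accepts with min(1, p_other / p_current), which
# satisfies detailed balance with respect to (p0, p1).

def max_flip_step(xi, p1, rng):
    p = [1.0 - p1, p1]
    other = 1 - xi
    accept = min(1.0, p[other] / p[xi]) if p[xi] > 0 else 1.0
    return other if rng.random() < accept else xi

rng = random.Random(1)
counts = [0, 0]
xi = 0
n_steps = 20000
for _ in range(n_steps):
    xi = max_flip_step(xi, 0.3, rng)
    counts[xi] += 1
print(counts[1] / n_steps)  # close to the target probability 0.3
```

Detailed balance holds because p0 * min(1, p1/p0) = p1 * min(1, p0/p1) = min(p0, p1), so the long-run fraction of time in state 1 approaches p1.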

A bound for the convergence rate of parallel tempering for sampling restricted Boltzmann machines

Convergence rates of the Gibbs sampler, the Metropolis algorithm and other single-site updating dynamics

Sampling from a Markov random field Π can be performed efficiently via Monte Carlo methods by simulating a Markov chain that converges weakly to Π. We consider a class of local updating dynamics

Optimum Monte-Carlo sampling using Markov chains

SUMMARY The sampling method proposed by Metropolis et al. (1953) requires the simulation of a Markov chain with a specified π as its stationary distribution. Hastings (1970) outlined a general

Probabilistic Inference Using Markov Chain Monte Carlo Methods

The role of probabilistic inference in artificial intelligence is outlined, the theory of Markov chains is presented, and various Markov chain Monte Carlo algorithms are described, along with a number of supporting techniques.

Test of the Monte Carlo Method: Fast Simulation of a Small Ising Lattice

A very fast stochastic procedure is used to generate samples of configurations of a 4 × 4 periodic Ising lattice in zero field and the data give information about the Monte Carlo method itself, especially its rate of convergence.

Layerwise Systematic Scan: Deep Boltzmann Machines and Beyond

It is shown that the Gibbs sampler with a layerwise alternating scan order has a relaxation time no larger than that of random-update Gibbs sampling (measured in variable updates), which implies a comparison of the mixing times.

When is Eaton’s Markov chain irreducible?

Consider a parametric statistical model P(dx|θ) and an improper prior distribution ν(dθ) that together yield a (proper) formal posterior distribution Q(dθ|x). The prior is called strongly admissible

Learning in Markov Random Fields using Tempered Transitions

This paper shows that using MCMC operators based on tempered transitions enables the stochastic approximation algorithm to better explore highly multimodal distributions, which considerably improves parameter estimates in large, densely connected MRFs.

Training restricted Boltzmann machines: An introduction