# An Adaptive Interacting Wang–Landau Algorithm for Automatic Density Exploration

@article{Bornn2011AnAI,
title={An Adaptive Interacting Wang–Landau Algorithm for Automatic Density Exploration},
author={Luke Bornn and Pierre E. Jacob and Pierre Del Moral and A. Doucet},
journal={Journal of Computational and Graphical Statistics},
year={2011},
volume={22},
pages={749--773}
}
• L. Bornn, +1 author A. Doucet
• Published 18 September 2011
• Computer Science, Mathematics
• Journal of Computational and Graphical Statistics
While statisticians are well-accustomed to performing exploratory analysis in the modeling stage of an analysis, the notion of conducting preliminary general-purpose exploratory analysis in the Monte Carlo stage (or more generally, the model-fitting stage) of an analysis is an area that we feel deserves much further attention. Toward this aim, this article proposes a general-purpose algorithm for automatic density exploration. The proposed exploration algorithm combines and expands upon…
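As background for the entries below: the flat-histogram Wang–Landau update that this article builds on penalizes already-visited energy levels so that the chain explores all levels uniformly. The following is a minimal toy sketch of that generic update, not the paper's adaptive interacting algorithm; the four-state discrete system, the flatness tolerance, and the halving schedule are illustrative assumptions.

```python
import math
import random

def wang_landau(energies, ln_f_final=1e-6, flat_tol=0.2, seed=1):
    """Toy flat-histogram Wang-Landau on a small discrete system whose
    states are indexed 0..len(energies)-1 with the given energy levels."""
    rng = random.Random(seed)
    levels = sorted(set(energies))
    lng = {e: 0.0 for e in levels}   # running estimate of log g(E)
    hist = {e: 0 for e in levels}    # visit counts for the flatness check
    ln_f = 1.0                       # modification factor, halved when flat
    state = 0
    while ln_f > ln_f_final:
        # propose a uniform random state; accept with prob g(E_cur)/g(E_prop),
        # which biases sampling toward under-visited energy levels
        prop = rng.randrange(len(energies))
        if math.log(rng.random() + 1e-300) < lng[energies[state]] - lng[energies[prop]]:
            state = prop
        e = energies[state]
        lng[e] += ln_f               # penalize the visited level
        hist[e] += 1
        # histogram is "flat" when every level is within flat_tol of the mean
        mean = sum(hist.values()) / len(hist)
        if min(hist.values()) > (1.0 - flat_tol) * mean:
            ln_f *= 0.5
            hist = {e: 0 for e in levels}
    return lng
```

On the toy system `[0, 1, 1, 1]` the recovered ratio `exp(lng[1] - lng[0])` approaches the true degeneracy 3, illustrating how the halving of `ln f` trades exploration speed for estimation accuracy.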
Wang-Landau algorithm: An adapted random walk to boost convergence
• Computer Science
J. Comput. Phys.
• 2020
This work proposes an efficient random walk that uses geometrical information to circumvent the following inherent difficulties: avoiding overstepping strata, toning down concentration phenomena in high-dimensional spaces, and accommodating multidimensional distributions.
In search of lost mixing time: adaptive Markov chain Monte Carlo schemes for Bayesian variable selection with very large p
• Mathematics
• 2017
The availability of data sets with large numbers of variables is rapidly increasing. The effective application of Bayesian variable selection methods for regression with these data sets has proved…
Parallel and interacting stochastic approximation annealing algorithms for global optimisation
• Computer Science, Mathematics
Stat. Comput.
• 2017
The proposed PISAA algorithm involves simulating a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting SAA's desired convergence properties when a square-root cooling schedule is used.
The Wang-Landau Algorithm as Stochastic Optimization and its Acceleration
• Medicine, Computer Science
Physical review. E
• 2020
The optimization formulation provides another way to establish the convergence rate of the Wang-Landau algorithm, by exploiting the fact that almost surely the density estimates remain in a compact set, upon which the objective function is strongly convex.
Safe adaptive importance sampling: A mixture approach
• Mathematics
• 2020
This paper investigates adaptive importance sampling algorithms for which the policy, the sequence of distributions used to generate the particles, is a mixture distribution between a flexible kernel
A Framework for Adaptive MCMC Targeting Multimodal Distributions
• Mathematics
• 2018
We propose a new Monte Carlo method for sampling from multimodal distributions. The idea of this technique is based on splitting the task into two: finding the modes of a target distribution $\pi$…
An Adaptive Parallel Tempering Algorithm
• Mathematics
• 2012
Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail.
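The generic parallel tempering scheme described in this entry can be sketched minimally as follows. This is a textbook version, not the adaptive algorithm of the cited paper; the temperature ladder, step sizes, and the bimodal toy target are illustrative choices.

```python
import math
import random

def parallel_tempering(log_target, temps, n_iter=4000, step=1.0, seed=3):
    """Minimal parallel tempering on the real line: one random-walk
    Metropolis chain per temperature, plus a swap move between a random
    adjacent pair of temperatures after every sweep."""
    rng = random.Random(seed)
    x = [0.0] * len(temps)
    cold = []
    for _ in range(n_iter):
        # within-chain random-walk Metropolis at each temperature
        for i, t in enumerate(temps):
            prop = x[i] + rng.gauss(0.0, step * math.sqrt(t))
            if math.log(rng.random() + 1e-300) < (log_target(prop) - log_target(x[i])) / t:
                x[i] = prop
        # swap attempt between a random adjacent pair: accept with
        # exp((beta_i - beta_{i+1}) * (log pi(x_{i+1}) - log pi(x_i)))
        i = rng.randrange(len(temps) - 1)
        log_a = (log_target(x[i + 1]) - log_target(x[i])) * (1.0 / temps[i] - 1.0 / temps[i + 1])
        if math.log(rng.random() + 1e-300) < log_a:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold.append(x[0])   # only the coldest chain targets pi itself
    return cold

# Bimodal toy target: equal mixture of N(-3, 1) and N(3, 1)
def log_bimodal(z):
    return math.log(0.5 * math.exp(-(z + 3) ** 2 / 2)
                    + 0.5 * math.exp(-(z - 3) ** 2 / 2) + 1e-300)

samples = parallel_tempering(log_bimodal, temps=[1.0, 2.0, 4.0, 8.0])
```

The hot chains flatten the target enough to hop between the two modes, and the swap moves propagate those jumps down to the cold chain, which a plain Metropolis–Hastings sampler at temperature 1 would rarely achieve here.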
Chapter 12 PAWL-Forced Simulated Tempering
In this short note, we show how the parallel adaptive Wang–Landau (PAWL) algorithm of Bornn et al. (J Comput Graph Stat, to appear) can be used to automate and improve simulated tempering algorithms.
Boost your favorite Markov Chain Monte Carlo sampler using Kac's theorem: the Kick-Kac teleportation algorithm
• Computer Science, Mathematics
ArXiv
• 2022
A novel class of non-reversible Markov chains is introduced, each chain being defined on an extended state space and having an invariant probability measure admitting π as a marginal distribution.
Collective sampling through a Metropolis-Hastings like method: kinetic theory and numerical experiments.
• Mathematics
• 2019
The classical Metropolis-Hastings algorithm provides a simple method to construct a Markov Chain with an arbitrary stationary measure. In order to implement Monte Carlo methods, an elementary…
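For reference, the classical Metropolis–Hastings construction this entry starts from reduces, with a symmetric random-walk proposal, to the following sketch; the standard-normal target and tuning values are illustrative.

```python
import math
import random

def metropolis_hastings(log_pi, x0=0.0, step=1.0, n_iter=10000, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, pi(x') / pi(x)); the symmetric
    proposal makes the Hastings correction cancel."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random() + 1e-300) < log_pi(prop) - log_pi(x):
            x = prop
        out.append(x)   # on rejection the current state repeats, as it must
    return out

samples = metropolis_hastings(lambda z: -0.5 * z * z)  # standard normal target
```

Only the log-density up to an additive constant is needed, which is why the construction applies to unnormalized posteriors directly.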

## References

Showing 1–10 of 81 references.
A Generalized Wang–Landau Algorithm for Monte Carlo Computation
Inference for a complex system with a rough energy landscape is a central topic in Monte Carlo computation. Motivated by the successes of the Wang–Landau algorithm in discrete systems, we generalize…
The Wang-Landau algorithm in general state spaces: Applications and convergence analysis
• Mathematics
• 2010
The Wang-Landau algorithm (Wang and Landau (2001)) is a recent Monte Carlo method that has generated much interest in the Physics literature due to some spectacular simulation performances. The…
Improving SAMC using smoothing methods: Theory and applications to Bayesian model selection problems
Stochastic approximation Monte Carlo (SAMC) has recently been proposed by Liang, Liu and Carroll [J. Amer. Statist. Assoc. 102 (2007) 305-320] as a general simulation and optimization algorithm. In…
Learn From Thy Neighbor: Parallel-Chain and Regional Adaptive MCMC
• Mathematics
• 2009
Starting with the seminal paper of Haario, Saksman, and Tamminen (2001), a substantial amount of work has been done to validate adaptive Markov chain Monte Carlo…
Stochastic Approximation in Monte Carlo Computation
• Mathematics
• 2007
The Wang–Landau (WL) algorithm is an adaptive Markov chain Monte Carlo algorithm used to calculate the spectral density for a physical system. A remarkable feature of the WL algorithm is that it is…
Free energy methods for Bayesian inference: efficient exploration of univariate Gaussian mixture posteriors
• Mathematics, Computer Science
Stat. Comput.
• 2012
This work uses adaptive biasing Markov chain algorithms which adapt their targeted invariant distribution on the fly, in order to overcome sampling barriers along the chosen reaction coordinate, and shows in particular that the hyper-parameter that determines the order of magnitude of the variance of each component is both a convenient and an efficient reaction coordinate.
An adaptive Metropolis algorithm
• Mathematics
• 2001
A proper choice of a proposal distribution for Markov chain Monte Carlo methods, for example for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the…
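The adaptive Metropolis idea referenced here, tuning the random-walk proposal from the chain's own history, can be sketched in one dimension as follows. This is a simplified illustration of the Haario-style rule, not the paper's exact algorithm; the 2.38² scaling with a small regularization term follows the standard recipe, while the burn-in length and toy target are assumptions.

```python
import math
import random

def adaptive_metropolis(log_target, n_iter=20000, n0=200, seed=2):
    """1-D sketch of Haario-style adaptive Metropolis: after an initial
    period of n0 steps with a fixed proposal, the random-walk proposal
    s.d. is set from the running sample variance of the chain,
    scaled by 2.38^2 and regularized by a small constant."""
    rng = random.Random(seed)
    x, mean, m2 = 0.0, 0.0, 0.0   # Welford running mean / sum of squares
    out = []
    for n in range(1, n_iter + 1):
        if n <= n0:
            scale = 1.0                          # fixed proposal during burn-in
        else:
            var = m2 / (n - 1)
            scale = math.sqrt(2.38 ** 2 * var + 0.01)
        prop = x + rng.gauss(0.0, scale)
        if math.log(rng.random() + 1e-300) < log_target(prop) - log_target(x):
            x = prop
        delta = x - mean                         # Welford update with new state
        mean += delta / n
        m2 += delta * (x - mean)
        out.append(x)
    return out

samples = adaptive_metropolis(lambda z: -z * z / 8.0)   # target N(0, 2^2)
```

The adaptation uses the full chain history, so the algorithm is not Markovian; this is exactly the kind of scheme whose validity the parallel-chain and regional adaptive MCMC literature above sets out to establish.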
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
Abstract Suppose that one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the…
Interacting multiple try algorithms with different proposal distributions
• Computer Science, Mathematics
Stat. Comput.
• 2013
A new class of interacting Markov chain Monte Carlo algorithms is designed to increase the efficiency of a modified multiple-try Metropolis (MTM) sampler; the interaction mechanism allows the IMTM to explore the state space efficiently, leading to higher efficiency than competing algorithms.
Adaptive Markov Chain Monte Carlo through Regeneration
• Mathematics
• 1998
Abstract Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution π. This is done by calculating averages over the sample path of a…