An Adaptive Interacting Wang–Landau Algorithm for Automatic Density Exploration

@article{bornn_adaptive_wang_landau,
  title={An Adaptive Interacting Wang–Landau Algorithm for Automatic Density Exploration},
  author={Luke Bornn and Pierre E. Jacob and Pierre Del Moral and Arnaud Doucet},
  journal={Journal of Computational and Graphical Statistics},
  pages={749--773}
}
  • Published 18 September 2011
  • Computer Science, Mathematics
While statisticians are well-accustomed to performing exploratory analysis in the modeling stage of an analysis, the notion of conducting preliminary general-purpose exploratory analysis in the Monte Carlo stage (or more generally, the model-fitting stage) of an analysis is an area that we feel deserves much further attention. Toward this aim, this article proposes a general-purpose algorithm for automatic density exploration. The proposed exploration algorithm combines and expands upon…
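The Wang–Landau flattening idea behind the proposed algorithm can be illustrated with a minimal sketch. Here the bimodal target, the binning of the state space into strata, and every tuning constant are illustrative assumptions for the example, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D bimodal target with modes at -3 and +3.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

edges = np.linspace(-6.0, 6.0, 13)        # 12 strata covering [-6, 6]
log_theta = np.zeros(12)                  # running log bias per stratum
log_gamma = 1.0                           # Wang-Landau step size

def stratum(x):
    return int(np.clip(np.digitize(x, edges) - 1, 0, 11))

x, samples = 0.0, []
for it in range(20000):
    # Metropolis step targeting pi(x) / theta(stratum(x)): the learned
    # bias penalizes strata the chain has already visited often.
    y = x + rng.normal()
    log_a = (log_target(y) - log_theta[stratum(y)]) \
          - (log_target(x) - log_theta[stratum(x)])
    if np.log(rng.uniform()) < log_a:
        x = y
    log_theta[stratum(x)] += log_gamma    # raise the bias where we sit
    log_theta -= log_theta.mean()         # keep the biases centered
    samples.append(x)
    if it % 4000 == 3999:
        log_gamma /= 2.0                  # simple step-size decay schedule

samples = np.asarray(samples)
# With the bias in place, the chain crosses freely between the modes.
print((samples < -2).any() and (samples > 2).any())  # prints True
```

The decaying step size is a simplified stand-in for the flat-histogram schedule used in the Wang–Landau literature; the point of the sketch is only that biasing against well-visited strata drives the chain to explore both modes.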
Wang-Landau algorithm: An adapted random walk to boost convergence
This work proposes an efficient random walk that uses geometrical information to circumvent the following inherent difficulties: avoiding overstepping strata, toning down concentration phenomena in high-dimensional spaces, and accommodating multidimensional distributions.
In search of lost mixing time: adaptive Markov chain Monte Carlo schemes for Bayesian variable selection with very large p
The availability of data sets with large numbers of variables is rapidly increasing. The effective application of Bayesian variable selection methods for regression with these data sets has proved …
Parallel and interacting stochastic approximation annealing algorithms for global optimisation
The proposed PISAA algorithm simulates a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, and it inherits SAA's desired convergence properties when a square-root cooling schedule is used.
The Wang-Landau Algorithm as Stochastic Optimization and its Acceleration
The optimization formulation provides another way to establish the convergence rate of the Wang-Landau algorithm, by exploiting the fact that almost surely the density estimates remain in a compact set, upon which the objective function is strongly convex.
Safe adaptive importance sampling: A mixture approach
This paper investigates adaptive importance sampling algorithms for which the policy, the sequence of distributions used to generate the particles, is a mixture distribution between a flexible kernel …
A Framework for Adaptive MCMC Targeting Multimodal Distributions
We propose a new Monte Carlo method for sampling from multimodal distributions. The idea of this technique is based on splitting the task into two: finding the modes of a target distribution $\pi$ …
An Adaptive Parallel Tempering Algorithm
Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail.
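The parallel tempering scheme this snippet refers to can be sketched in a few lines: several chains target tempered versions of the target, and occasional state swaps between neighbouring temperatures let the cold chain escape its mode. The bimodal target, temperature ladder, and proposal scales below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Well-separated bimodal target: direct random-walk moves cannot cross
# the valley between the modes at -5 and +5.
def log_pi(x):
    return np.logaddexp(-0.5 * ((x - 5.0) / 0.5) ** 2,
                        -0.5 * ((x + 5.0) / 0.5) ** 2)

betas = np.array([1.0, 0.3, 0.1, 0.03])   # inverse-temperature ladder
xs = np.zeros(len(betas))                 # one state per temperature
cold = []
for it in range(20000):
    # Within-temperature random-walk Metropolis moves on pi^beta.
    for k, b in enumerate(betas):
        y = xs[k] + rng.normal(scale=1.0 / np.sqrt(b))
        if np.log(rng.uniform()) < b * (log_pi(y) - log_pi(xs[k])):
            xs[k] = y
    # Propose swapping states between a random adjacent pair.
    k = rng.integers(len(betas) - 1)
    log_a = (betas[k] - betas[k + 1]) * (log_pi(xs[k + 1]) - log_pi(xs[k]))
    if np.log(rng.uniform()) < log_a:
        xs[k], xs[k + 1] = xs[k + 1], xs[k]
    cold.append(xs[0])

cold = np.asarray(cold)
print("cold chain visits both modes:",
      (cold < -3).any() and (cold > 3).any())
```

The swap acceptance ratio follows from detailed balance on the product chain; hotter chains roam freely across the nearly flat tempered surface and hand well-mixed states down the ladder.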
Chapter 12 PAWL-Forced Simulated Tempering
In this short note, we show how the parallel adaptive Wang–Landau (PAWL) algorithm of Bornn et al. (J Comput Graph Stat, to appear) can be used to automate and improve simulated tempering algorithms.
Collective sampling through a Metropolis-Hastings like method: kinetic theory and numerical experiments.
The classical Metropolis-Hastings algorithm provides a simple method to construct a Markov Chain with an arbitrary stationary measure. In order to implement Monte Carlo methods, an elementary …
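The classical Metropolis–Hastings construction mentioned above reduces, with a symmetric proposal, to a very short loop. The standard-normal target and the proposal scale here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal random-walk Metropolis-Hastings targeting a standard normal;
# with a symmetric proposal the Hastings ratio reduces to pi(y)/pi(x).
def log_pi(x):
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(50000):
    y = x + rng.normal(scale=2.0)         # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        x = y                             # accept; otherwise keep x
    chain.append(x)

chain = np.asarray(chain)
print(chain.mean(), chain.std())          # mean ≈ 0, std ≈ 1
```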
Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities
Numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom.
A Generalized Wang–Landau Algorithm for Monte Carlo Computation
Inference for a complex system with a rough energy landscape is a central topic in Monte Carlo computation. Motivated by the successes of the Wang–Landau algorithm in discrete systems, we generalize …
The Wang-Landau algorithm in general state spaces: Applications and convergence analysis
The Wang-Landau algorithm (Wang and Landau (2001)) is a recent Monte Carlo method that has generated much interest in the Physics literature due to some spectacular simulation performances. The …
Improving SAMC using smoothing methods: Theory and applications to Bayesian model selection problems
Stochastic approximation Monte Carlo (SAMC) has recently been proposed by Liang, Liu and Carroll [J. Amer. Statist. Assoc. 102 (2007) 305-320] as a general simulation and optimization algorithm. In …
Learn From Thy Neighbor: Parallel-Chain and Regional Adaptive MCMC
Starting with the seminal paper of Haario, Saksman, and Tamminen (Haario, Saksman, and Tamminen 2001), a substantial amount of work has been done to validate adaptive Markov chain Monte Carlo …
Stochastic Approximation in Monte Carlo Computation
The Wang–Landau (WL) algorithm is an adaptive Markov chain Monte Carlo algorithm used to calculate the spectral density for a physical system. A remarkable feature of the WL algorithm is that it is …
Free energy methods for Bayesian inference: efficient exploration of univariate Gaussian mixture posteriors
This work uses adaptive biasing Markov chain algorithms which adapt their targeted invariant distribution on the fly, in order to overcome sampling barriers along the chosen reaction coordinate, and shows in particular that the hyper-parameter that determines the order of magnitude of the variance of each component is both a convenient and an efficient reaction coordinate.
An adaptive Metropolis algorithm
A proper choice of a proposal distribution for Markov chain Monte Carlo methods, for example for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the …
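The covariance-adaptation idea of the Haario-style adaptive Metropolis algorithm can be sketched as follows: after a warm-up, the Gaussian proposal covariance tracks the empirical covariance of the chain so far, scaled by 2.38²/d. The correlated-Gaussian target, warm-up length, and regularizer are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

d = 2
target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])   # correlated Gaussian target
prec = np.linalg.inv(target_cov)

def log_pi(x):
    return -0.5 * x @ prec @ x

x = np.zeros(d)
prop_cov = 0.1 * np.eye(d)                 # fixed warm-up proposal
history = [x.copy()]
for it in range(20000):
    if it >= 1000 and it % 100 == 0:       # periodically re-estimate
        emp = np.cov(np.asarray(history).T)
        prop_cov = (2.38 ** 2 / d) * emp + 1e-6 * np.eye(d)
    y = x + rng.multivariate_normal(np.zeros(d), prop_cov)
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        x = y
    history.append(x.copy())

hist = np.asarray(history)[5000:]
# The sample covariance should approach the target's [[1, .9], [.9, 1]].
print(np.cov(hist.T))
```

Re-estimating every 100 iterations rather than every step keeps the sketch cheap; the published algorithm uses a recursive covariance update instead.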
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
Abstract Suppose that one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the …
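A standard instance of this auxiliary-variable construction is the slice sampler: introduce u | x ~ U(0, π(x)), so that (x, u) is uniform under the density's graph, and alternate the two conditionals. The Gaussian target is an illustrative choice for which the slice is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(4)

# Slice sampler for a standard normal (unnormalized density exp(-x^2/2)).
def pi_unnorm(x):
    return np.exp(-0.5 * x * x)

x, draws = 0.0, []
for _ in range(20000):
    u = rng.uniform(0.0, pi_unnorm(x))     # auxiliary slice height
    # The slice {y : pi(y) > u} for this target is an explicit interval.
    half = np.sqrt(-2.0 * np.log(u))
    x = rng.uniform(-half, half)           # uniform draw on the slice
    draws.append(x)

draws = np.asarray(draws)
print(draws.std())                         # ≈ 1.0
```

For targets without a closed-form slice, the interval is found by stepping-out and shrinkage procedures; the two-conditional structure is unchanged.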
Interacting multiple try algorithms with different proposal distributions
A new class of interacting Markov chain Monte Carlo algorithms designed to increase the efficiency of a modified multiple-try Metropolis (MTM) sampler; the interaction mechanism allows the IMTM to efficiently explore the state space, leading to higher efficiency than other competing algorithms.
Adaptive Markov Chain Monte Carlo through Regeneration
Abstract Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution π. This is done by calculating averages over the sample path of a …