Unbiased Monte Carlo Cluster Updates with Autoregressive Neural Networks

Dian Wu, Riccardo Rossi, Giuseppe Carleo
Efficient sampling of complex high-dimensional probability densities is a central task in computational science. Machine-learning techniques based on autoregressive neural networks have recently been shown to provide good approximations of probability distributions of interest in physics. In this work, we propose a systematic way to remove the intrinsic bias associated with these variational approximations, combining them with Markov-chain Monte Carlo in an automatic scheme to efficiently…
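The bias-removal scheme the abstract alludes to can be illustrated with a Metropolised independence sampler: draw proposals from an approximate distribution q (here a fixed toy array standing in for a trained autoregressive network, not the paper's actual model) and accept or reject against the target p. A minimal sketch, under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target over 3 discrete states, and an imperfect "learned" proposal q
# (a stand-in for samples from an autoregressive network).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

def metropolized_independence_step(x):
    """One Metropolis-Hastings step with an independent proposal q.

    Since the proposal ignores the current state, the acceptance ratio
    reduces to the importance-weight ratio w(x')/w(x) with w = p/q.
    The accept/reject step removes the bias of sampling from q alone.
    """
    x_new = rng.choice(len(q), p=q)
    accept = min(1.0, (p[x_new] / q[x_new]) / (p[x] / q[x]))
    return x_new if rng.random() < accept else x

# A short chain: the empirical distribution approaches p, not q.
x, counts = 0, np.zeros(3)
for _ in range(200_000):
    x = metropolized_independence_step(x)
    counts[x] += 1
print(counts / counts.sum())   # close to [0.5, 0.3, 0.2]
```

The closer q is to p, the higher the acceptance rate and the shorter the autocorrelation time, which is what motivates training an expressive variational model as the proposal.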
3 Citations


Analysis of autocorrelation times in Neural Markov Chain Monte Carlo simulations
A deepened study of autocorrelations in Neural Markov Chain Monte Carlo simulations, a version of the traditional Metropolis algorithm that employs neural networks to provide independent proposals; a scheme incorporating partial heat-bath updates is also proposed.
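The quantity analysed in this citing work, the integrated autocorrelation time, can be estimated from any Markov-chain time series. A minimal sketch (the fixed summation window is a simplification; production estimators use an adaptive window):

```python
import numpy as np

def tau_int(x, window=50):
    """Integrated autocorrelation time: 1/2 + sum over rho(t), t < window."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(x * x)
    rho = [np.mean(x[:-t] * x[t:]) / var for t in range(1, window)]
    return 0.5 + float(np.sum(rho))

rng = np.random.default_rng(0)
iid = rng.normal(size=100_000)           # uncorrelated series: tau ~ 0.5
ar = np.empty(100_000)                   # AR(1) with phi = 0.9: much larger tau
ar[0] = 0.0
for t in range(1, len(ar)):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()
print(tau_int(iid), tau_int(ar))         # iid near 0.5; AR(1) markedly larger
```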
Sampling Lattices in Semi-Grand Canonical Ensemble with Autoregressive Machine Learning
Calculating thermodynamic potentials and observables efficiently and accurately is key for the application of statistical mechanics simulations to materials science. However, naive Monte Carlo…
Efficient modeling of trivializing maps for lattice ϕ4 theory using normalizing flows: A first look at scalability
General-purpose Markov Chain Monte Carlo sampling algorithms suffer from a dramatic reduction in efficiency as the system being studied is driven towards a critical point through, for example, taking…

References
Solving Statistical Mechanics using Variational Autoregressive Networks
  • Dian Wu, Lei Wang, Pan Zhang
  • Physics, Computer Science
  • Physical Review Letters
  • 2019
This work proposes a general framework for solving the statistical mechanics of finite-size systems using autoregressive neural networks, which computes the variational free energy, estimates physical quantities such as entropy, magnetizations, and correlations, and generates uncorrelated samples, all at once.
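The variational free energy in this reference is estimated by sampling the autoregressive factorization q(s) = ∏ᵢ q(sᵢ | s₁…sᵢ₋₁) and averaging E(s) + (1/β) log q(s). A toy sketch for a 4-spin Ising chain, where the hand-written conditional is a stand-in for a trained network (all numerical values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta = 4, 1.0   # 4 spins, inverse temperature (toy values)

def cond_prob_up(prev):
    """q(s_i = +1 | s_{<i}) as a function of the previous spins;
    a stand-in for the output of an autoregressive network."""
    return 1.0 / (1.0 + np.exp(-0.5 * sum(prev)))

def sample_and_logq():
    """Draw one configuration autoregressively and return its log q."""
    s, logq = [], 0.0
    for _ in range(N):
        p_up = cond_prob_up(s)
        if rng.random() < p_up:
            s.append(+1); logq += np.log(p_up)
        else:
            s.append(-1); logq += np.log(1.0 - p_up)
    return np.array(s), logq

def energy(s):
    """1D Ising chain energy with open boundaries."""
    return -np.sum(s[:-1] * s[1:])

# Variational free energy estimate: F_q = E_q[ E(s) + (1/beta) log q(s) ].
samples = [sample_and_logq() for _ in range(50_000)]
F_q = np.mean([energy(s) + logq / beta for s, logq in samples])
print(F_q)
```

By the Gibbs inequality, F_q upper-bounds the true free energy (here −ln Z ≈ −4.07 for this chain), so minimising F_q over network parameters trains the model; sampling is embarrassingly parallel because each configuration is drawn independently.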
Flow-based generative models for Markov chain Monte Carlo in lattice field theory
A Markov chain update scheme using a machine-learned flow-based generative model is proposed for Monte Carlo sampling in lattice field theories and is compared with HMC and local Metropolis sampling for ϕ4 theory in two dimensions.
The Neural Autoregressive Distribution Estimator
A new approach for modeling the distribution of high-dimensional vectors of discrete variables, inspired by the restricted Boltzmann machine, which outperforms other multivariate binary distribution estimators on several datasets and performs similarly to a large (but intractable) RBM.
Cluster Monte Carlo algorithms
In recent years, a better understanding of the Monte Carlo method has provided us with many new techniques in different areas of statistical physics. Of particular interest are so-called cluster…
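The classic cluster updates this reference surveys can be sketched with the Wolff algorithm for the 2D Ising model: grow a cluster of aligned spins by activating bonds with probability 1 − exp(−2β), then flip the whole cluster. A minimal sketch (lattice size and temperature are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta = 8, 0.3                       # small lattice, toy temperature
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta)      # Wolff bond-activation probability

def wolff_update(spins):
    """One Wolff step: grow a cluster of aligned spins and flip it."""
    i, j = rng.integers(L), rng.integers(L)
    seed_spin = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L   # periodic boundaries
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                    and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        spins[x, y] *= -1                         # flip the whole cluster
    return len(cluster)

sizes = [wolff_update(spins) for _ in range(1000)]
print(np.mean(sizes))
```

Flipping whole clusters rather than single spins is what defeats critical slowing down near the transition; the paper under review generalises this idea by letting a neural network propose the collective update.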
Neural Importance Sampling
This work introduces piecewise-polynomial coupling transforms that greatly increase the modeling power of individual coupling layers and derives a gradient-descent-based optimization of the Kullback-Leibler and χ2 divergences for the specific application of Monte Carlo integration with unnormalized stochastic estimates of the target distribution.
Generalizing Hamiltonian Monte Carlo with Neural Networks
This work presents a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution, and releases an open-source TensorFlow implementation.
Neural Network Renormalization Group
We present a variational renormalization group (RG) approach based on a reversible generative model with hierarchical architecture. The model performs hierarchical change-of-variables transformations…
Accelerated Monte Carlo simulations with restricted Boltzmann machines
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to…
Self-learning Monte Carlo method
A general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), is proposed, in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation.
Neural networks-based variationally enhanced sampling
By combining a variational approach with deep learning, much progress can be made in extending the scope of atomistic-based simulations, bridging the fields of enhanced sampling and machine learning.