Corpus ID: 88521229

Average of Recentered Parallel MCMC for Big Data

@article{Wu2019AverageOR,
  title={Average of Recentered Parallel MCMC for Big Data},
  author={Changye Wu and Christian P. Robert},
  journal={arXiv: Computation},
  year={2019}
}
In the big data context, traditional MCMC methods, such as Metropolis-Hastings algorithms and hybrid Monte Carlo, scale poorly because they need to evaluate the likelihood over the whole data set at each iteration. In order to resurrect MCMC methods, numerous approaches have been proposed, falling into two categories: divide-and-conquer and subsampling. In this article, we study parallel MCMC and propose a new combination method in the divide-and-conquer framework. Compared with some parallel… 
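
Since the abstract is truncated, only the general shape of a recenter-and-average combination can be illustrated here. The Python sketch below recenters each subset chain around a common location (here, the average of the subset posterior means) and then averages draws across subsets; the toy sub-posterior sampler run_subset_mcmc, the Gaussian sub-posterior, and the particular recentering rule are illustrative assumptions rather than the paper's actual procedure.

import numpy as np

def run_subset_mcmc(data_subset, n_draws=5000, seed=0):
    # Toy stand-in for a sub-posterior MCMC run (assumption): draws from the
    # Gaussian posterior of the subset mean under a flat prior.
    rng = np.random.default_rng(seed)
    m = data_subset.mean()
    s = data_subset.std(ddof=1) / np.sqrt(len(data_subset))
    return rng.normal(m, s, size=n_draws)

def recenter_and_average(subset_draws):
    # Recenter each subset chain around the average of the subset posterior
    # means, then average draws across subsets (illustrative rule, not
    # necessarily the paper's exact recentering).
    means = np.array([d.mean() for d in subset_draws])
    center = means.mean()
    recentered = [d - m + center for d, m in zip(subset_draws, means)]
    return np.mean(recentered, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, size=10_000)
    subsets = np.array_split(data, 10)
    draws = [run_subset_mcmc(s, seed=k) for k, s in enumerate(subsets)]
    combined = recenter_and_average(draws)
    print(combined.mean(), combined.std(ddof=1))

Heuristically, a sub-posterior built from n/K observations is roughly sqrt(K) times wider than the full posterior, so averaging K roughly independent sub-posterior draws restores approximately the full-posterior scale; this is the intuition behind averaging-type combinations.
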
3 Citations


Divide and Recombine for Large and Complex Data: Model Likelihood Functions Using MCMC and TRMM Big Data Analysis

An innovative D&R procedure is proposed to compute likelihood functions of data-model (DM) parameters for big data by fitting a density to the MCMC draws from each subset DM likelihood function and then recombining the fitted densities.
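
A rough sketch of that recombination step, under the simplifying assumptions of a one-dimensional parameter and a Gaussian fit to each subset's draws (the paper's density-fitting choice may differ):

import numpy as np

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * np.log(2.0 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

def recombine_fitted_densities(subset_draws, grid):
    # Fit a Gaussian to each subset's MCMC draws and recombine by multiplying
    # the fitted densities (i.e. summing log densities) on a grid.
    log_post = np.zeros_like(grid)
    for draws in subset_draws:
        mu, sigma = draws.mean(), draws.std(ddof=1)
        log_post += gaussian_logpdf(grid, mu, sigma)
    log_post -= log_post.max()            # numerical stabilisation
    dens = np.exp(log_post)
    return dens / np.trapz(dens, grid)    # normalise on the grid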

Modeling Network Populations via Graph Distances

A new class of models for multiple networks is introduced that parameterizes a distribution on labeled graphs in terms of a Fréchet mean graph and a parameter controlling the concentration of the distribution about its mean.

A Survey of Bayesian Statistical Approaches for Big Data

The question of whether focusing only on improving computational algorithms and infrastructure will be enough to face the challenges of Big Data is addressed.

References

Showing 1-10 of 16 references

Parallelizing MCMC with Random Partition Trees

A new EP-MCMC algorithm, PART, is proposed that applies random partition trees to combine the subset posterior draws; the resulting combination is distribution-free, easy to resample from, and adapts to multiple scales.

On Markov chain Monte Carlo methods for tall data

An original subsampling-based approach is proposed that samples from a distribution provably close to the posterior distribution of interest, yet can require fewer than $O(n)$ data-point likelihood evaluations per iteration for certain statistical models in favourable scenarios.

Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach

This paper describes a methodology for scaling up the Metropolis-Hastings (MH) algorithm: an approximate implementation of the accept/reject step that only requires evaluating the likelihood of a random subset of the data, yet is guaranteed to coincide with the accept/reject step based on the full data set with probability exceeding a user-specified tolerance level.
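
A minimal sketch of such an adaptive subsampling accept/reject step is given below; it uses a plain Hoeffding bound and assumes a user-supplied bound C on the per-datum log-likelihood ratio, whereas the cited procedure relies on sharper concentration inequalities and further refinements.

import numpy as np

def approx_mh_accept(theta, theta_prop, data, loglik, log_u, rng,
                     batch=100, delta=0.01, C=1.0):
    # Approximate MH accept/reject on an adaptively grown random subsample,
    # with a Hoeffding-style confidence bound (simplified sketch).  `log_u` is
    # assumed to already contain the log-uniform draw plus the prior/proposal
    # log-ratios; `C` bounds |log-likelihood ratio| per data point (assumption).
    n = len(data)
    psi = log_u / n                              # per-datum acceptance threshold
    perm = rng.permutation(n)
    seen, running_sum, lam = 0, 0.0, 0.0
    while seen < n:
        idx = perm[seen:seen + batch]
        running_sum += np.sum(loglik(data[idx], theta_prop) - loglik(data[idx], theta))
        seen += len(idx)
        lam = running_sum / seen                 # running mean of log-lik ratios
        conf = C * np.sqrt(np.log(2.0 / delta) / (2.0 * seen))   # Hoeffding bound
        if abs(lam - psi) > conf:
            return lam > psi                     # confident decision on a subsample
    return lam > psi                             # fell back to the whole data set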

Parallelizing MCMC via Weierstrass Sampler

This article proposes a new Weierstrass sampler for parallel MCMC based on independent subsets; it approximates draws from the full-data posterior by combining the posterior draws from independent subset MCMC chains, and thus enjoys higher computational efficiency.

Asymptotically Exact, Embarrassingly Parallel MCMC

This paper presents a parallel Markov chain Monte Carlo (MCMC) algorithm in which subsets of data are processed independently, with very little communication; it is proved to generate asymptotically exact samples, and its ability to parallelize burn-in and sampling is demonstrated empirically in several models.
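
The simplest combination rule in this line of work approximates each sub-posterior by a Gaussian and multiplies the densities; the sketch below shows only that parametric rule (the nonparametric and semiparametric variants discussed for embarrassingly parallel MCMC are omitted) and expects each chain's draws as a 2-D array of shape (n_samples, dim).

import numpy as np

def gaussian_product_combine(subset_draws):
    # Parametric combination: approximate each sub-posterior by a Gaussian and
    # form the product density, whose precision is the sum of the sub-posterior
    # precisions and whose mean is the precision-weighted average of the means.
    precisions, weighted_means = [], []
    for draws in subset_draws:                        # draws: (n_samples, dim)
        cov = np.atleast_2d(np.cov(draws, rowvar=False))
        prec = np.linalg.inv(cov)
        precisions.append(prec)
        weighted_means.append(prec @ draws.mean(axis=0))
    combined_cov = np.linalg.inv(sum(precisions))
    combined_mean = combined_cov @ sum(weighted_means)
    return combined_mean, combined_cov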

Expectation Propagation as a Way of Life

EP is revisited as a prototype for scalable algorithms that partition big datasets into many parts and analyze each part in parallel to perform inference on shared parameters; the approach is particularly efficient for hierarchical models.

WASP: Scalable Bayes via barycenters of subset posteriors

The Wasserstein posterior (WASP) has an atomic form, facilitating straightforward estimation of posterior summaries of functionals of interest, and theoretical justification is provided in terms of posterior consistency and algorithm efficiency.
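
In one dimension the W2 barycenter of atomic measures can be computed by averaging quantiles across subsets, which gives a compact illustration of the WASP combination; the general multivariate case requires solving a linear program and is not shown here.

import numpy as np

def wasp_1d(subset_draws, n_atoms=1000):
    # 1-D Wasserstein (W2) barycenter of subset posteriors via quantile
    # averaging: average the q-th quantile of each subset's draws over subsets.
    qs = np.linspace(0.0, 1.0, n_atoms, endpoint=False) + 0.5 / n_atoms
    quantiles = np.stack([np.quantile(d, qs) for d in subset_draws])
    return quantiles.mean(axis=0)        # atoms of the barycenter measure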

Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget

This work introduces an approximate MH rule based on a sequential hypothesis test that allows us to accept or reject samples with high confidence using only a fraction of the data required for the exact MH rule.
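
A simplified sketch of the sequential-test idea follows: data are consumed in mini-batches and a t-test decides, at a user-chosen error level, whether the mean log-likelihood ratio exceeds the per-datum MH threshold; the finite-population correction and calibration details of the original procedure are omitted.

import numpy as np
from scipy import stats

def austerity_mh_accept(theta, theta_prop, data, loglik, log_u, rng,
                        batch=100, eps=0.05):
    # Sequential-test MH decision (simplified sketch).  `log_u` is assumed to
    # already include the log-uniform draw plus the prior/proposal log-ratios.
    n = len(data)
    mu0 = log_u / n                          # threshold for the per-datum mean
    perm = rng.permutation(n)
    diffs = np.empty(0)
    seen = 0
    while seen < n:
        idx = perm[seen:seen + batch]
        new = loglik(data[idx], theta_prop) - loglik(data[idx], theta)
        diffs = np.concatenate([diffs, new])
        seen = len(diffs)
        se = diffs.std(ddof=1) / np.sqrt(seen)
        if se == 0:
            break
        t = (diffs.mean() - mu0) / se
        p = 1.0 - stats.t.cdf(abs(t), df=seen - 1)   # one-sided p-value
        if p < eps:                                  # confident decision
            return diffs.mean() > mu0
    return diffs.mean() > mu0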

Stochastic Gradient Hamiltonian Monte Carlo

A variant is introduced that uses second-order Langevin dynamics with a friction term counteracting the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution.
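
A bare-bones sketch of one such update is shown below, discretising underdamped Langevin dynamics with a friction term; the gradient-noise estimate used by the original algorithm to correct the injected noise is omitted.

import numpy as np

def sghmc_trajectory(theta, grad_log_post_est, rng, n_steps=20,
                     step_size=1e-3, friction=1.0):
    # Euler discretisation of second-order (underdamped) Langevin dynamics with
    # friction; `grad_log_post_est` is a noisy (mini-batch) gradient of the log
    # posterior.  Constants and the noise-correction term are simplified.
    v = rng.normal(size=theta.shape)                 # resample momentum
    for _ in range(n_steps):
        theta = theta + step_size * v
        noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * friction * step_size)
        v = v + step_size * grad_log_post_est(theta) - step_size * friction * v + noise
    return theta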

Patterns of Scalable Bayesian Inference

This paper seeks to identify unifying principles, patterns, and intuitions for scaling Bayesian inference by reviewing existing work on utilizing modern computing resources with both MCMC and variational approximation techniques.