Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling

@article{Sa2016EnsuringRM,
  title={Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling},
  author={Christopher De Sa and Christopher R{\'e} and Kunle Olukotun},
  journal={JMLR Workshop and Conference Proceedings},
  year={2016},
  volume={48},
  pages={1567--1576}
}
  • Christopher De Sa, Christopher Ré, Kunle Olukotun
  • Published 2016
  • Computer Science, Mathematics
  • JMLR Workshop and Conference Proceedings
  • Gibbs sampling is a Markov chain Monte Carlo technique commonly used for estimating marginal distributions. To speed up Gibbs sampling, there has recently been interest in parallelizing it by executing asynchronously. While empirical results suggest that many models can be efficiently sampled asynchronously, traditional Markov chain analysis does not apply to the asynchronous case, and thus asynchronous Gibbs sampling is poorly understood. In this paper, we derive a better understanding of the…
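
As context for the abstract, here is a minimal sketch of the sequential Gibbs sampler that the paper's asynchronous variant parallelizes. The Ising model, grid size, and parameter names below are illustrative assumptions, not details taken from the paper.

import numpy as np

def gibbs_ising(n=16, beta=0.4, n_sweeps=100, seed=0):
    """Single-site Gibbs sampling on an n x n Ising model (illustrative)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))  # spins in {-1, +1}
    for _ in range(n_sweeps):
        for i in range(n):
            for j in range(n):
                # Sum of the four neighbors, with periodic boundaries.
                nb = (s[(i - 1) % n, j] + s[(i + 1) % n, j]
                      + s[i, (j - 1) % n] + s[i, (j + 1) % n])
                # Exact conditional: P(s[i,j] = +1 | rest) = sigmoid(2*beta*nb),
                # so each step resamples one variable from its conditional.
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_plus else -1
    return s

In the asynchronous (HOGWILD!-style) execution the abstract refers to, several threads would run the inner site updates concurrently on the shared array s without locks, so an update may read stale neighbor values; quantifying the bias and mixing time under that staleness is exactly what traditional Markov chain analysis does not cover.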


    Paper Mentions

    Asynchronous Gibbs Sampling (11)
    Techniques for proving Asynchronous Convergence results for Markov Chain Monte Carlo methods (2)
    Minibatch Gibbs Sampling on Large Graphical Models (11)
    Fully-Asynchronous Distributed Metropolis Sampler with Optimal Speedup
    Distributed Metropolis Sampler with Optimal Parallelism (1)
    HOGWILD!-Gibbs can be PanAccurate (5)
    Anytime Monte Carlo (6)
    Patterns of Scalable Bayesian Inference (35)
