Relative Performance Guarantees for Approximate Inference in Latent Dirichlet Allocation

Abstract

Hierarchical probabilistic modeling of discrete data has emerged as a powerful tool for text analysis. Posterior inference in such models is intractable, and practitioners rely on approximate posterior inference methods such as variational inference or Gibbs sampling. There has been much research on designing better approximations, but there is still little theoretical understanding of which of the available techniques are appropriate, and in which data analysis settings. In this paper we provide the beginnings of such an understanding. We analyze the improvement that the recently proposed collapsed variational inference (CVB) provides over mean-field variational inference (VB) in latent Dirichlet allocation. We prove that the difference in the tightness of the bound on the likelihood of a document decreases as O(k − 1) + √(log m / m), where k is the number of topics in the model and m is the number of words in a document. As a consequence, the advantage of CVB over VB is lost for long documents but increases with the number of topics. We demonstrate empirically that the theory holds, using simulated text data and two text corpora. We provide practical guidelines for choosing an approximation.
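
To make the stated consequences concrete, below is a rough numerical sketch of the rate quoted in the abstract. This is not the paper's code: the hidden constants are assumed to be 1, the function gap_bound and the sample values of k and m are hypothetical, and dividing the gap by m is just one way to read "the advantage is lost for long documents" on a per-word scale.

    import math

    # Illustrative sketch only: evaluate the abstract's rate O(k - 1) + sqrt(log m / m)
    # with all hidden constants assumed to be 1, to see how the CVB-vs-VB gap bound
    # behaves as the topic count k and document length m vary.
    def gap_bound(k: int, m: int) -> float:
        """Hypothetical upper-bound rate on the per-document difference in bound tightness."""
        return (k - 1) + math.sqrt(math.log(m) / m)

    for k in (5, 20, 100):            # number of topics
        for m in (10, 100, 10_000):   # words in the document
            g = gap_bound(k, m)
            # gap/m gives a per-word view: the document log-likelihood grows with m,
            # so a gap that stays roughly constant in m becomes negligible for long documents,
            # while it grows with the number of topics k.
            print(f"k={k:3d}  m={m:6d}  gap~{g:8.2f}  gap/m~{g / m:.4f}")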

Cite this paper

@inproceedings{Mukherjee2008RelativePG,
  title     = {Relative Performance Guarantees for Approximate Inference in Latent Dirichlet Allocation},
  author    = {Indraneel Mukherjee and David M. Blei},
  booktitle = {NIPS},
  year      = {2008}
}