Latent Dirichlet Allocation


We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models, including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.
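The generative process described above can be sketched directly: draw per-document topic proportions from a Dirichlet, then draw each word's topic and then the word itself from per-topic word distributions. The sketch below is illustrative only; the parameter values (`alpha`, `beta`) and the function name are assumptions for the example, not the paper's notation or implementation, and the Dirichlet draw is realized via normalized Gamma variates.

```python
import random

def generate_document(alpha, beta, n_words, rng):
    """Sample one document from an LDA-style generative process.

    alpha : list of K Dirichlet concentration parameters (assumed values)
    beta  : K lists, each a word distribution over the vocabulary
    """
    # theta ~ Dirichlet(alpha): normalize independent Gamma(a, 1) draws
    gammas = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(gammas)
    theta = [g / total for g in gammas]

    topics, words = [], []
    for _ in range(n_words):
        # z_n ~ Multinomial(theta): pick a topic for this word position
        z = rng.choices(range(len(alpha)), weights=theta)[0]
        # w_n ~ Multinomial(beta_z): pick a word from that topic's distribution
        w = rng.choices(range(len(beta[z])), weights=beta[z])[0]
        topics.append(z)
        words.append(w)
    return theta, topics, words

# Toy setup: K = 2 topics over a V = 4 word vocabulary (values chosen for illustration).
rng = random.Random(0)
alpha = [1.0, 1.0]
beta = [[0.7, 0.3, 0.0, 0.0],
        [0.0, 0.0, 0.5, 0.5]]
theta, topics, words = generate_document(alpha, beta, n_words=20, rng=rng)
```

Note that the document-level randomness lives entirely in `theta`: two documents drawn from the same model differ first in their topic mixtures, which is what distinguishes this model from a single-mixture-component-per-document approach such as the mixture of unigrams.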
