Sparse Overcomplete Latent Variable Decomposition of Counts Data


An important problem in many fields is the analysis of counts data to extract meaningful latent components. Methods such as Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) have been proposed for this purpose. However, they are limited in the number of components they can extract and lack an explicit provision to control the "expressiveness" of the extracted components. In this paper, we present a learning formulation that addresses these limitations by employing the notion of sparsity. We start with the PLSA framework and use an entropic prior in a maximum a posteriori formulation to enforce sparsity. We show that this allows the extraction of overcomplete sets of latent components that better characterize the data. We present experimental evidence of the utility of such representations.
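For context, the PLSA framework the paper starts from models a counts matrix V (words × documents) as P(w, d) ≈ Σ_z P(w|z) P(z|d) P(d), fit by Expectation-Maximization. The sketch below is a minimal NumPy implementation of this plain maximum-likelihood baseline only; the paper's contribution (adding an entropic prior in a MAP formulation to enforce sparsity and permit overcomplete component sets) is not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def plsa(V, K, n_iter=100, seed=0):
    """EM for basic (maximum-likelihood) PLSA on a counts matrix V
    of shape (words, documents), with K latent components.

    Returns Pw_z (words x K, columns are P(w|z)) and
    Pz_d (K x documents, columns are P(z|d)).
    """
    rng = np.random.default_rng(seed)
    W, D = V.shape
    Pw_z = rng.random((W, K))
    Pw_z /= Pw_z.sum(axis=0, keepdims=True)   # columns are P(w|z)
    Pz_d = rng.random((K, D))
    Pz_d /= Pz_d.sum(axis=0, keepdims=True)   # columns are P(z|d)
    eps = 1e-12                               # guard against divide-by-zero
    for _ in range(n_iter):
        # E- and M-steps folded into matrix form: the M-step weight for
        # component z is V(w,d) * P(z|w,d), and P(z|w,d) is proportional
        # to Pw_z(w,z) * Pz_d(z,d) / [Pw_z @ Pz_d](w,d).
        ratio = V / np.maximum(Pw_z @ Pz_d, eps)
        Pw_z *= ratio @ Pz_d.T
        Pw_z /= Pw_z.sum(axis=0, keepdims=True)
        ratio = V / np.maximum(Pw_z @ Pz_d, eps)
        Pz_d *= Pw_z.T @ ratio
        Pz_d /= Pz_d.sum(axis=0, keepdims=True)
    return Pw_z, Pz_d
```

In the maximum-likelihood setting sketched here, K is effectively bounded by the dimensionality of the data; the paper's entropic-prior MAP updates are what make larger, overcomplete K useful by driving each document's P(z|d) toward a sparse distribution.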
