DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification

Abstract

Probabilistic topic models have become popular as methods for dimensionality reduction in collections of text documents or images. These models are usually treated as generative models and trained using maximum likelihood or Bayesian methods. In this paper, we discuss an alternative: a discriminative framework in which we assume that supervised side information is present, and in which we wish to take that side information into account in finding a reduced dimensionality representation. Specifically, we present DiscLDA, a discriminative variation on Latent Dirichlet Allocation (LDA) in which a class-dependent linear transformation is introduced on the topic mixture proportions. This parameter is estimated by maximizing the conditional likelihood. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the task of classification. We compare the predictive power of the latent structure found by DiscLDA with that of unsupervised LDA on the 20 Newsgroups document classification task, and show how our model can identify shared topics across classes as well as class-dependent topics.
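To make the generative picture concrete, the following NumPy sketch simulates the class-dependent transformation described in the abstract. The sizes (L shared topic proportions, K transformed topics, V vocabulary words, C classes), the variable names, and the random choice of transformation matrices are illustrative assumptions rather than the authors' implementation; in the paper the transformation is estimated by maximizing the conditional likelihood, which this sketch does not attempt.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative, not from the paper): L shared
# topic proportions, K transformed topics, V vocabulary words, C classes.
L, K, V, C = 5, 10, 1000, 2

# One K x L column-stochastic transformation T^y per class y, and a
# shared K x V topic-word matrix phi. Both are drawn randomly here,
# whereas the paper estimates the transformation discriminatively.
T = [rng.dirichlet(np.ones(K), size=L).T for _ in range(C)]
phi = rng.dirichlet(np.ones(V), size=K)

def generate_document(y, n_words, alpha=1.0):
    # Shared topic mixture theta ~ Dirichlet(alpha), as in standard LDA.
    theta = rng.dirichlet(alpha * np.ones(L))
    # Class-dependent linear transformation of the mixture proportions.
    u = T[y] @ theta
    # Each word gets a topic drawn from u, then a word from that topic.
    z = rng.choice(K, size=n_words, p=u)
    return np.array([rng.choice(V, p=phi[k]) for k in z])

doc = generate_document(y=1, n_words=50)

The transformed proportions u = T[y] @ theta play the role of the new document representation that the abstract proposes for supervised dimensionality reduction.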


[Figure: Citations per Year, 2009–2017]

334 Citations

Semantic Scholar estimates that this publication has 334 citations based on the available data.


Cite this paper

@inproceedings{LacosteJulien2008DiscLDADL,
  title     = {DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification},
  author    = {Simon Lacoste-Julien and Fei Sha and Michael I. Jordan},
  booktitle = {NIPS},
  year      = {2008}
}