Refine bigram PLSA model by assigning latent topics unevenly

@inproceedings{Nie2007RefineBP,
  title={Refine bigram PLSA model by assigning latent topics unevenly},
  author={Jiazhong Nie and Runxin Li and Dingsheng Luo and Xihong Wu},
  booktitle={2007 IEEE Workshop on Automatic Speech Recognition \& Understanding (ASRU)},
  year={2007},
  pages={141-146}
}
As an important component of many speech and language processing applications, statistical language models have been widely investigated. The bigram topic model, which combines the advantages of both the traditional n-gram model and the topic model, has proven to be a promising language modeling approach. However, the original bigram topic model assigns the same number of latent topics to every context word, ignoring the fact that the latent semantics of different context words vary in complexity. We…
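The abstract describes the bigram topic model as mixing topic-conditional bigram distributions. A minimal illustrative sketch of that mixture, p(w | h, d) = Σ_z p(w | z, h) · p(z | d), is shown below; all array shapes, names, and toy values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): V words, K latent topics.
rng = np.random.default_rng(0)
V, K = 5, 3

# p(w | z, h): for each topic z and context word h, a distribution over w.
p_w_given_zh = rng.random((K, V, V))
p_w_given_zh /= p_w_given_zh.sum(axis=2, keepdims=True)

# p(z | d): topic mixture weights for one document.
p_z_given_d = rng.random(K)
p_z_given_d /= p_z_given_d.sum()

def bigram_topic_prob(w: int, h: int) -> float:
    """Mixture probability p(w | h, d) = sum_z p(w | z, h) * p(z | d)."""
    return float(np.dot(p_z_given_d, p_w_given_zh[:, h, w]))

# Sanity check: for a fixed context h, probabilities over w sum to 1.
h = 2
total = sum(bigram_topic_prob(w, h) for w in range(V))
```

The paper's refinement concerns how many topics each context word h receives; in this fixed-K sketch every h shares the same K topics.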