Class-based language model adaptation using mixtures of word-class weights

Abstract

This paper describes the use of a weighted mixture of class-based n-gram language models to perform topic adaptation. By using a fixed class n-gram history and variable word-given-class probabilities, we obtain large improvements in the performance of the class-based language model, giving it accuracy similar to a word n-gram model, together with a small but statistically significant improvement when it is interpolated with a word-based n-gram language model.
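The scheme described above can be illustrated with a toy sketch: the class n-gram history term is fixed and shared, while the word-given-class term is a weighted mixture of topic-specific distributions. All class labels, topic names, probabilities, and mixture weights below are invented for illustration and do not come from the paper.

```python
# Toy sketch of the adaptation scheme: a shared class bigram history term
# combined with a topic-weighted mixture of word-given-class probabilities.
# All values below are hypothetical examples.

# Shared class bigram probabilities p(class | previous_class)
class_bigram = {
    ("NOUN", "VERB"): 0.4,
    ("VERB", "NOUN"): 0.5,
}

# Topic-specific word-given-class probabilities p_t(word | class)
topic_word_given_class = {
    "sport":   {("match", "NOUN"): 0.05, ("run", "VERB"): 0.04},
    "finance": {("match", "NOUN"): 0.01, ("run", "VERB"): 0.02},
}

def mixture_prob(word, cls, prev_cls, weights):
    """p(word | history) = p(cls | prev_cls) * sum_t w_t * p_t(word | cls)."""
    class_term = class_bigram.get((cls, prev_cls), 0.0)
    word_term = sum(
        w * topic_word_given_class[topic].get((word, cls), 0.0)
        for topic, w in weights.items()
    )
    return class_term * word_term

# Topic mixture weights (summing to 1), e.g. estimated on adaptation data
weights = {"sport": 0.7, "finance": 0.3}
p = mixture_prob("match", "NOUN", "VERB", weights)
```

Because only the word-given-class mixture weights vary with topic, adapting to a new topic requires re-estimating just those weights, not the full n-gram history model.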

Cite this paper

@inproceedings{Moore2000ClassbasedLM,
  title     = {Class-based language model adaptation using mixtures of word-class weights},
  author    = {Gareth Moore and Steve J. Young},
  booktitle = {INTERSPEECH},
  year      = {2000}
}