Continuous space language models

@article{Schwenk2007ContinuousSL,
  title={Continuous space language models},
  author={Holger Schwenk},
  journal={Computer Speech \& Language},
  year={2007},
  volume={21},
  pages={492--518}
}
This paper describes the use of a neural network language model for large vocabulary continuous speech recognition. The underlying idea of this approach is to attack the data sparseness problem by performing the language model probability estimation in a continuous space. Highly efficient learning algorithms are described that enable the use of training corpora of several hundred million words. It is also shown that this approach can be incorporated into a large vocabulary continuous speech…
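
To illustrate the central idea of estimating n-gram probabilities in a continuous space, here is a minimal sketch of a feed-forward neural network language model in the Bengio/Schwenk style. It is not the paper's implementation: the vocabulary size, context length, embedding dimension, and hidden size are illustrative assumptions, and PyTorch is used purely for convenience.

```python
# Minimal sketch of a feed-forward continuous-space n-gram language model.
# All sizes below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class ContinuousSpaceLM(nn.Module):
    def __init__(self, vocab_size=10000, context_size=3,
                 embed_dim=128, hidden_dim=256):
        super().__init__()
        # Map each discrete word index to a point in a continuous space.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Hidden layer operates on the concatenated context embeddings.
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        # Output layer scores every word in the vocabulary.
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):
        # context: (batch, context_size) word ids of the n-1 history words
        e = self.embed(context)                    # (batch, context_size, embed_dim)
        h = torch.tanh(self.hidden(e.flatten(1)))  # (batch, hidden_dim)
        return torch.log_softmax(self.out(h), dim=-1)  # log P(w_t | history)

# Usage: log-probabilities of the next word given 3-word histories
# (random ids stand in for real word indices).
model = ContinuousSpaceLM()
history = torch.randint(0, 10000, (2, 3))
log_probs = model(history)                         # (2, vocab_size)
```

The sketch only shows how the probability estimation moves to a continuous space; the efficient training techniques that make corpora of several hundred million words practical, as described in the paper, are omitted.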

Citations

Semantic Scholar estimates that this publication has 654 citations based on the available data.
