Recurrent neural network based language model

@inproceedings{Mikolov2010RecurrentNN,
  title={Recurrent neural network based language model},
  author={Tomas Mikolov and Martin Karafi{\'a}t and Luk{\'a}{\v{s}} Burget and Jan {\v{C}}ernock{\'y} and Sanjeev Khudanpur},
  booktitle={INTERSPEECH},
  year={2010}
}
A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around a 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. Speech recognition experiments show around an 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM.
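The model in question is an Elman-style (simple recurrent) network: it reads one word at a time, carries a hidden state as context, and outputs a distribution over the next word. As a minimal illustrative sketch of that idea in Python/numpy (a toy under invented sizes and names, not the authors' implementation):

import numpy as np

# Toy Elman-style RNN language model forward pass. All dimensions, weights,
# and the example sentence are invented for illustration.
rng = np.random.default_rng(0)
vocab_size, hidden_size = 10, 8

U = rng.normal(0.0, 0.1, (hidden_size, vocab_size))   # input word -> hidden
W = rng.normal(0.0, 0.1, (hidden_size, hidden_size))  # previous hidden -> hidden
V = rng.normal(0.0, 0.1, (vocab_size, hidden_size))   # hidden -> output logits

def softmax(z):
    z = z - z.max()                 # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def sentence_log_prob(words):
    """Sum of log P(w_t | w_1 .. w_{t-1}) under the RNN LM."""
    h = np.zeros(hidden_size)       # hidden state carries the history
    total = 0.0
    for prev, nxt in zip(words[:-1], words[1:]):
        x = np.zeros(vocab_size)
        x[prev] = 1.0                                  # 1-of-N word encoding
        h = 1.0 / (1.0 + np.exp(-(U @ x + W @ h)))     # sigmoid hidden layer
        y = softmax(V @ h)                             # next-word distribution
        total += np.log(y[nxt])
    return total

print(sentence_log_prob([3, 1, 4, 1, 5]))

The sketch only shows scoring; training would additionally backpropagate a cross-entropy loss through these weights.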


Key Quantitative Results

  • Results indicate that it is possible to obtain around a 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. Speech recognition experiments show around an 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. (A worked sketch of the perplexity comparison follows below.)
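For context on how such figures are computed: perplexity is the exponentiated average negative log-probability the model assigns to each test word, and a mixture of LMs interpolates their per-word distributions. A minimal sketch; every probability below is invented for illustration and not taken from the paper:

import math

# Hypothetical per-word probabilities from two RNN LMs and a backoff model
# on a tiny three-word test sequence. All numbers are invented.
p_rnn1 = [0.009, 0.011, 0.012]
p_rnn2 = [0.011, 0.009, 0.013]
p_backoff = [0.004, 0.005, 0.00625]

def perplexity(probs):
    # PPL = exp(-(1/N) * sum_t log p(w_t | history))
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

# A mixture of LMs linearly interpolates their per-word distributions;
# equal weights here, though weights are normally tuned on held-out data.
p_mix = [(a + b) / 2 for a, b in zip(p_rnn1, p_rnn2)]

print(f"backoff PPL:     {perplexity(p_backoff):.1f}")   # 200.0
print(f"RNN mixture PPL: {perplexity(p_mix):.1f}")       # ~92.8, roughly half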

Citations

Publications citing this paper.
SHOWING 4 OF 2,541 CITATIONS

AppUsage2Vec: Modeling Smartphone App Usage for Prediction

  • 2019 IEEE 35th International Conference on Data Engineering (ICDE)
  • 2019
CITES BACKGROUND
HIGHLY INFLUENCED

RNN-Stega: Linguistic Steganography Based on Recurrent Neural Networks

  • IEEE Transactions on Information Forensics and Security
  • 2019
CITES BACKGROUND
HIGHLY INFLUENCED

A syntactic path-based hybrid neural network for negation scope detection

  • Frontiers of Computer Science
  • 2018
CITES METHODS
HIGHLY INFLUENCED

Deep Generative Modeling with Applications in Semi-Supervised Learning

CITES METHODS & BACKGROUND
HIGHLY INFLUENCED


CITATION STATISTICS

  • 348 Highly Influenced Citations

  • Averaged 525 Citations per year from 2017 through 2019

References

Publications referenced by this paper.
SHOWING 4 OF 14 REFERENCES

Self-Supervised Discriminative Training of Statistical Language Models

Puyang Xu, Damianos Karakos, Sanjeev Khudanpur
  • ASRU
  • 2009
HIGHLY INFLUENTIAL

A Guide to Recurrent Neural Networks and Backpropagation

Mikael Bodén
  • The Dallas project
  • 2002
HIGHLY INFLUENTIAL

Fast Text Compression with Neural Networks

  • FLAIRS Conference
  • 2000
HIGHLY INFLUENTIAL

Neural network based language models for highly inflective languages

  • 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
  • 2009