
LSTM-LM with Long-Term History for First-Pass Decoding in Conversational Speech Recognition

@article{Chen2020LSTMLMWL,
  title={LSTM-LM with Long-Term History for First-Pass Decoding in Conversational Speech Recognition},
  author={X. Chen and S. Parthasarathy and W. Gale and S. Chang and Michael Zeng},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.11349}
}
LSTM language models (LSTM-LMs) have proven powerful, yielding significant performance improvements over count-based n-gram LMs in modern speech recognition systems. Due to their infinite history states and computational load, most previous studies focus on applying LSTM-LMs in the second pass for rescoring. Recent work shows that it is feasible and computationally affordable to adopt LSTM-LMs in first-pass decoding within a dynamic (or tree-based) decoder framework…
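The abstract only sketches the core idea at a high level. As a rough illustration, below is a minimal PyTorch sketch (not the authors' implementation) of an LSTM-LM whose recurrent state is threaded across utterances, so that scoring a first-pass hypothesis conditions on long-term conversational history rather than restarting from scratch each utterance. The class `LSTMLM`, the helper `score_hypothesis`, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    """Minimal LSTM language model whose hidden state can be carried
    across utterances (the 'long-term history' of a conversation)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) word ids; state threads prior history
        emb = self.embed(tokens)
        out, state = self.lstm(emb, state)
        log_probs = torch.log_softmax(self.proj(out), dim=-1)
        return log_probs, state

@torch.no_grad()
def score_hypothesis(lm, token_ids, session_state):
    """Score one partial hypothesis during first-pass decoding,
    conditioning on the state accumulated from earlier utterances.
    session_state is None at the start of a conversation."""
    tokens = torch.tensor(token_ids).unsqueeze(0)      # (1, T)
    log_probs, new_state = lm(tokens, session_state)
    # Position t predicts token t+1: sum log p(w_{t+1} | history, w_{<=t}).
    lp = log_probs[0, torch.arange(len(token_ids) - 1), tokens[0, 1:]].sum()
    return lp.item(), new_state
```

In this sketch, the state returned for the best final hypothesis of one utterance would seed `session_state` for the next, which is one plausible way to realize the cross-utterance history the title refers to.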
