Future vector enhanced LSTM language model for LVCSR

@inproceedings{Liu2017FutureVE,
  title={Future vector enhanced LSTM language model for LVCSR},
  author={Qi Liu and Yanmin Qian and Kai Yu},
  booktitle={2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
  year={2017},
  pages={104--110}
}
  • Qi Liu, Yanmin Qian, Kai Yu
  • Published 2017
  • Computer Science, Engineering
  • 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
  • Language models (LM) play an important role in large vocabulary continuous speech recognition (LVCSR). [...] Key Method: in addition to the given history, the rest of the sequence is also embedded into a future vector. This future vector can be incorporated into the LSTM LM, giving it the ability to model much longer-term, sequence-level information. Experiments show that the proposed LSTM LM achieves better BLEU scores for long-term sequence prediction. For the speech recognition rescoring… (a hedged code sketch of the future-vector idea follows below)
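
A minimal sketch of the idea described above, in PyTorch: a standard forward LSTM LM whose output layer is augmented with a future vector summarizing the remaining words of the sentence. Here the future vector comes from a backward LSTM run over the full sequence; that choice, and all class and variable names (FutureVectorLSTMLM, fwd_lstm, bwd_lstm, future_dim), are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class FutureVectorLSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512, future_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.fwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)  # causal history encoder
        self.bwd_lstm = nn.LSTM(emb_dim, future_dim, batch_first=True)  # suffix ("future") encoder
        self.out = nn.Linear(hidden_dim + future_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) word ids; returns next-word logits at every step.
        emb = self.embed(tokens)
        history, _ = self.fwd_lstm(emb)                  # (B, T, hidden_dim)
        # Encode the reversed sequence and flip it back, so suffix[:, t]
        # summarizes tokens[:, t:].
        rev, _ = self.bwd_lstm(torch.flip(emb, dims=[1]))
        suffix = torch.flip(rev, dims=[1])               # (B, T, future_dim)
        # Shift by two so the future vector at step t covers only tokens[:, t+2:],
        # excluding the word being predicted (one plausible reading of "the rest
        # of the sequence"; the paper may define the future span differently).
        pad = torch.zeros_like(suffix[:, :2])
        future = torch.cat([suffix[:, 2:], pad], dim=1)
        return self.out(torch.cat([history, future], dim=-1))

In N-best rescoring the full hypothesis text is available, so a suffix summary of this kind can be computed per hypothesis; how the original model obtains the future vector at recognition time is not shown in the truncated abstract above.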
    1 Citation

    Noise Robust Speech Recognition on Aurora4 by Humans and Machines

    References

    Showing 1–10 of 40 references
    Exploiting the succeeding words in recurrent neural network language models
    Recurrent neural network language model adaptation for multi-genre broadcast speech recognition
    Bidirectional recurrent neural network language models for automatic speech recognition
    Recurrent neural network language model training with noise contrastive estimation for speech recognition
    Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval
    • H. Palangi, L. Deng, +5 authors R. Ward. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2016.
    Sequence Level Training with Recurrent Neural Networks
    On training bi-directional neural network language model with noise contrastive estimation
    Listen, attend and spell: A neural network for large vocabulary conversational speech recognition
    A study on effects of implicit and explicit language model information for DBLSTM-CTC based handwriting recognition
    • Qi Liu, Lijuan Wang, Q. Huo. 2015 13th International Conference on Document Analysis and Recognition (ICDAR).
    Minimum Translation Modeling with Recurrent Neural Networks