Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

@article{mesnil2015slotfilling,
  title={Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding},
  author={G. Mesnil and Yann Dauphin and K. Yao and Yoshua Bengio and L. Deng and D. Hakkani-Tur and X. He and Larry Heck and G. Tur and Dong Yu and G. Zweig},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  year={2015}
}
  • Published 2015
  • Computer Science
  • Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly…
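The core idea the abstract describes — an Elman RNN that assigns a slot label to each word in an utterance, with the hidden state carrying past temporal context — can be sketched as follows. This is a minimal illustration, not the authors' implementation; all dimensions, weight initializations, and names are assumptions for the example.

```python
# Sketch of an Elman RNN forward pass for slot filling: one slot label per word.
# Toy sizes and random weights are illustrative assumptions, not trained values.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, n_labels = 10, 4      # hypothetical vocabulary and slot-label inventory
embed_dim, hidden_dim = 8, 16

E  = rng.normal(0, 0.1, (vocab_size, embed_dim))    # word embeddings
Wx = rng.normal(0, 0.1, (embed_dim, hidden_dim))    # input -> hidden
Wh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))   # hidden -> hidden (Elman recurrence)
Wy = rng.normal(0, 0.1, (hidden_dim, n_labels))     # hidden -> slot-label scores

def elman_tag(word_ids):
    """Return one slot-label id per input word id."""
    h = np.zeros(hidden_dim)
    labels = []
    for w in word_ids:
        # Elman recurrence: new hidden state mixes the current word's
        # embedding with the previous hidden state (past context).
        h = np.tanh(E[w] @ Wx + h @ Wh)
        scores = h @ Wy
        labels.append(int(np.argmax(scores)))
    return labels

print(elman_tag([1, 3, 5, 2]))    # one label id in [0, n_labels) per word
```

A Jordan-style variant would instead feed the previous output (label scores) back into the recurrence, and the paper's bidirectional/hybrid variants also incorporate future context; the untrained weights here only demonstrate the data flow.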
    Related papers:
    Label-Dependency Coding in Simple Recurrent Networks for Spoken Language Understanding
    ClockWork-RNN Based Architectures for Slot Filling
    Recurrent Neural Network Structured Output Prediction for Spoken Language Understanding
    Modeling with Recurrent Neural Networks for Open Vocabulary Slots
    A Comparison of Deep Learning Methods for Language Understanding
    Label-Dependencies Aware Recurrent Neural Networks
    Joint Online Spoken Language Understanding and Language Modeling With Recurrent Neural Networks
    A Joint Model of Intent Determination and Slot Filling for Spoken Language Understanding

