Feedforward Sequential Memory Networks: A New Structure to Learn Long-term Dependency

@article{Zhang2015FeedforwardSM,
  title={Feedforward Sequential Memory Networks: A New Structure to Learn Long-term Dependency},
  author={Shiliang Zhang and Cong Liu and Hui Jiang and Si Wei and Li-Rong Dai and Yu Hu},
  journal={CoRR},
  year={2015},
  volume={abs/1512.08301}
}
In this paper, we propose a novel neural network structure, namely the feedforward sequential memory network (FSMN), to model long-term dependency in time series without using recurrent feedback. The proposed FSMN is a standard fully-connected feedforward neural network equipped with learnable memory blocks in its hidden layers. The memory blocks use a tapped-delay line structure to encode long context information into a fixed-size representation as a short-term memory mechanism. We have…
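
To make the tapped-delay-line memory block concrete, below is a minimal NumPy sketch of one scalar-FSMN hidden layer under stated assumptions: the function name fsmn_layer, the ReLU nonlinearity, the weight names W, W_tilde, and the shapes are illustrative choices, not the authors' reference implementation.

import numpy as np

def fsmn_layer(H, A, W, W_tilde, b):
    """Sketch of one scalar-FSMN hidden layer (illustrative, not the paper's code).

    H       : (T, d) hidden activations h_t of the previous layer for a length-T sequence
    A       : (N + 1,) learnable tap coefficients a_0 ... a_N of the memory block
    W       : (d, d_out) feedforward weights applied to h_t
    W_tilde : (d, d_out) weights applied to the memory output
    b       : (d_out,) bias
    """
    T, d = H.shape
    N = len(A) - 1

    # Tapped-delay-line memory: each time step keeps a weighted sum of the
    # current and the previous N hidden activations (a fixed-size summary
    # of the long context, no recurrent feedback involved).
    H_mem = np.zeros_like(H)
    for t in range(T):
        for i in range(N + 1):
            if t - i >= 0:
                H_mem[t] += A[i] * H[t - i]

    # The next layer sees both the current activation and the memory output.
    return np.maximum(0.0, H @ W + H_mem @ W_tilde + b)  # ReLU chosen for illustration

# Tiny usage example with random data.
rng = np.random.default_rng(0)
T, d, d_out, N = 20, 8, 8, 4
H = rng.standard_normal((T, d))
A = rng.standard_normal(N + 1)
W = rng.standard_normal((d, d_out)) * 0.1
W_tilde = rng.standard_normal((d, d_out)) * 0.1
b = np.zeros(d_out)
print(fsmn_layer(H, A, W, W_tilde, b).shape)  # (20, 8)

The paper also describes a vectorized variant (vFSMN), in which each scalar tap a_i is replaced by a coefficient vector applied elementwise, and bidirectional memory blocks that additionally look ahead a fixed number of future frames.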

Citations

Publications citing this paper (showing 8 of 36 citations).

Hybrid LSTM-FSMN Networks for Acoustic Modeling

  • 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2018

Improving Very Deep Time-Delay Neural Network With Vertical-Attention For Effectively Training CTC-Based ASR Systems

  • 2018 IEEE Spoken Language Technology Workshop (SLT)
  • 2018

Gated Recurrent Unit Based Acoustic Modeling with Future Context


Improving CTC-based Acoustic Model with Very Deep Residual Time-delay Neural Networks

  • Interspeech
  • 2018

Multi Scale Feedback Connection for Noise Robust Acoustic Modeling

  • 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2018

Bayesian adaptation and combination of deep models for automatic speech recognition


Feedforward sequential memory networks based encoder-decoder model for machine translation

  • 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
  • 2017

Tailoring an Interpretable Neural Language Model

  • IEEE/ACM Transactions on Audio, Speech, and Language Processing
  • 2019

References

Publications referenced by this paper (showing 2 of 47 references).

Extensions of recurrent neural network language model

T. Mikolov, S. Kombrink, L. Burget, J. Černocký, S. Khudanpur
  • 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2011

The new HOPE way to learn neural networks

S. Zhang, H. Jiang, L. Dai
  • Deep Learning Workshop at ICML
  • 2015
