FPGA Acceleration of Recurrent Neural Network Based Language Model

@inproceedings{Li2015FPGAAO,
  title={FPGA Acceleration of Recurrent Neural Network Based Language Model},
  author={Sicheng Li and Chunpeng Wu and Hai Li and Boxun Li and Yu Wang and Qinru Qiu},
  booktitle={2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines},
  year={2015},
  pages={111--118}
}
A recurrent neural network (RNN) based language model (RNNLM) is a biologically inspired model for natural language processing. It records historical information through additional recurrent connections and is therefore very effective at capturing the semantics of sentences. However, the use of RNNLMs has been greatly hindered by the high computational cost of training. This work presents an FPGA implementation framework for RNNLM training acceleration. At the architectural level, we improve the…
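For context on the computation the accelerator targets, below is a minimal NumPy sketch of the standard Elman-style RNNLM recurrence, h_t = sigmoid(U x_t + W h_{t-1}) and y_t = softmax(V h_t). The weight names U, W, V and all dimensions are conventional assumptions, not taken from this paper; this illustrates the reference computation, not the authors' FPGA implementation.

```python
import numpy as np

def rnnlm_step(x_t, h_prev, U, W, V):
    """One forward step of an Elman-style RNNLM (conventional formulation).

    x_t    : one-hot input word vector, shape (vocab,)
    h_prev : previous hidden state, shape (hidden,)
    U, W, V: input-to-hidden, hidden-to-hidden, hidden-to-output weights
    """
    # Recurrent connection: the hidden state mixes the current word
    # with the compressed sentence history carried in h_prev.
    h_t = 1.0 / (1.0 + np.exp(-(U @ x_t + W @ h_prev)))  # sigmoid
    # Output layer: softmax distribution over the next word.
    z = V @ h_t
    z -= z.max()                       # subtract max for numerical stability
    y_t = np.exp(z) / np.exp(z).sum()
    return h_t, y_t

# Usage: run a short word-index sequence through random weights.
vocab, hidden = 10, 4                  # toy sizes, chosen for illustration
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(hidden, vocab))
W = rng.normal(scale=0.1, size=(hidden, hidden))
V = rng.normal(scale=0.1, size=(vocab, hidden))

h = np.zeros(hidden)
for w in [3, 1, 7]:
    x = np.zeros(vocab)
    x[w] = 1.0
    h, y = rnnlm_step(x, h, U, W, V)
print("next-word distribution:", y.round(3))
```

Training repeats this matrix-vector work (plus backpropagation through time) for every word in the corpus, which is the computational cost the FPGA framework aims to reduce.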