Hardware accelerators for recurrent neural networks on FPGA

@inproceedings{Chang2017HardwareAF,
  title={Hardware accelerators for recurrent neural networks on FPGA},
  author={Andre Xian Ming Chang and Eugenio Culurciello},
  booktitle={2017 IEEE International Symposium on Circuits and Systems (ISCAS)},
  year={2017},
  pages={1--4}
}
Recurrent Neural Networks (RNNs) have the ability to retain memory and learn from data sequences, which is fundamental for real-time applications. RNN computations offer limited data reuse, which leads to high data traffic. This translates into a high off-chip memory bandwidth or large internal storage requirement to achieve high performance. Exploiting parallelism in RNN computations is bounded by these two limiting factors, among other constraints present in embedded systems. Therefore…
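To illustrate why RNN computations offer limited data reuse, the following is a minimal sketch (not the paper's implementation) of one LSTM time step in NumPy. The function name, stacked-gate weight layout, and sizes are assumptions for illustration; the point is that every weight element is read from memory but used in only a couple of operations per time step, so the kernel is bandwidth-bound unless weights fit on-chip.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step (illustrative, batch size 1).

    x: input (n_in,), h: hidden state (n_h,), c: cell state (n_h,)
    W: (4*n_h, n_in), U: (4*n_h, n_h), b: (4*n_h,)
    Gates are stacked in the order i, f, g, o (an assumed convention).
    """
    # The two matrix-vector products dominate: all 4*n_h*(n_in + n_h)
    # weights are streamed from memory, each used ~2 ops (mul + add).
    z = W @ x + U @ h + b
    n_h = h.shape[0]
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(z[0:n_h])            # input gate
    f = sigmoid(z[n_h:2 * n_h])      # forget gate
    g = np.tanh(z[2 * n_h:3 * n_h])  # candidate cell update
    o = sigmoid(z[3 * n_h:4 * n_h])  # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because `h_new` depends on the previous step's `h` and `c`, time steps cannot be parallelized against each other, which is one of the constraints the abstract alludes to.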
