Neurostream: Scalable and Energy Efficient Deep Learning with Smart Memory Cubes

@article{Azarkhish2018NeurostreamSA,
  title={Neurostream: Scalable and Energy Efficient Deep Learning with Smart Memory Cubes},
  author={Erfan Azarkhish and Davide Rossi and Igor Loi and Luca Benini},
  journal={IEEE Transactions on Parallel and Distributed Systems},
  year={2018},
  volume={29},
  pages={420--434}
}
High-performance computing systems are moving towards 2.5D and 3D memory hierarchies, based on High Bandwidth Memory (HBM) and Hybrid Memory Cube (HMC) to mitigate the main memory bottlenecks. This trend is also creating new opportunities to revisit near-memory computation. In this paper, we propose a flexible processor-in-memory (PIM) solution for scalable and energy-efficient execution of deep convolutional networks (ConvNets), one of the fastest-growing workloads for servers and high-end…
