Scaling Deep Learning on Multiple In-Memory Processors

@inproceedings{Xu2015ScalingDL,
  title={Scaling Deep Learning on Multiple In-Memory Processors},
  author={Lifan Xu and Dong Ping Zhang and Nuwan Jayasena},
  year={2015}
}
Deep learning methods are proven to be state-of-the-art in addressing many challenges in machine learning domains. However, this comes at the cost of high computational requirements and energy consumption. The emergence of Processing In Memory (PIM) with die-stacking technology presents an opportunity to speed up deep learning computation and reduce energy consumption by providing low-cost, high-bandwidth memory accesses. PIM uses 3D die stacking to move computation closer to memory and therefore…
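
To make the scaling idea concrete, the sketch below illustrates one plausible mapping suggested by the paper's title: data-parallel training in which each in-memory processor works on a shard of the minibatch near its own memory and gradients are averaged before the weight update. This is an illustrative assumption, not the authors' implementation; names such as NUM_PIM_DEVICES and local_gradient are hypothetical, and the PIM devices are simply simulated with NumPy on the host.

# Minimal data-parallel sketch (assumed mapping, not the paper's code):
# each simulated PIM stack computes a gradient on its local shard, and the
# shard gradients are averaged to update a shared softmax-regression model.
import numpy as np

NUM_PIM_DEVICES = 4                  # hypothetical number of PIM stacks
BATCH, FEATURES, CLASSES = 64, 32, 10
LR = 0.1

rng = np.random.default_rng(0)
X = rng.normal(size=(BATCH, FEATURES))
y = rng.integers(0, CLASSES, size=BATCH)
W = np.zeros((FEATURES, CLASSES))    # shared model weights

def local_gradient(Xs, ys, W):
    # Softmax cross-entropy gradient on one device's shard of the minibatch.
    logits = Xs @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(ys)), ys] -= 1.0
    return Xs.T @ probs / len(ys)

for step in range(100):
    # Each simulated PIM device computes a gradient near its local data.
    shards = zip(np.array_split(X, NUM_PIM_DEVICES),
                 np.array_split(y, NUM_PIM_DEVICES))
    grads = [local_gradient(Xs, ys, W) for Xs, ys in shards]
    # The shard gradients are averaged and applied to the shared weights.
    W -= LR * np.mean(grads, axis=0)

In a real PIM system the per-shard gradient computation would execute on the logic die stacked under each memory partition, so only the small gradient tensors, not the raw training data, would cross the off-chip links.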