An energy-efficient memory-based high-throughput VLSI architecture for convolutional networks

Abstract

In this paper, an energy-efficient, memory-intensive, and high-throughput VLSI architecture is proposed for convolutional networks (C-Net) by employing compute memory (CM) [1], in which computation is deeply embedded into the memory (SRAM). Behavioral models incorporating CM's circuit non-idealities, along with energy models in a 45nm SOI CMOS process, are presented. System-level simulations using these models demonstrate that a handwritten-digit recognition probability P_r > 0.99 can be achieved on the MNIST database [2], together with a 24.5× lower energy-delay product, a 5.0× lower energy, and a 4.9× higher throughput compared to the conventional system.
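The three reported gains are mutually consistent: since the energy-delay product (EDP) is energy × delay, and delay scales as the inverse of throughput, the EDP reduction should equal the energy reduction times the throughput gain. A minimal sketch of this check (illustrative only, not part of the paper):

```python
# Consistency check on the reported figures:
# EDP = energy * delay, and delay ~ 1 / throughput, so
# EDP gain ~= energy gain * throughput gain.
energy_gain = 5.0      # reported 5.0x lower energy
throughput_gain = 4.9  # reported 4.9x higher throughput (4.9x lower delay)

edp_gain = energy_gain * throughput_gain
print(edp_gain)  # 24.5, matching the reported 24.5x EDP reduction
```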

DOI: 10.1109/ICASSP.2015.7178127


Cite this paper

@article{Kang2015AnEM,
  title   = {An energy-efficient memory-based high-throughput VLSI architecture for convolutional networks},
  author  = {Mingu Kang and Sujan K. Gonugondla and Min-Sun Keel and Naresh R. Shanbhag},
  journal = {2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year    = {2015},
  pages   = {1037-1041}
}