ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars

@inproceedings{Shafiee2016ISAACAC,
  title={ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars},
  author={Ali Shafiee and Anirban Nag and Naveen Muralimanohar and Rajeev Balasubramonian and John Paul Strachan and Miao Hu and R. Stanley Williams and Vivek Srikumar},
  booktitle={2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)},
  year={2016},
  pages={14-26}
}
A number of recent efforts have attempted to design accelerators for popular machine learning algorithms, such as those involving convolutional and deep neural networks (CNNs and DNNs). These algorithms typically involve a large number of multiply-accumulate (dot-product) operations. A recent project, DaDianNao, adopts a near data processing approach, where a specialized neural functional unit performs all the digital arithmetic operations and receives input weights from adjacent eDRAM banks…
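The in-situ analog arithmetic the title refers to can be modeled digitally. The sketch below (illustrative only, not from the paper) treats each weight as a crossbar cell conductance; applying input voltages to the columns makes each row accumulate a current equal to a dot product, so an entire matrix-vector multiply happens in one analog step:

```python
# Illustrative model of a memristive crossbar's analog MAC operation:
# weight w[i][j] is stored as conductance G[i][j]; applying voltage V[j]
# to column j yields, by Ohm's and Kirchhoff's laws, a per-row current
# I[i] = sum_j G[i][j] * V[j] -- i.e. one dot product per row.

def crossbar_dot_products(conductances, voltages):
    """Return the output current of each row: one MAC result per row."""
    return [sum(g * v for g, v in zip(row, voltages))
            for row in conductances]

# Example: a 2x3 conductance array applied to a 3-element input vector.
weights = [[0.1, 0.2, 0.3],
           [0.4, 0.5, 0.6]]
inputs = [1.0, 2.0, 3.0]
currents = crossbar_dot_products(weights, inputs)
# currents[0] = 0.1*1.0 + 0.2*2.0 + 0.3*3.0 = 1.4
```

In hardware the sum is free (currents add on the shared row wire), which is why crossbars are attractive for the dot-product-heavy workloads the abstract describes.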

Citations

Publications citing this paper.
Showing 1-10 of 337 citations (estimated 92% coverage).

Computation-in-Memory based on Memristive Devices

Cites background & methods · Highly influenced

DigitalPIM: Digital-based Processing In-Memory for Big Data Acceleration

  • ACM Great Lakes Symposium on VLSI
  • 2019
Cites background & methods · Highly influenced

Low Bit-width Convolutional Neural Network on RRAM

Cites background & methods · Highly influenced

MAX2: An ReRAM-Based Neural Network Accelerator That Maximizes Data Reuse and Area Utilization

  • IEEE Journal on Emerging and Selected Topics in Circuits and Systems
  • 2019
Cites methods & background · Highly influenced

3D Stacked High Throughput Pixel Parallel Image Sensor with Integrated ReRAM Based Neural Accelerator

  • 2018 IEEE SOI-3D-Subthreshold Microelectronics Technology Unified Conference (S3S)
  • 2018
Cites methods · Highly influenced

RNSnet: In-Memory Neural Network Acceleration Using Residue Number System

  • 2018 IEEE International Conference on Rebooting Computing (ICRC)
  • 2018
Cites methods & background · Highly influenced

XNOR-POP: A processing-in-memory architecture for binary Convolutional Neural Networks in Wide-IO2 DRAMs

  • 2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED)
  • 2017
Cites methods & background · Highly influenced

Citation statistics

  • 74 highly influenced citations

  • Averaged 107 citations per year from 2017 through 2019

  • 12% increase in citations per year in 2019 over 2018

References

Publications referenced by this paper.
Showing 1-3 of 3 references.

DaDianNao: A Machine-Learning Supercomputer

  • 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture
  • 2014
Highly influential