ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars

@inproceedings{Shafiee2016ISAACAC,
  title={ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars},
  author={Ali Shafiee and Anirban Nag and Naveen Muralimanohar and Rajeev Balasubramonian and John Paul Strachan and Miao Hu and R. Stanley Williams and Vivek Srikumar},
  booktitle={2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)},
  year={2016},
  pages={14--26}
}

Abstract

A number of recent efforts have attempted to design accelerators for popular machine learning algorithms, such as those involving convolutional and deep neural networks (CNNs and DNNs). These algorithms typically involve a large number of multiply-accumulate (dot-product) operations. A recent project, DaDianNao, adopts a near data processing approach, where a specialized neural functional unit performs all the digital arithmetic operations and receives input weights from adjacent eDRAM banks…
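
The abstract refers to in-situ analog dot-products computed inside memristive crossbars. As a rough, non-authoritative sketch of that idea (not ISAAC's actual design), the NumPy snippet below treats a weight matrix as cell conductances G and an input vector as row voltages V; each column then accumulates a current equal to one dot-product. The array size, conductance range, and voltage range are illustrative assumptions, and ISAAC's bit-slicing, DAC/ADC conversion, and pipelining are not modeled.

    import numpy as np

    # Minimal sketch of crossbar-style multiply-accumulate (illustrative only):
    # weights live in the array as conductances G; inputs arrive as row voltages V;
    # by Kirchhoff's current law each bitline collects I_j = sum_i V_i * G[i, j].
    rng = np.random.default_rng(0)
    rows, cols = 128, 128                      # assumed crossbar dimensions
    G = rng.uniform(1e-6, 1e-4, (rows, cols))  # assumed cell conductances (siemens)
    V = rng.uniform(0.0, 0.2, rows)            # assumed row voltages encoding activations

    I = V @ G      # analog accumulation: one dot-product per column, "in situ"
    print(I[:4])   # in a real design each current would then be digitized by an ADC
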
673 Citations (selected)

  • Trained Biased Number Representation for ReRAM-Based Neural Network Accelerators
  • A Versatile ReRAM-based Accelerator for Convolutional Neural Networks
  • Analog Weights in ReRAM DNN Accelerators
  • Input-Splitting of Large Neural Networks for Power-Efficient Accelerator with Resistive Crossbar Memory Array
  • PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning
  • PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-Efficient ReRAM
  • Making Memristive Neural Network Accelerators Reliable
  • Processing Convolutional Neural Networks on Cache
  • MAX2: An ReRAM-Based Neural Network Accelerator That Maximizes Data Reuse and Area Utilization
  • Deep Learning Acceleration with Neuron-to-Memory Transformation
