BinarEye: An always-on energy-accuracy-scalable binary CNN processor with all memory on chip in 28nm CMOS

@inproceedings{Moons2018BinarEyeAA,
  title     = {BinarEye: An always-on energy-accuracy-scalable binary CNN processor with all memory on chip in 28nm CMOS},
  author    = {Bert Moons and Daniel Bankman and Lita Yang and Boris Murmann and Marian Verhelst},
  booktitle = {2018 IEEE Custom Integrated Circuits Conference (CICC)},
  year      = {2018},
  pages     = {1--4}
}
This paper introduces BinarEye: the first digital processor for always-on Binary Convolutional Neural Networks. The chip maximizes data reuse through a Neuron Array exploiting local weight flip-flops. It stores full network models and feature maps on chip and hence requires no off-chip bandwidth, which leads to a 230 1b-TOPS/W peak efficiency. Its three levels of flexibility, (a) weight reconfiguration, (b) a programmable network depth, and (c) a programmable network width, allow trading energy for accuracy.
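The core arithmetic that makes binary CNN processors like this one so efficient is the XNOR-popcount trick: with {-1, +1} activations and weights each packed into single bits, a multiply-accumulate reduces to a bitwise XNOR followed by a population count. A minimal sketch of that identity (the general BNN technique, not BinarEye's actual Neuron Array microarchitecture):

```python
# Illustrative sketch of the XNOR-popcount trick used by binary CNNs.
# Convention (assumed here): bit 1 encodes +1, bit 0 encodes -1.

def binary_dot(acts: int, weights: int, n: int) -> int:
    """Dot product of n {-1,+1} values packed as bits.

    For bit-packed operands the identity is:
        sum(a_i * w_i) = 2 * popcount(XNOR(a, w)) - n
    since XNOR marks positions where the two signs agree.
    """
    mask = (1 << n) - 1
    agreements = bin((~(acts ^ weights)) & mask).count("1")  # XNOR + popcount
    return 2 * agreements - n

# acts    0b1011 -> [+1, -1, +1, +1]
# weights 0b1101 -> [+1, +1, -1, +1]
# products: +1, -1, -1, +1 -> sum = 0
print(binary_dot(0b1011, 0b1101, 4))  # -> 0
```

A single machine word therefore carries 32 or 64 such multiply-accumulates per XNOR/popcount pair, which is why dedicated 1-bit datapaths reach peak efficiencies in the hundreds of TOPS/W.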
