YodaNN: An Architecture for Ultralow Power Binary-Weight CNN Acceleration

@article{Andri2016YodaNNAA,
  title={YodaNN: An Architecture for Ultralow Power Binary-Weight CNN Acceleration},
  author={Renzo Andri and Lukas Cavigelli and Davide Rossi and Luca Benini},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  year={2018},
  volume={37},
  number={1},
  pages={48-60}
}
  • Renzo Andri, Lukas Cavigelli, Davide Rossi, Luca Benini
  • Published in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018
  • Computer Science
  • Convolutional neural networks (CNNs) have revolutionized the world of computer vision over the last few years, pushing image classification beyond human accuracy. The computational effort of today’s CNNs requires power-hungry parallel processors or GP-GPUs. Recent developments in CNN accelerators for system-on-chip integration have reduced energy consumption significantly. Unfortunately, even these highly optimized devices are above the power envelope imposed by mobile and deeply embedded…
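
The paper's central technique, indicated by the title, is binary-weight CNN inference: filter weights are constrained to {-1, +1}, so every multiply-accumulate in a convolution reduces to a sign-controlled addition or subtraction and no hardware multipliers are needed. The NumPy sketch below is only a minimal illustration of that arithmetic for a single 2-D input channel; the function names binarize_weights and conv2d_binary_weight are invented for this example and do not describe YodaNN's actual datapath or dataflow.

import numpy as np

def binarize_weights(w):
    """Binarize real-valued weights to {-1, +1} by sign (zeros map to +1)."""
    return np.where(w >= 0, 1.0, -1.0)

def conv2d_binary_weight(x, w_bin):
    """Naive 2-D 'valid' convolution (no kernel flip, as in most DL frameworks)
    with binary weights.

    Because every weight is +1 or -1, each multiply-accumulate reduces to a
    sign-controlled addition or subtraction, which is what lets binary-weight
    accelerators drop hardware multipliers from the datapath.
    """
    H, W = x.shape
    kH, kW = w_bin.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kH, j:j + kW]
            # Add activations where the weight is +1, subtract where it is -1.
            out[i, j] = patch[w_bin > 0].sum() - patch[w_bin < 0].sum()
    return out

# Toy usage: an 8x8 activation map convolved with a binarized 3x3 filter.
x = np.random.randn(8, 8)
w = binarize_weights(np.random.randn(3, 3))
y = conv2d_binary_weight(x, w)
print(y.shape)  # (6, 6)

In binary-weight hardware of this kind, the stored weight bit simply selects add versus subtract at each accumulator input, which is where the area and energy savings over full fixed-point multipliers come from.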

    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 73 CITATIONS

    EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators

    CITES BACKGROUND

    Hyperdrive: A Systolically Scalable Binary-Weight CNN Inference Engine for mW IoT End-Nodes

    CITES BACKGROUND & METHODS

    A 34-FPS 698-GOP/s/W Binarized Deep Neural Network-Based Natural Scene Text Interpretation Accelerator for Mobile Edge Computing

    CITES BACKGROUND
    HIGHLY INFLUENCED

    Hyperdrive: A Multi-Chip Systolically Scalable Binary-Weight CNN Inference Engine

    CITES BACKGROUND & METHODS

    Design Automation for Binarized Neural Networks: A Quantum Leap Opportunity?

    CITES BACKGROUND

    Ternary MobileNets via Per-Layer Hybrid Filter Banks

    CITES BACKGROUND
    HIGHLY INFLUENCED

    An Energy-efficient Reconfigurable Hybrid DNN Architecture for Speech Recognition with Approximate Computing

    • Bo Liu, Shisheng Guo, +4 authors Jun Yang
    • Computer Science
    • 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)
    • 2018
    CITES BACKGROUND
    HIGHLY INFLUENCED

    CITATION STATISTICS

    • 4 Highly Influenced Citations

    • Averaged 23 Citations per year from 2017 through 2019

    • 100% Increase in citations per year in 2019 over 2018

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 49 REFERENCES

    14.1 A 126.1mW real-time natural UI/UX processor with embedded deep-learning core for low-power smart glasses

    HIGHLY INFLUENTIAL

    Origami: A 803-GOp/s/W Convolutional Network Accelerator


    4.6 A 1.93TOPS/W scalable deep learning/inference processor with tetra-parallel MIMD architecture for big-data applications

    HIGHLY INFLUENTIAL

    Accelerating real-time embedded scene labeling with convolutional networks


    14.6 A 1.42TOPS/W deep convolutional neural network recognition processor for intelligent IoE systems

    HIGHLY INFLUENTIAL