Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware

@article{Blouw2019BenchmarkingKS,
  title={Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware},
  author={Peter Blouw and Xuan Choo and Eric Hunsberger and Chris Eliasmith},
  journal={ArXiv},
  year={2019},
  volume={abs/1812.01739}
}
Using Intel's Loihi neuromorphic research chip and ABR's Nengo Deep Learning toolkit, we analyze the inference speed, dynamic power consumption, and energy cost per inference of a two-layer neural network keyword spotter trained to recognize a single phrase. [...] Our results indicate that for this real-time inference application, Loihi outperforms all of these alternatives on an energy cost per inference basis while maintaining equivalent inference accuracy.
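The benchmark's central metric, energy cost per inference, is the product of dynamic power draw and per-inference latency. A minimal sketch of that calculation, using hypothetical numbers rather than the paper's measured values:

```python
def energy_per_inference(dynamic_power_w: float, inference_time_s: float) -> float:
    """Energy per inference in joules: dynamic power (W) times inference time (s)."""
    return dynamic_power_w * inference_time_s

# Hypothetical illustration only (not figures from the paper):
# a chip drawing 50 mW of dynamic power, completing one inference in 5 ms.
energy_j = energy_per_inference(0.05, 0.005)
print(f"{energy_j * 1e6:.1f} microjoules per inference")
```

This is why a slower chip can still win on this metric: halving dynamic power more than compensates for a modest increase in inference time.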
Neuromorphic Hardware Accelerator for SNN Inference based on STT-RAM Crossbar Arrays
TLDR
The proposed STT-RAM based neurosynaptic core designed in 28 nm technology node has approximately 6× higher throughput per unit Watt and unit area than an equivalent SRAM based design and achieves ∼ 2× higher performance per Watt compared to other memristive neural network accelerator designs in the literature.
NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi
TLDR
NxTF, a programming interface derived from Keras and a compiler optimized for mapping deep convolutional SNNs to the multi-core Intel Loihi architecture, is developed; NxTF is evaluated on deep neural networks trained directly on spikes as well as on models converted from traditional DNNs.
Event-Driven Signal Processing with Neuromorphic Computing Systems
  • Peter Blouw, C. Eliasmith
  • Computer Science
    ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2020
TLDR
This paper provides an overview of tools and methods for building applications that run on neuromorphic computing devices, and shows that replacing floating point operations in a conventional neural network with synaptic operations in a spiking neural network results in a roughly 4× energy reduction, with minimal performance loss.
NengoFPGA: an FPGA Backend for the Nengo Neural Simulator
TLDR
An embedded Python-capable PYNQ FPGA implementation supported with a Xilinx Vivado High-Level Synthesis (HLS) workflow that allows sub-millisecond implementation of adaptive neural networks with low-latency, direct I/O access to the physical world and a seamless and user-friendly extension to the neural compiler Python package Nengo.
Neural Network Acceleration and Voice Recognition with a Flash-based In-Memory Computing SoC
  • Liang Zhao, Shifan Gao, +5 authors Yi Zhao
  • Computer Science
    2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
  • 2021
TLDR
A fully integrated system-on-chip (SoC) design with embedded Flash memories as the neural network accelerator is presented to enable efficient AI inference for resource-constrained voice recognition.
Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook
TLDR
This survey reviews results that are obtained to date with Loihi across the major algorithmic domains under study, including deep learning approaches and novel approaches that aim to more directly harness the key features of spike-based neuromorphic hardware.
Comparing Loihi with a SpiNNaker 2 prototype on low-latency keyword spotting and adaptive robotic control
TLDR
This work highlights the benefit of the multiply-accumulate (MAC) array in the SpiNNaker 2 prototype, which is ordinarily used for rate-based machine learning networks; when employed in a neuromorphic, spiking context, it shows better efficiency when high-dimensional vector-matrix multiplication is involved.
Building a Comprehensive Neuromorphic Platform for Remote Computation
TLDR
This paper discusses methods, motivated by recent results, to produce a cohesive neuromorphic system that effectively integrates novel and traditional algorithms for context-driven remote computation.
μBrain: An Event-Driven and Fully Synthesizable Architecture for Spiking Neural Networks
The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) at the edge is a candidate solution that can meet strict energy and cost constraints.
WaveSense: Efficient Temporal Convolutions with Spiking Neural Networks for Keyword Spotting
TLDR
The results show that the proposed network surpasses the state of the art among spiking neural networks and reaches near state-of-the-art performance of artificial neural networks such as CNNs and LSTMs.

References

Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
TLDR
Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon, and can solve LASSO optimization problems with over three orders of magnitude superior energy-delay product compared to conventional solvers running on a CPU at iso-process/voltage/area.
Training Spiking Deep Networks for Neuromorphic Hardware
We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets.
NengoDL: Combining Deep Learning and Neuromorphic Modelling Methods
NengoDL is a software framework designed to combine the strengths of neuromorphic modelling and deep learning. NengoDL allows users to construct biologically detailed neural models and intermix those models with deep learning elements.
TensorFlow: A system for large-scale machine learning
TLDR
The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.
Small-footprint keyword spotting using deep neural networks
TLDR
This application requires a keyword spotting system with a small memory footprint, low computational cost, and high precision; a simple approach based on deep neural networks is proposed that achieves a 45% relative improvement with respect to a competitive Hidden Markov Model-based system.
Nengo: a Python tool for building large-scale functional brain models
TLDR
Nengo 2.0 is described, which is implemented in Python and uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
TLDR
This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems of sequence learning and post-processing.