Loihi: A Neuromorphic Manycore Processor with On-Chip Learning

@article{Davies2018LoihiAN,
  title={Loihi: A Neuromorphic Manycore Processor with On-Chip Learning},
  author={Mike E. Davies and Narayan Srinivasa and Tsung-Han Lin and Gautham N. Chinya and Yongqiang Cao and Sri Harsha Choday and Georgios D. Dimou and Prasad Joshi and Nabil Imam and Shweta Jain and Yuyun Liao and Chit-Kwan Lin and Andrew Lines and Ruokun Liu and Deepak A. Mathaikutty and Steve McCoy and Arnab Paul and Jonathan Tse and Guruguhanathan Venkataramanan and Yi-Hsin Weng and Andreas Wild and Yoonseok Yang and Hong Wang},
  journal={IEEE Micro},
  year={2018},
  volume={38},
  pages={82--99}
}
Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. […] This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.


Mapping high-performance RNNs to in-memory neuromorphic chips

A new adaptive spiking neuron model is proposed that can be abstracted as a low-pass filter, enabling faster and better training of spiking networks with back-propagation, without simulating spikes.
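As a rough illustration of that abstraction, the sketch below treats a neuron's activation as a first-order low-pass filter of its input current; the time constant and discretization are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def low_pass_neuron(inputs, tau=10.0, dt=1.0):
    """Return the low-pass-filtered trace of an input-current sequence.

    Abstracts a spiking neuron as a first-order low-pass filter;
    tau and dt are illustrative choices, not the paper's parameters.
    """
    alpha = dt / tau                   # discrete filter coefficient
    trace = 0.0
    out = []
    for i in inputs:
        trace += alpha * (i - trace)   # first-order low-pass update
        out.append(trace)
    return np.array(out)

# A constant input converges toward its own value (unit DC gain),
# so gradients can flow through this smooth map with no spikes involved.
resp = low_pass_neuron(np.ones(200))
```

Because the map is smooth and differentiable everywhere, standard back-propagation applies directly, which is the point the paper exploits.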

A Digital Neuromorphic Hardware for Spiking Neural Network

A digital neuromorphic core with 1024 neurons, 1024 axons, and a 1024 × 1024 synaptic crossbar is designed, and a scalable network can be implemented on top of it using a 2D mesh network-on-chip (NoC) architecture.

A Mixed-Signal Structured AdEx Neuron for Accelerated Neuromorphic Cores

A multicompartment neuron circuit based on the adaptive-exponential I&F (AdEx) model is developed for the second-generation BrainScaleS hardware; it reproduces a diverse set of firing patterns observed in cortical pyramidal neurons.
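For readers unfamiliar with the model, the standard AdEx dynamics that such a circuit emulates can be simulated in a few lines; the parameter values below are generic textbook-style choices, not the chip's calibrated values.

```python
import numpy as np

def simulate_adex(I_ext, t_max=500.0, dt=0.1):
    """Euler-integrate the AdEx equations for a constant input current.

    Units: ms, mV, pA, pF, nS. Parameters are illustrative textbook
    values, not the BrainScaleS circuit's calibration.
    """
    C, g_L, E_L = 200.0, 10.0, -70.0     # capacitance, leak, rest
    V_T, Delta_T = -50.0, 2.0            # exponential threshold, slope
    a, tau_w, b_jump = 2.0, 120.0, 60.0  # adaptation parameters
    V_reset, V_spike = -58.0, 0.0
    V, w = E_L, 0.0
    spike_times = []
    for step in range(int(t_max / dt)):
        # Exponent clipped defensively to avoid numerical overflow.
        exp_term = g_L * Delta_T * np.exp(min((V - V_T) / Delta_T, 20.0))
        dV = (-g_L * (V - E_L) + exp_term + I_ext - w) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                 # spike, reset, adapt
            spike_times.append(step * dt)
            V = V_reset
            w += b_jump
    return spike_times

spikes = simulate_adex(500.0)            # 500-pA step current
isis = np.diff(spikes)
```

With a spike-triggered adaptation jump, successive inter-spike intervals lengthen, which is one of the firing patterns (spike-frequency adaptation) the circuit is designed to reproduce.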

PeleNet: A Reservoir Computing Framework for Loihi

The PeleNet framework aims to simplify reservoir computing for the neuromorphic hardware Loihi by providing an automatic and efficient distribution of networks over several cores and chips.
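As a rate-based analogue of the reservoirs such a framework distributes over cores and chips, the following echo-state-style sketch uses a fixed random recurrent network with a trained linear readout; the sizes and the toy signal are illustrative assumptions, and this is not PeleNet's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100                                  # reservoir size (assumed)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
w_in = rng.normal(0.0, 0.5, N)           # fixed input weights

T = 500
u = np.sin(np.arange(T) * 0.1)           # toy input signal
x = np.zeros(N)
states = []
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])     # fixed recurrent dynamics
    states.append(x.copy())
S = np.array(states)

# Only the linear readout is trained (ridge regression),
# here to predict the input one step ahead.
washout = 100
A, y = S[washout:-1], u[washout + 1:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)
mse = float(np.mean((A @ w_out - y) ** 2))
```

Keeping the recurrent weights fixed and training only the readout is what makes reservoir computing attractive for neuromorphic hardware, where on-chip plasticity is limited.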

In-Hardware Learning of Multilayer Spiking Neural Networks on a Neuromorphic Processor

This work presents a spike-based backpropagation algorithm with biologically plausible local update rules and adapts it to fit the constraints of neuromorphic hardware, enabling low-power in-hardware supervised online learning of multilayered SNNs for mobile applications.

MorphBungee: An Edge Neuromorphic Chip for High-Accuracy On-Chip Learning of Multiple-Layer Spiking Neural Networks

This work presents a digital edge neuromorphic chip for real-time, high-accuracy, on-chip multi-layer SNN learning in visual recognition tasks. The chip employs a hierarchical multi-core architecture, dynamically reconfigurable array parallelism, and a quasi-event-driven scheme to improve processing speed.

Brian2Loihi: An emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian

This work provides a coherent presentation of Loihi's computational unit and introduces a new, easy-to-use Loihi prototyping package that aims to help streamline the conceptualization and deployment of new algorithms.

The SpiNNaker 2 Processing Element Architecture for Hybrid Digital Neuromorphic Computing

This paper introduces the processing-element architecture of the second-generation SpiNNaker chip, implemented in 22-nm FDSOI, and presents three benchmarks showing operation of the whole processing element on SNN, DNN, and hybrid SNN/DNN networks.

Programming Spiking Neural Networks on Intel’s Loihi

The authors present the Loihi toolchain, which consists of an intuitive Python-based API for specifying SNNs, a compiler and runtime for building and executing SNNs on Loihi, and several target platforms (Loihi silicon, FPGA, and a functional simulator).

Porting Deep Spiking Q-Networks to neuromorphic chip Loihi

It is found that spiking neural networks have slightly decreased performance compared to non-spiking networks, but they can avoid the performance degradation caused by quantization and on-chip implementation, making the neuromorphic approach a promising avenue for deep Q-learning.
...

References

Showing 1–10 of 26 references

A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses

This paper presents a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems.

A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons

A new architecture is proposed to enable scalable learning algorithms for networks of spiking neurons in silicon, combining innovations in computation, memory, and communication with robust digital neuron circuits and novel transposable SRAM arrays.

A million spiking-neuron integrated circuit with a scalable communication network and interface

Inspired by the brain’s structure, an efficient, scalable, and flexible non–von Neumann architecture is developed that leverages contemporary silicon technology and is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification.

Sparse Coding by Spiking Neural Networks: Convergence Theory and Computational Results

This paper formulates a mathematical model of an SNN that can be configured for a sparse coding problem for feature extraction, and proves that the SNN indeed solves sparse coding, the first rigorous result of this kind.

Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

An event-driven random backpropagation (eRBP) rule is demonstrated that uses an error-modulated synaptic plasticity rule for learning deep representations in neuromorphic computing hardware, achieving nearly identical classification accuracies compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
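The core idea, projecting the output error through a fixed random feedback matrix rather than the transposed forward weights, can be sketched in a rate-based simplification; the network sizes and toy regression task below are illustrative assumptions, not the paper's spiking setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 2            # illustrative sizes

W1 = rng.normal(0.0, 0.5, (n_hid, n_in))  # trained forward weights
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))  # fixed random feedback matrix

X = rng.normal(size=(64, n_in))           # toy inputs
Y = X @ rng.normal(size=(n_out, n_in)).T  # linear teacher targets

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

lr, losses = 0.05, []
for epoch in range(200):
    H = sigmoid(X @ W1.T)                 # hidden rates
    err = H @ W2.T - Y                    # output error (linear readout)
    losses.append(float(np.mean(err ** 2)))
    # Error-modulated updates: B replaces W2.T in the hidden-layer rule.
    W2 -= lr * (err.T @ H) / len(X)
    W1 -= lr * (((err @ B.T) * H * (1.0 - H)).T @ X) / len(X)
```

The hidden-layer update needs only the locally available rate, its derivative, and a randomly projected error signal, which is what makes the rule attractive for event-driven neuromorphic hardware.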

Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification, and Spike Shifting

A model of supervised learning for biologically plausible neurons is presented that enables spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli even in the presence of various sources of noise and shows that the learning rule can also be used for decision-making tasks.

Polychronization: Computation with Spikes

We present a minimal spiking network that can polychronize, that is, exhibit reproducible time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids.

Local Information with Feedback Perturbation Suffices for Dictionary Learning in Neural Circuits

This work describes a network of spiking neurons that can solve the ℓ1-minimizing dictionary learning problem using only local information with feedback perturbation, representing the first model able to do so.

Optimal Sparse Approximation with Integrate and Fire Neurons

It is shown that the firing rate of the spiking LCA converges to the same solution as the analog LCA, with an error inversely proportional to the sampling time, and that when more biophysically realistic neuron parameters are used, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
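The analog LCA dynamics that the spiking variant's firing rates converge to can be sketched directly; the dictionary, signal, and threshold below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 50                        # signal dim, dictionary size
Phi = rng.normal(size=(n, m))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary atoms

a_true = np.zeros(m)
idx = rng.choice(m, 3, replace=False)
a_true[idx] = rng.uniform(1.0, 3.0, 3) * rng.choice([-1.0, 1.0], 3)
s = Phi @ a_true                     # signal built from 3 active atoms

lam, step = 0.1, 0.05                # threshold, Euler step (dt/tau)
b = Phi.T @ s                        # feedforward drive
G = Phi.T @ Phi - np.eye(m)          # lateral inhibition

def soft(u, lam):
    """Soft-threshold activation of the LCA."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(m)
for _ in range(2000):
    u += step * (b - u - G @ soft(u, lam))   # LCA membrane dynamics
a = soft(u, lam)                             # sparse code
```

At convergence the code `a` is a sparse approximation of `s` that decreases the LASSO energy 0.5·‖s − Φa‖² + λ‖a‖₁ relative to the zero code.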

Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

A neural network model is proposed and it is shown by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time.