Sparse Coding Using the Locally Competitive Algorithm on the TrueNorth Neurosynaptic System

Kaitlin L. Fair, Daniel R. Mendat, Andreas G. Andreou, Christopher J. Rozell, Justin K. Romberg, and David V. Anderson
Frontiers in Neuroscience
The Locally Competitive Algorithm (LCA) is a biologically plausible computational architecture for sparse coding, in which a signal is represented as a linear combination of elements from an over-complete dictionary. In this paper we map the LCA onto the brain-inspired IBM TrueNorth Neurosynaptic System. We discuss data structures and representation as well as the architecture of functional processing units that perform non-linear thresholding and vector-matrix multiplication. We also present…
Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook
This survey reviews the results obtained to date with Loihi across the major algorithmic domains under study, including deep learning approaches and novel approaches that aim to more directly harness the key features of spike-based neuromorphic hardware.
Neural Mini-Apps as a Tool for Neuromorphic Computing Insight
This work proposes following the example of high-performance computing, which uses context-capturing mini-apps and abstraction tools to explore the merits of computational architectures, and presents Neural Mini-Apps in a neural circuit tool called Fugu as a means of gaining NMC insight.
FPGA Based Emulation Environment for Neuromorphic Architectures
The proposed parameterized and configurable emulation platform serves as a basis for expanding its features to support emerging architectures, studying hypothetical neuromorphic architectures, or rapidly converging on a hardware configuration through incremental changes driven by bottlenecks as they become apparent during the application mapping process.
RANC: Reconfigurable Architecture for Neuromorphic Computing
This work presents RANC: a reconfigurable architecture for neuromorphic computing, an open-source, highly flexible ecosystem that enables rapid experimentation with neuromorphic architectures in both software via C++ simulation and hardware via FPGA emulation, and demonstrates the highly parameterized and configurable nature of RANC.
Implementation of the Neural Engineering Framework on the TrueNorth Neurosynaptic System
An implementation of the Neural Engineering Framework on IBM's TrueNorth Neurosynaptic System, in which the crossbar array architecture of the TrueNorth hardware itself can be used to compute the basic NEF calculations for a neural population of any size, representing any dimensionality.
Sparse Coding via Thresholding and Local Competition in Neural Circuits
A locally competitive algorithm (LCA) is described that solves a family of sparse coding problems by minimizing a weighted combination of mean-squared reconstruction error and a coefficient cost function, producing coefficients with sparsity levels comparable to the most popular centralized sparse coding algorithms while being readily suited to neural implementation.
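The LCA dynamics summarized above can be illustrated in a few lines of NumPy: each neuron leakily integrates a feed-forward drive, competes with its neighbors through lateral inhibition proportional to dictionary-atom overlap, and emits a coefficient only above a firing threshold. This is a minimal rate-based sketch under our own naming and parameter choices, not the TrueNorth implementation described in the paper above.

```python
import numpy as np

def soft_threshold(u, lam):
    """LCA activation: zero below the threshold, shrunk by lam above it."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(signal, dictionary, lam=0.1, tau=0.01, dt=0.001, steps=2000):
    """Rate-based LCA sketch.

    dictionary: (n, m) matrix Phi with unit-norm columns, m > n (over-complete)
    signal:     (n,) input vector s
    Returns sparse coefficients a approximately minimizing
    0.5 * ||s - Phi a||^2 + lam * ||a||_1.
    """
    n, m = dictionary.shape
    b = dictionary.T @ signal                   # feed-forward drive Phi^T s
    G = dictionary.T @ dictionary - np.eye(m)   # lateral inhibition Phi^T Phi - I
    u = np.zeros(m)                             # internal state (membrane potential)
    for _ in range(steps):
        a = soft_threshold(u, lam)              # currently active coefficients
        u += (dt / tau) * (b - u - G @ a)       # leaky integration with competition
    return soft_threshold(u, lam)

# Toy usage: recover a 3-sparse code from a random over-complete dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -0.8, 0.6]
s = Phi @ a_true
a_hat = lca(s, Phi, lam=0.05)
```

The thresholding step and the `G @ a` product correspond to the non-linear threshold and vector-matrix-multiplication processing units discussed in the main abstract; at steady state most entries of `a_hat` are exactly zero, which is the sparsity the competition enforces.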
Configurable hardware integrate and fire neurons for sparse approximation
Sparse coding with memristor networks
Sparse coding algorithms are implemented experimentally in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors, which enables efficient pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements.
Compass: A scalable simulator for an architecture for cognitive computing
  • Robert Preissl, T. Wong, D. Modha
  • 2012 International Conference for High Performance Computing, Networking, Storage and Analysis, 2012
Inspired by the function, power, and volume of the organic brain, we are developing TrueNorth, a novel modular, non-von Neumann, ultra-low power, compact architecture. TrueNorth consists of a…
Optimal Sparse Approximation with Integrate and Fire Neurons
It is shown that the firing rate of the spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time, and that when more biophysically realistic neuron parameters are used, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and to digital solvers.
An Analog Neural Network Inspired by Fractal Block Coding
A decompression algorithm for fractal block codes is sketched out, a recurrent neural network is implemented using physically simple but highly nonlinear analog circuit models of neurons and synapses, and a partial proof of the concept is presented.
Braindrop: A Mixed-Signal Neuromorphic Architecture With a Dynamical Systems-Based Programming Model
Two innovations, sparse encoding through analog spatial convolution and weighted spike-rate summation through digital accumulative thinning, cut digital traffic drastically, reducing the energy Braindrop consumes per equivalent synaptic operation to 381 fJ for typical network configurations.
Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores
A simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates is developed.
Real-time sensory information processing using the TrueNorth Neurosynaptic System
Results from ongoing experimental work on real-time sensory information processing using the TN architecture are discussed in three different areas: (i) spatial pattern processing (computer vision), (ii) temporal pattern processing (speech processing and recognition), and (iii) natural language processing (word similarity).