Lattice map spiking neural networks (LM-SNNs) for clustering and classifying image data

@article{Hazan2019LatticeMS,
  title={Lattice map spiking neural networks (LM-SNNs) for clustering and classifying image data},
  author={Hananel Hazan and Daniel J. Saunders and Darpan T. Sanghavi and Hava T. Siegelmann and Robert Thijs Kozma},
  journal={Annals of Mathematics and Artificial Intelligence},
  year={2019},
  volume={88},
  pages={1237--1260}
}
Spiking neural networks (SNNs) with a lattice architecture are introduced in this work, combining several desirable properties of SNNs and self-organizing maps (SOMs). Networks are trained with biologically motivated, unsupervised learning rules to obtain a self-organized grid of filters via cooperative and competitive excitatory-inhibitory interactions. Several inhibition strategies are developed and tested, such as (i) incrementally increasing inhibition level over the course of network…
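
The abstract names one concrete strategy: lateral inhibition on a lattice whose strength is ramped up during training. The following is a minimal sketch of that idea only, assuming a 2D grid, distance-scaled inhibitory weights, and a linear ramp schedule; none of these choices is taken from the paper.

```python
import numpy as np

# Hypothetical sketch of incrementally increased, distance-dependent lateral
# inhibition on a 2D lattice of excitatory neurons. Grid size, the ramp
# schedule, and max_inhib are illustrative assumptions.

GRID = 10                      # 10x10 lattice of excitatory neurons
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)])

# Pairwise Euclidean distances between lattice sites.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def inhibition_matrix(step, total_steps, max_inhib=17.5):
    """Inhibitory weights at a given training step.

    Inhibition grows linearly from 0 to max_inhib over training, and nearby
    neurons inhibit each other less than distant ones, which encourages
    neighboring filters to become similar (SOM-like topological maps).
    """
    level = max_inhib * step / total_steps          # incrementally increased
    w = -level * (dist / dist.max())                # weaker for close neighbors
    np.fill_diagonal(w, 0.0)                        # no self-inhibition
    return w

w_early = inhibition_matrix(step=100, total_steps=10000)
w_late = inhibition_matrix(step=9900, total_steps=10000)
print(w_early.min(), w_late.min())   # inhibition deepens over training
```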

Making a Spiking Net Work: Robust brain-like unsupervised machine learning

This work shows how an SNN can overcome many of the shortcomings identified in the literature: it offers a principled solution to the dynamical “vanishing spike problem”, outperforms all existing shallow SNNs, and equals the performance of an ANN.

On the Self-Repair Role of Astrocytes in STDP Enabled Unsupervised SNNs

The degree of self-repair that can be enabled in such networks with varying degrees of faults, ranging from 50% to 90%, is characterized, and the proposal is evaluated on the MNIST and Fashion-MNIST datasets.

Spiking Neural Networks and Their Applications: A Review

A comprehensive review of theories of biological neurons studied in neuroscience is given, and existing spiking neural network applications in the computer vision and robotics domains are covered.

lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents

The experimental results show that the lpSpikeCon methodology can reduce the weight memory of the SNN model by 8x with no accuracy loss in the inference phase, compared to the baseline model with 32-bit weights, across different network sizes.

FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks.

FSpiNN is an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference, with unsupervised learning capability, while maintaining accuracy. It does so by reducing the computational requirements of neuronal and STDP operations, improving the accuracy of STDP-based learning, compressing the SNN through fixed-point quantization, and incorporating the memory and energy requirements into the optimization process.

Q-SpiNN: A Framework for Quantizing Spiking Neural Networks

Q-SpiNN, a novel quantization framework for memory-efficient SNNs, is proposed; it applies quantization to different SNN parameters based on their significance to accuracy, quantifies the benefit of the memory-accuracy trade-off obtained by each candidate scheme, and selects the Pareto-optimal one.
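
The summary describes a search over quantization candidates scored by a memory-accuracy trade-off. Here is a hedged sketch of that pattern, not the framework's actual API: uniform weight quantization at several bit-widths, with a mean quantization error standing in for the accuracy evaluation.

```python
import numpy as np

# Illustrative exploration of quantization candidates and Pareto selection.
# The candidate set, the toy weight matrix, and the error proxy are assumptions.

def quantize(weights, bits):
    """Uniform quantization of weights to a given bit-width."""
    levels = 2 ** bits - 1
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / levels
    return lo + np.round((weights - lo) / step) * step

def pareto_front(candidates):
    """Keep candidates not dominated in both memory and error."""
    return [c for c in candidates
            if not any(o["mem"] <= c["mem"] and o["err"] <= c["err"]
                       and o != c for o in candidates)]

rng = np.random.default_rng(0)
w = rng.normal(size=(784, 100))          # toy SNN weight matrix

candidates = []
for bits in (2, 4, 8, 16):
    wq = quantize(w, bits)
    candidates.append({
        "bits": bits,
        "mem": w.size * bits / 8,                   # bytes
        "err": float(np.abs(w - wq).mean()),        # stand-in for accuracy loss
    })

for c in pareto_front(candidates):
    print(c)
```

With this toy error proxy every bit-width lands on the front; a real accuracy evaluation would prune dominated candidates.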

TripleBrain: A Compact Neuromorphic Hardware Core With Fast On-Chip Self-Organizing and Reinforcement Spike-Timing Dependent Plasticity

The TripleBrain hardware core is proposed, which tightly combines three common brain-inspired mechanisms, namely spike-based processing and plasticity, the self-organizing map (SOM) mechanism, and a reinforcement learning scheme, to improve object recognition accuracy and processing throughput while keeping resource costs low.
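
One of the three mechanisms named above is reinforcement STDP. A common formulation, sketched below as an assumption rather than TripleBrain's design, feeds STDP pairings into a per-synapse eligibility trace and lets a scalar reward convert the trace into weight changes.

```python
import numpy as np

# Hedged sketch of reward-modulated (reinforcement) STDP. All constants and
# the trace dynamics are illustrative assumptions.

rng = np.random.default_rng(1)
n_pre, n_post = 50, 10
w = rng.uniform(0.0, 0.5, size=(n_pre, n_post))
elig = np.zeros_like(w)        # eligibility trace per synapse
pre_tr = np.zeros(n_pre)       # low-pass trace of presynaptic spikes
post_tr = np.zeros(n_post)     # low-pass trace of postsynaptic spikes

TAU_PRE, TAU_POST, TAU_E = 20.0, 20.0, 100.0
A_PLUS, A_MINUS, LR = 0.01, 0.012, 0.1

def step(pre, post, reward, dt=1.0):
    """One timestep: pre/post are 0/1 spike vectors, reward is a scalar."""
    pre_tr[:] = pre_tr * np.exp(-dt / TAU_PRE) + pre
    post_tr[:] = post_tr * np.exp(-dt / TAU_POST) + post
    # pre-before-post pairings potentiate, post-before-pre pairings depress
    elig[:] = (elig * np.exp(-dt / TAU_E)
               + A_PLUS * np.outer(pre_tr, post)
               - A_MINUS * np.outer(pre, post_tr))
    w[:] = np.clip(w + LR * reward * elig, 0.0, 1.0)  # reward gates learning

# Toy usage: random spikes, reward only when output neuron 0 fires.
for t in range(200):
    pre = (rng.random(n_pre) < 0.1).astype(float)
    post = (rng.random(n_post) < 0.05).astype(float)
    step(pre, post, reward=1.0 if post[0] else 0.0)
```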

Memory via Temporal Delays in weightless Spiking Neural Network

A prototype weightless spiking neural network is presented that can perform a simple classification task; it is trained using Hebbian spike-timing-dependent plasticity (STDP), which modulates the delays of the connections.
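
Since the learned quantity here is the connection delay rather than the weight, a minimal sketch of delay plasticity is shown below. The specific update rule, shifting each delay so the presynaptic spike's arrival moves toward the postsynaptic spike time, is an illustrative choice, not the paper's exact rule.

```python
import numpy as np

# Hedged sketch of Hebbian delay plasticity: delays, not weights, adapt so
# that delayed presynaptic spikes arrive coincident with the post spike.
# Learning rate and delay bounds are assumptions.

rng = np.random.default_rng(2)
n_pre = 20
delays = rng.uniform(1.0, 10.0, size=n_pre)   # ms

ETA, D_MIN, D_MAX = 0.1, 0.5, 15.0

def update_delays(pre_times, post_time):
    """Move arrival times (pre_time + delay) toward the post spike time."""
    global delays
    arrival = pre_times + delays
    # Positive error: spike arrived too early, so lengthen the delay; and vice versa.
    delays = np.clip(delays + ETA * (post_time - arrival), D_MIN, D_MAX)

pre_times = rng.uniform(0.0, 5.0, size=n_pre)
for _ in range(50):
    update_delays(pre_times, post_time=8.0)
print(np.allclose(pre_times + delays, 8.0, atol=0.5))  # arrivals align on the post spike
```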

tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks

tinySNN effectively compresses a given SNN model to achieve high accuracy in a memory- and energy-efficient manner, enabling the employment of SNNs in resource- and energy-constrained embedded applications.

A Spiking Neural Network Based Auto-encoder for Anomaly Detection in Streaming Data

It is shown that SNNs are well suited for detecting anomalous character sequences, that they can learn rapidly, and that there are many optimizations to the SNN architecture and training that can improve anomaly detection performance.

References


Unsupervised Learning with Self-Organizing Spiking Neural Networks

A hybridization of self-organizing map properties with spiking neural networks that retains many of the features of SOMs is presented; with the optimal choice of parameters, this approach produces improvements over state-of-the-art spiking neural networks.

A Spiking Self-Organizing Map Combining STDP, Oscillations, and Continuous Learning

A network of integrate-and-fire neurons is presented that incorporates solutions to each of these issues through the neuron model and network structure, thereby representing a significant step toward further understanding of the self-organizational properties of the brain.

Self-Organization in Networks of Spiking Neurons

A self-organizing neural network based on a spiking neuron model is described, and it is shown how this network of laterally connected spiking neurons self-organizes into a topological map in response to external stimulation.

STDP-based spiking deep convolutional neural networks for object recognition

The results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption.
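
The latency coding referenced here maps stronger inputs to earlier spikes, so a single spike wave carries the stimulus. The linear intensity-to-latency mapping below is one common choice, shown as an assumption rather than this paper's encoder.

```python
import numpy as np

# Minimal sketch of time-to-first-spike (latency) coding; t_max and the
# linear mapping are illustrative assumptions.

def intensity_to_latency(image, t_max=20.0):
    """Map pixel intensities in [0, 1] to first-spike times in [0, t_max]:
    intensity 1.0 fires at t=0, intensity 0 never fires (np.inf)."""
    img = np.asarray(image, dtype=float)
    return np.where(img > 0, (1.0 - img) * t_max, np.inf)

image = np.array([[0.0, 0.5],
                  [0.9, 1.0]])
print(intensity_to_latency(image))
# [[inf 10.]
#  [ 2.  0.]]
```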

Computation with Spikes in a Winner-Take-All Network

This work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval, and uses a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events.
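
As a rough illustration of the spike-based WTA dynamics this summary analyzes, the toy model below has leaky integrate-and-fire units race to threshold, with the first unit to spike silencing all competitors; all parameters are assumptions.

```python
import numpy as np

# Toy spiking winner-take-all: units integrate noisy input drive, and the
# first to reach threshold wins and resets the rest (global inhibition).

rng = np.random.default_rng(3)
N, THRESH, LEAK = 5, 1.0, 0.95

rates = np.array([0.02, 0.05, 0.03, 0.08, 0.04])  # input drive per unit
v = np.zeros(N)

winner = None
for t in range(1000):
    v = v * LEAK + rates + 0.005 * rng.standard_normal(N)   # integrate + noise
    spiked = np.flatnonzero(v >= THRESH)
    if spiked.size:
        winner = spiked[np.argmax(v[spiked])]   # strongest crosser wins ties
        v[:] = 0.0                              # lateral inhibition resets all
        break

print("winner:", winner)  # with high probability the unit with rate 0.08
```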

Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing

The method for converting an ANN into an SNN enables low-latency classification with high accuracy already after the first output spike; compared with previous SNN approaches, it yields improved performance without increased training time.
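
The weight and threshold balancing named in the title can be illustrated by data-based weight normalization: rescale each layer so its peak activation over sample data becomes 1, i.e. balanced against a unit firing threshold. The two-layer ReLU network below is a stand-in for a trained ANN, an assumption for illustration only.

```python
import numpy as np

# Sketch of layer-wise data-based weight normalization for ANN-to-SNN
# conversion. The network, data, and scales are illustrative assumptions.

rng = np.random.default_rng(4)
relu = lambda x: np.maximum(x, 0.0)

W1 = rng.normal(scale=0.5, size=(784, 128))
W2 = rng.normal(scale=0.5, size=(128, 10))
X = rng.uniform(size=(256, 784))             # sample of training inputs

def normalize_layers(weights, X):
    """Divide each layer's weights by its peak activation (propagating the
    previous layer's scale) so every layer's activations stay in [0, 1]."""
    scaled, a, prev_scale = [], X, 1.0
    for W in weights:
        a = relu(a @ W)
        scale = a.max()
        scaled.append(W * prev_scale / scale)
        prev_scale = scale
    return scaled

W1n, W2n = normalize_layers([W1, W2], X)
a1n = relu(X @ W1n)
a2n = relu(a1n @ W2n)
print(a1n.max(), a2n.max())   # both 1.0: no unit exceeds a unit threshold
```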

Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

This paper shows conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset.

Training Deep Spiking Neural Networks Using Backpropagation

A novel technique is introduced that treats the membrane potentials of spiking neurons as differentiable signals, with the discontinuities at spike times considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks but works directly on spike signals and membrane potentials.
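
The key trick described here is now commonly called a surrogate gradient: the forward pass keeps the hard spike threshold, while the backward pass substitutes a smooth function of the membrane potential. The fast-sigmoid surrogate and the one-step toy network below are illustrative choices, not this paper's exact formulation.

```python
import numpy as np

# Minimal surrogate-gradient sketch: hard threshold forward, smooth
# derivative backward. Surrogate shape, learning rate, and the toy
# single-step network are assumptions.

THETA = 1.0

def spike(v):
    return (v >= THETA).astype(float)          # forward: hard threshold

def surrogate_grad(v, slope=5.0):
    # derivative of a fast sigmoid centered on the threshold
    return 1.0 / (1.0 + slope * np.abs(v - THETA)) ** 2

rng = np.random.default_rng(5)
W = rng.normal(scale=0.5, size=(20, 1))
x = rng.uniform(size=(8, 20))                  # batch of input spike counts
target = np.ones((8, 1))                       # want every output to fire

for epoch in range(200):
    v = x @ W                                  # membrane potential (no leak)
    s = spike(v)
    err = s - target                           # dL/ds for L = 0.5*(s-target)^2
    grad_W = x.T @ (err * surrogate_grad(v))   # surrogate replaces dH/dv
    W -= 0.5 * grad_W

print("firing fraction:", spike(x @ W).mean())  # approaches 1.0
```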

Unsupervised learning of digit recognition using spike-timing-dependent plasticity

An SNN for digit recognition is presented, based on mechanisms with increased biological plausibility: conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold.
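
Two of the listed mechanisms, the adaptive spiking threshold and lateral inhibition, combine into a homeostatic competition loop: a neuron's threshold rises each time it fires and slowly decays, so no single neuron can win for every input. The sketch below uses common constants from this line of work, not the paper's exact values.

```python
import numpy as np

# Hedged sketch of adaptive thresholds plus winner-take-all lateral
# inhibition; input drive, theta dynamics, and constants are assumptions.

rng = np.random.default_rng(6)
N = 10
V_THRESH, THETA_PLUS, TAU_THETA = 1.0, 0.05, 1e4

theta = np.zeros(N)            # per-neuron homeostatic threshold offset
wins = np.zeros(N, dtype=int)

for t in range(2000):
    # biased random drive: later neurons receive systematically more input
    drive = 0.8 + 0.5 * rng.uniform(size=N) + np.linspace(0.0, 0.2, N)
    eligible = drive >= V_THRESH + theta
    theta *= np.exp(-1.0 / TAU_THETA)                    # slow decay
    if eligible.any():
        # lateral inhibition: only the strongest eligible neuron spikes
        winner = np.argmax(np.where(eligible, drive, -np.inf))
        wins[winner] += 1
        theta[winner] += THETA_PLUS                      # raise its bar

print(wins)  # wins spread across neurons despite the input bias
```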

Gradient Descent for Spiking Neural Networks

A gradient descent method for optimizing spiking network models is presented, which introduces a differentiable formulation of spiking networks and derives the exact gradient calculation; it offers a general-purpose supervised learning algorithm for spiking neural networks, thus advancing further investigations of spike-based computation.
...