Neuromorphic Data Augmentation for Training Spiking Neural Networks

@inproceedings{Li2022NeuromorphicDA,
  title={Neuromorphic Data Augmentation for Training Spiking Neural Networks},
  author={Yuhang Li and Youngeun Kim and Hyoungseob Park and Tamar Geller and Priyadarshini Panda},
  booktitle={European Conference on Computer Vision},
  year={2022}
}
Developing neuromorphic intelligence on event-based datasets with Spiking Neural Networks (SNNs) has recently attracted much research attention. However, the limited size of event-based datasets makes SNNs prone to overfitting and unstable convergence. This issue remains unexplored by previous academic works. In an effort to minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets with…
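
As a rough illustration of the idea, the sketch below applies one randomly chosen geometric transform consistently across all time steps of an event-frame tensor, which preserves temporal consistency. The tensor layout (T, C, H, W), the transform set, and the magnitudes are illustrative assumptions, not the paper's exact augmentation policy.

# Hypothetical sketch of NDA-style geometric augmentation on event frames.
# Assumes events are already binned into a tensor of shape (T, C, H, W).
import random
import torch
import torchvision.transforms.functional as TF

def nda_augment(frames: torch.Tensor) -> torch.Tensor:
    """Apply one randomly chosen geometric transform to all time steps."""
    choice = random.choice(["flip", "roll", "rotate", "shear"])
    if choice == "flip":
        return TF.hflip(frames)
    if choice == "roll":
        dx, dy = random.randint(-5, 5), random.randint(-5, 5)
        return torch.roll(frames, shifts=(dy, dx), dims=(-2, -1))
    if choice == "rotate":
        return TF.rotate(frames, angle=random.uniform(-15, 15))
    # shear along x via an affine transform (magnitude in degrees, assumed)
    return TF.affine(frames, angle=0, translate=[0, 0], scale=1.0,
                     shear=[random.uniform(-10, 10), 0.0])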

Training Robust Spiking Neural Networks on Neuromorphic Data with Spatiotemporal Fragments

A novel Event SpatioTemporal Fragments (ESTF) augmentation method is proposed that preserves the continuity of neuromorphic data by drifting or inverting fragments of the spatiotemporal event stream to simulate the disturbance of brightness variations, leading to more robust spiking neural networks.
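
A minimal sketch of what a fragment-level perturbation could look like, assuming events binned to a (T, 2, H, W) tensor with ON/OFF polarity channels; the fragment selection, drift range, and inversion-as-channel-swap below are my illustrative assumptions, not ESTF's actual operations.

# Hypothetical sketch of an ESTF-style fragment perturbation.
import random
import torch

def estf_augment(frames: torch.Tensor) -> torch.Tensor:
    T, C, H, W = frames.shape
    t0 = random.randint(0, T - 1); t1 = random.randint(t0 + 1, T)
    y0 = random.randint(0, H // 2); x0 = random.randint(0, W // 2)
    h, w = H // 4, W // 4                       # fragment size (assumed)
    frag = frames[t0:t1, :, y0:y0 + h, x0:x0 + w]
    if random.random() < 0.5:                   # drift: shift the fragment
        frag = torch.roll(frag, shifts=(random.randint(-3, 3),
                                        random.randint(-3, 3)), dims=(-2, -1))
    else:                                       # invert: swap ON/OFF polarity
        frag = frag.flip(dims=(1,))
    out = frames.clone()
    out[t0:t1, :, y0:y0 + h, x0:x0 + w] = frag
    return out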

Spikeformer: A Novel Architecture for Training High-Performance Low-Latency Spiking Neural Network

A novel Transformer-based SNN, termed “Spikeformer”, is proposed, which outperforms its ANN counterpart on both static and neuromorphic datasets and may be an alternative architecture to CNNs for training high-performance SNNs.

Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting

The temporal efficient training (TET) approach is introduced to compensate for the loss of momentum in gradient descent with surrogate gradients (SG), so that the training process can converge into flatter minima with better generalizability.
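
The core of the TET objective is easy to state: the classification loss is applied at every time step and averaged, rather than applied once to the time-averaged output. A minimal sketch follows; the paper additionally uses a regularization term, omitted here, and the (T, N, num_classes) layout of `outputs` is an assumption.

import torch.nn.functional as F

def tet_loss(outputs, target):
    # outputs: (T, N, num_classes); per-step cross-entropy, averaged over T
    return sum(F.cross_entropy(o_t, target) for o_t in outputs) / len(outputs)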

Hoyer regularizer is all you need for ultra low-latency spiking neural networks

This work presents a training framework (from scratch) for one-time-step SNNs that uses a novel variant of the recently proposed Hoyer regularizer and outperforms existing spiking, binary, and adder neural networks in terms of the accuracy-FLOPs trade-off for complex image recognition tasks.
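
For reference, the classic Hoyer regularizer of a tensor is the squared ratio of its L1 and L2 norms; the paper proposes a novel variant, so the sketch below shows only the base quantity.

import torch

def hoyer_reg(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # (sum |x_i|)^2 / sum x_i^2; eps guards against an all-zero input
    return x.abs().sum() ** 2 / (x.pow(2).sum() + eps)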

Training Robust Spiking Neural Networks with ViewPoint Transform and SpatioTemporal Stretching

A novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS), is proposed; it improves the robustness of SNNs by transforming rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints.
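
A sketch of the general idea, assuming raw events stored as an (N, 4) array of (t, x, y, p): rotate coordinates about a randomly sampled center and stretch timestamps. The sampling ranges and the out-of-frame handling are my assumptions, not VPT-STS's exact procedure.

import math
import numpy as np

def vpt_sts(events, H, W, max_deg=15.0, stretch=(0.8, 1.2)):
    t, x, y, p = events.T.astype(np.float64)
    cx, cy = np.random.uniform(0, W), np.random.uniform(0, H)  # rotation center
    a = math.radians(np.random.uniform(-max_deg, max_deg))
    xr = (x - cx) * math.cos(a) - (y - cy) * math.sin(a) + cx
    yr = (x - cx) * math.sin(a) + (y - cy) * math.cos(a) + cy
    tr = t * np.random.uniform(*stretch)                       # temporal stretch
    keep = (0 <= xr) & (xr < W) & (0 <= yr) & (yr < H)         # drop out-of-frame
    return np.stack([tr, xr, yr, p], axis=1)[keep]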

Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks

This paper proposes the Spatial Learning Through Time (SLTT) method, which achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.
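
SLTT's memory savings come from not backpropagating through time. The sketch below illustrates that ingredient only: the membrane state carried across steps is detached from the graph, so gradients flow through each step's spatial path alone, with a rectangular surrogate gradient standing in for the spike nonlinearity. This is a simplified illustration, not the full SLTT algorithm.

import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u, v_th):
        ctx.save_for_backward(u)
        ctx.v_th = v_th
        return (u >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        window = ((u - ctx.v_th).abs() < 0.5).float()  # rectangular surrogate
        return grad_out * window, None

def lif_step(u, x, tau=2.0, v_th=1.0):
    u = u.detach() / tau + x              # detach kills the temporal gradient path
    s = SurrogateSpike.apply(u, v_th)
    return s, u - s.detach() * v_th       # soft reset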

Training Stronger Spiking Neural Networks with Biomimetic Adaptive Internal Association Neurons

This paper proposes a novel Adaptive Internal Association (AIA) neuron model that adapts to input stimuli, with internal associative learning occurring only when both dendrites are stimulated at the same time; it achieves state-of-the-art performance on the DVS-CIFAR10 and N-CARS datasets.

Data Augmentation in Temporal and Polar Domains for Event-Based Learning

This simulation improves generalization by increasing the robustness of models against brightness variations and other changes in event properties, and is broadly effective, surpassing previous state-of-the-art performance.
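
As a rough sketch of augmentation in the temporal and polar domains, assuming raw events as an (N, 4) array of (t, x, y, p) with binary polarity; the shift fraction and flip probability are illustrative choices, not the paper's settings.

import numpy as np

def temporal_polar_augment(events, shift_frac=0.1, flip_prob=0.5):
    out = events.astype(np.float64).copy()
    span = out[:, 0].max() - out[:, 0].min()
    out[:, 0] += np.random.uniform(-shift_frac, shift_frac) * span  # time shift
    if np.random.rand() < flip_prob:
        out[:, 3] = 1.0 - out[:, 3]                                 # invert polarity
    return out  # timestamp re-normalization omitted for brevity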

Spikformer: When Spiking Neural Network Meets Transformer

This work drops the complex softmax operation in SSA and performs the matrix dot-product directly on spike-form Query, Key, and Value, which is efficient, avoids multiplications, and makes Spikformer work surprisingly well on both static and neuromorphic datasets.
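
The appeal of softmax-free spiking self-attention is that with binary Q, K, and V, the product Q Kᵀ V reduces to additions. A shape-level sketch under assumed (N, L, D) inputs, omitting Spikformer's multi-head splits, batch normalization, and surrogate-gradient spiking neurons; the scale value is an assumed hyperparameter.

import torch

def spiking_self_attention(q, k, v, scale=0.125, v_th=1.0):
    # q, k, v: binary spike tensors of shape (N, L, D)
    attn = q @ k.transpose(-2, -1)        # (N, L, L), integer-valued counts
    out = attn @ v * scale                # (N, L, D), rescaled accumulation
    return (out >= v_th).float()          # re-spike (surrogate gradient omitted)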

EventMix: An Efficient Augmentation Strategy for Event-Based Data

This paper carefully designs the mixing of different event streams, using a Gaussian Mixture Model to generate random 3D masks that achieve arbitrary-shape mixing of event streams in the spatio-temporal dimension, and proposes a more reasonable way to assign labels to the mixed samples.
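
A much-simplified sketch of the mixing step: EventMix draws irregular 3D masks from a Gaussian Mixture Model, whereas the sketch below uses a plain random cuboid purely for illustration, with the mixed label weighted by the mask's volume fraction.

import random
import torch

def event_mix(frames_a, frames_b):
    T, C, H, W = frames_a.shape
    mask = torch.zeros(T, 1, H, W)
    t0, t1 = sorted(random.sample(range(T + 1), 2))
    y0, h = random.randint(0, H // 2), H // 2
    x0, w = random.randint(0, W // 2), W // 2
    mask[t0:t1, :, y0:y0 + h, x0:x0 + w] = 1.0
    mixed = mask * frames_b + (1 - mask) * frames_a
    lam = mask.mean().item()              # weight for frames_b's label
    return mixed, lam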

References

Showing 1-10 of 71 references

Direct Training for Spiking Neural Networks: Faster, Larger, Better

This work proposes a neuron normalization technique to adjust neural selectivity, develops a direct learning algorithm for deep SNNs, and presents a PyTorch-based implementation method for training large-scale SNNs.

Going Deeper With Directly-Trained Larger Spiking Neural Networks

A threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed “STBP-tdBN”, enabling direct training of a very deep SNN and the efficient implementation of its inference on neuromorphic hardware is proposed.
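
A sketch of how tdBN is commonly implemented: statistics are computed jointly over the batch and time dimensions by folding time into the batch, and the affine scale is initialized proportionally to the firing threshold V_th. Details here follow that reading rather than a verified reimplementation.

import torch
import torch.nn as nn

class TdBN(nn.Module):
    def __init__(self, channels, v_th=1.0, alpha=1.0):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        nn.init.constant_(self.bn.weight, alpha * v_th)  # threshold-scaled gamma

    def forward(self, x):                  # x: (T, N, C, H, W)
        T, N, C, H, W = x.shape
        y = self.bn(x.reshape(T * N, C, H, W))
        return y.reshape(T, N, C, H, W)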

Revisiting Batch Normalization for Training Low-Latency Deep Spiking Neural Networks From Scratch

A temporal Batch Normalization Through Time (BNTT) technique is proposed and it is found that varying the BN parameters at every time-step allows the model to learn the time-varying input distribution better.
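
The mechanism is straightforward to sketch: one batch-norm module per time step, so the learnable parameters can track the time-varying input distribution (the paper's exact parameterization may differ).

import torch.nn as nn

class BNTT(nn.Module):
    def __init__(self, channels, timesteps):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(timesteps))

    def forward(self, x_t, t):             # x_t: (N, C, H, W) at time step t
        return self.bns[t](x_t)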

Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting

The temporal efficient training (TET) approach is introduced to compensate for the loss of momentum in gradient descent with surrogate gradients (SG), so that the training process can converge into flatter minima with better generalizability.

Converting Artificial Neural Networks to Spiking Neural Networks via Parameter Calibration

It is argued that simply copying and pasting the weights of an ANN into an SNN inevitably results in activation mismatch, especially for ANNs trained with batch normalization (BN) layers, and a set of layer-wise parameter calibration algorithms is proposed to adjust the parameters and minimize the activation mismatch.
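
As a hedged illustration of the calibration idea (not the paper's actual algorithms), one could record a layer's ANN and SNN activations on a few batches and shift the bias by the mean per-channel mismatch; the helper below is hypothetical.

import torch

@torch.no_grad()
def calibrate_bias(layer, ann_act, snn_act):
    # ann_act, snn_act: recorded outputs of this layer, shape (N, C, H, W)
    mismatch = (ann_act - snn_act).mean(dim=(0, 2, 3))  # per-channel error
    layer.bias.add_(mismatch)                           # shift bias to compensate
    return layer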

Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation

The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch.

Theory and Tools for the Conversion of Analog to Spiking Convolutional Neural Networks

A novel theory is provided that explains why traditional CNNs can be converted into deep spiking neural networks (SNNs), and several new tools are derived to convert a larger and more powerful class of deep networks into SNNs.
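
One widely used tool from this line of work is data-based weight normalization: rescale each layer by the ratio of consecutive layers' maximum (or high-percentile) activations so that firing rates stay within the spiking dynamic range. A sketch, with the layer traversal and data structures simplified:

import numpy as np

def normalize_weights(weights, biases, activations, percentile=99.9):
    # activations[l]: recorded ANN activations of layer l on sample data
    lam_prev = 1.0
    for l in range(len(weights)):
        lam = np.percentile(activations[l], percentile)
        weights[l] *= lam_prev / lam       # W <- W * lambda_{l-1} / lambda_l
        biases[l] /= lam                   # b <- b / lambda_l
        lam_prev = lam
    return weights, biases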

Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks

A novel method is presented to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation.

RecDis-SNN: Rectifying Membrane Potential Distribution for Directly Training Spiking Neural Networks

This work attempts to rectify the membrane potential distribution (MPD) by designing a novel distribution loss, MPD-Loss, which explicitly penalizes undesired shifts without introducing any additional operations in the inference phase, and can directly train a deeper, larger, and better-performing SNN within fewer timesteps.
...