Corpus ID: 226976144

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks

@article{Fang2020IncorporatingLM,
  title={Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks},
  author={Wei Fang and Zhaofei Yu and Yanqi Chen and Timoth{\'e}e Masquelier and Tiejun Huang and Yonghong Tian},
  journal={arXiv: Neural and Evolutionary Computing},
  year={2020}
}
Spiking Neural Networks (SNNs) have attracted enormous research interest due to their temporal information processing capability, low power consumption, and high biological plausibility. However, the formulation of efficient and high-performance learning algorithms for SNNs is still challenging. Most existing learning methods learn only the synaptic-related parameters and require manual tuning of the membrane-related parameters that determine the dynamics of single spiking neurons. These parameters…
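The idea sketched in the abstract — making the membrane time constant trainable alongside the synaptic weights — can be illustrated with a single leaky integrate-and-fire update step. The sketch below is a minimal, hedged illustration, not the paper's exact formulation: the parameterization `1/tau = sigmoid(w)` (so the decay factor stays in a valid range however `w` is trained) and the variable names are assumptions for this example.

```python
import math

def plif_step(v, x, w, v_th=1.0, v_reset=0.0):
    """One step of a LIF neuron with a learnable membrane time constant.

    Instead of hand-tuning tau, a trainable scalar w parameterizes it as
    1/tau = sigmoid(w), keeping the leak factor in (0, 1) for any w.
    Returns (spike, new_membrane_potential).
    """
    k = 1.0 / (1.0 + math.exp(-w))      # sigmoid(w) = 1/tau, the leak factor
    v = v + k * (x - (v - v_reset))     # leaky integration of input x
    spike = 1.0 if v >= v_th else 0.0   # fire when threshold is crossed
    v = v_reset if spike else v         # hard reset after a spike
    return spike, v
```

In a deep-learning framework, `w` would be registered as a trainable parameter and updated by backpropagation (with a surrogate gradient for the spike), so each layer can learn its own membrane dynamics rather than sharing a manually chosen tau.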
Keys to Accurate Feature Extraction Using Residual Spiking Neural Networks
TLDR
This paper designs a spiking version of the successful residual network (ResNet) architecture and tests different components and training strategies on it, providing a state-of-the-art guide to SNN design that allows informed choices when building an optimal visual feature extractor.
Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks
TLDR
The method enables training sparse spiking convolutional neural networks directly on event data using the popular deep learning framework PyTorch; its accuracy, sparsity, and training time make this bio-inspired approach suitable for future embedding of real-time applications on low-power neuromorphic hardware.
Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks
TLDR
This article focuses on the self-supervised learning problem of optical flow estimation from event-based camera inputs and investigates the changes to the state-of-the-art ANN training pipeline that are necessary to successfully tackle it with SNNs.
Spiking neural networks trained via proxy
TLDR
A new learning algorithm that trains spiking neural networks using conventional artificial neural networks (ANNs) as a proxy: by treating IF neurons with rate coding as an approximation of ReLU, the SNN's error is backpropagated through the proxy ANN to update the shared weights.
Deep Residual Learning in Spiking Neural Networks
TLDR
It is proved that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet, making it possible for the first time to directly train deep SNNs with more than 100 layers.
A Scatter-and-Gather Spiking Convolutional Neural Network on a Reconfigurable Neuromorphic Hardware
  • Chenglong Zou, Xiaoxin Cui, +4 authors Ru Huang
  • Frontiers in Neuroscience
  • 2021
Artificial neural networks (ANNs), like convolutional neural networks (CNNs), have achieved state-of-the-art results for many machine learning tasks. However, inference with large-scale…
Backpropagation with Biologically Plausible Spatio-Temporal Adjustment For Training Deep Spiking Neural Networks
  • Guobin Shen, Dongcheng Zhao, Yi Zeng
  • Computer Science
  • ArXiv
  • 2021
TLDR
A biologically plausible spatio-temporal adjustment that rethinks the relationship between membrane potential and spikes, realizes a reasonable adjustment of gradients across different time steps, and precisely controls error backpropagation along the spatial dimension, overcoming the temporal dependency within a single spike period of traditional spiking neurons.
Encrypted Internet traffic classification using a supervised Spiking Neural Network
TLDR
Results show that SNNs are an excellent fit for encrypted Internet traffic classification: they can be more accurate than conventional artificial neural networks (ANNs), and they could be implemented efficiently on low-power embedded systems.
Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain
TLDR
This study assesses whether neuromorphic datasets recorded from static images can evaluate the ability of SNNs to use spike timings in their computations, comparing N-MNIST and DvsGesture on two STDP algorithms that classify only spatial data and on STDP-tempotron, which classifies spatiotemporal data.
Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems
TLDR
The study presented in this paper reveals the design space created by the choice of each coding scheme, allowing designers to frame each scheme in terms of its strengths and weaknesses given a design's constraints and considerations in neuromorphic systems.

References

SHOWING 1-10 OF 82 REFERENCES
Effective sensor fusion with event-based sensors and deep network architectures
TLDR
Several methods are discussed for preprocessing spiking data from event-based sensors for use with various deep network architectures, including a deep fusion network composed of Convolutional Neural Networks and Recurrent Neural Networks.
LISNN: Improving Spiking Neural Networks with Lateral Interactions for Robust Object Recognition
TLDR
This work models the lateral interactions between spatially adjacent neurons, integrates them into the spiking-neuron membrane potential formula, and builds a multi-layer SNN on a popular deep learning framework…
RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network
TLDR
It is found that performance degradation in the converted SNN stems from using a "hard reset" spiking neuron that is driven to a fixed reset potential once its membrane potential exceeds the firing threshold, leading to information loss during SNN inference.
Direct Training for Spiking Neural Networks: Faster, Larger, Better
TLDR
This work proposes a neuron normalization technique to adjust neural selectivity, develops a direct learning algorithm for deep SNNs, and presents a PyTorch-based implementation method for training large-scale SNNs.
Spike-Train Level Backpropagation for Training Deep Recurrent Spiking Neural Networks
TLDR
The proposed ST-RSBP algorithm is able to train RSNNs with an accuracy surpassing that of current state-of-the-art SNN BP algorithms and conventional non-spiking deep learning models.
A Low Power, Fully Event-Based Gesture Recognition System
  • A. Amir, Brian Taba, +13 authors D. Modha
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real time at low power from…
CIFAR10-DVS: An Event-Stream Dataset for Object Classification
TLDR
This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification based on state-of-the-art classification algorithms.
Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification
TLDR
This paper shows conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. Expand
Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms
  • arXiv preprint arXiv:1708.07747,
  • 2017
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
TLDR
This work proposes a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform, and presents the conversion of two popular image datasets that have played important roles in the development of Computer Vision.