Corpus ID: 239998726

BioGrad: Biologically Plausible Gradient-Based Learning for Spiking Neural Networks

@article{Tang2021BioGradBP,
  title={BioGrad: Biologically Plausible Gradient-Based Learning for Spiking Neural Networks},
  author={Guangzhi Tang and Neelesh Kumar and Ioannis E. Polykretis and Konstantinos P. Michmizos},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14092}
}
Spiking neural networks (SNNs) have started to deliver energy-efficient, massively parallel, and low-latency solutions to AI problems, facilitated by the emerging neuromorphic hardware. To harness these computational benefits, SNNs need to be trained by learning algorithms that adhere to brain-inspired neuromorphic principles, namely event-based, local, and online computations. However, the state-of-the-art SNN training algorithms are based on backpropagation, which does not follow the above…
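The three neuromorphic principles the abstract names (event-based, local, and online) can be illustrated with a minimal toy sketch: a three-factor update in which each synapse changes using only its local presynaptic trace and a per-neuron error signal, one time step at a time. This illustrates the general principle only, not the BioGrad algorithm itself; all variable names and constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 3 inputs feeding 2 leaky integrate-and-fire neurons.
n_in, n_out = 3, 2
w = rng.normal(0.0, 0.5, size=(n_out, n_in))  # synaptic weights
v = np.zeros(n_out)        # membrane potentials
trace = np.zeros(n_in)     # presynaptic eligibility traces
v_th, tau, lr = 1.0, 0.9, 0.05

def step(spikes_in, error):
    """One event-based, local, online update (illustrative three-factor rule)."""
    trace[:] = tau * trace + spikes_in      # local: decaying presynaptic trace
    v[:] = tau * v + w @ spikes_in          # leaky integration of input events
    spikes_out = (v >= v_th).astype(float)
    v[spikes_out > 0] = 0.0                 # reset neurons that fired
    # Three-factor update: presynaptic trace x top-down error, applied online.
    w[:] -= lr * np.outer(error, trace)
    return spikes_out

out = step(np.array([1.0, 0.0, 1.0]), np.array([0.2, -0.1]))
```

Every quantity the update touches is local to the synapse or the neuron, which is what makes such rules a candidate for on-chip learning on neuromorphic hardware.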

Figures and Tables from this paper

References

SHOWING 1-10 OF 45 REFERENCES
Approximating Back-propagation for a Biologically Plausible Local Learning Rule in Spiking Neural Networks
TLDR
This work proposes an approximation of the backpropagation algorithm implemented entirely with spiking neurons, extends it to a local weight-update rule that resembles the biologically plausible spike-timing-dependent plasticity (STDP) learning rule, and tests the proposed algorithm on various traditional and non-traditional benchmarks with competitive results.
BP-STDP: Approximating Backpropagation using Spike Timing Dependent Plasticity
TLDR
This paper proposes a novel supervised learning approach based on an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons, which enjoys the benefits of both accurate gradient descent and temporally local, efficient STDP.
GLSNN: A Multi-Layer Spiking Neural Network Based on Global Feedback Alignment and Local STDP Plasticity
TLDR
This work gives an alternative method to train SNNs through biologically plausible structural and functional inspirations from the brain: global feedback connections, inspired by the brain's significant top-down structural connections, are combined with a differential STDP rule that optimizes local plasticity.
Deep Reinforcement Learning with Population-Coded Spiking Neural Network for Continuous Control
TLDR
A population-coded spiking actor network (PopSAN) trained in conjunction with a deep critic network using deep reinforcement learning (DRL) is proposed, which supports the efficiency of neuromorphic controllers and suggests the hybrid RL as an alternative to deep learning, when both energy-efficiency and robustness are important.
Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
TLDR
An event-driven random backpropagation (eRBP) rule is presented that uses error-modulated synaptic plasticity for learning deep representations, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
Advancing Neuromorphic Computing With Loihi: A Survey of Results and Outlook
TLDR
This survey reviews results obtained to date with Loihi across the major algorithmic domains under study, including deep learning approaches and novel approaches that aim to more directly harness the key features of spike-based neuromorphic hardware.
A solution to the learning dilemma for recurrent networks of spiking neurons
TLDR
A new mathematical insight shows how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning.
Spiking Neural Network on Neuromorphic Hardware for Energy-Efficient Unidimensional SLAM
TLDR
A brain-inspired spiking neural network (SNN) architecture that solves the unidimensional SLAM by introducing spike-based reference frame transformation, visual likelihood computation, and Bayesian inference is proposed.
Convolutional networks for fast, energy-efficient neuromorphic computing
TLDR
This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
SLAYER: Spike Layer Error Reassignment in Time
TLDR
A new general backpropagation mechanism for learning synaptic weights and axonal delays is introduced, which overcomes the non-differentiability of the spike function and uses a temporal credit-assignment policy for backpropagating error to preceding layers.
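The random backpropagation idea behind eRBP above can be sketched in a rate-based toy model: the backward pass replaces the transposed forward weights with a fixed random feedback matrix, so error delivery needs no weight transport. This is a hedged sketch under that assumption, not the event-driven eRBP implementation; the network sizes, tanh activation, and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hid, n_out = 4, 5, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0.0, 0.5, (n_hid, n_out))   # fixed random feedback matrix

x = rng.random(n_in)
target = np.array([1.0, 0.0])
lr = 0.01

losses = []
for _ in range(200):
    h = np.tanh(W1 @ x)                    # rate-based stand-in for spiking units
    y = W2 @ h
    e = y - target                         # output error
    losses.append(float(e @ e))
    # Backprop would propagate W2.T @ e; random BP uses the fixed matrix B.
    delta_h = (B @ e) * (1.0 - h ** 2)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

Despite the feedback weights never being learned, the forward weights tend to align with them over training, which is why this family of rules can still reduce the error.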
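SLAYER's starting point, the non-differentiability of the spike function, is commonly handled with a surrogate gradient: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth pseudo-derivative. The sketch below shows the idea with a fast-sigmoid-style surrogate; it is a generic illustration, not SLAYER's specific kernel or its temporal credit-assignment policy, and the slope constant is an arbitrary choice.

```python
import numpy as np

def spike(v, v_th=1.0):
    """Hard threshold used in the forward pass (zero gradient a.e.)."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, slope=10.0):
    """Smooth pseudo-derivative used in the backward pass instead of
    the true (degenerate) derivative of the spike function."""
    return 1.0 / (1.0 + slope * np.abs(v - v_th)) ** 2

v = np.array([0.2, 0.99, 1.0, 1.8])
s = spike(v)            # -> [0., 0., 1., 1.]
g = surrogate_grad(v)   # largest at the threshold, decaying away from it
```

The surrogate lets gradient-based training assign credit to neurons whose membrane potential was near threshold, even though the spike itself is a step function.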