Relational representation learning with spike trains

@article{Dold2022RelationalRL,
  title={Relational representation learning with spike trains},
  author={Dominik Dold},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.09140}
}
  • D. Dold
  • Published 18 May 2022
  • Computer Science
  • ArXiv
Relational representation learning has lately received an increase in interest due to its flexibility in modeling a variety of systems like interacting particles, materials and industrial projects, e.g., for the design of spacecraft. A prominent class of methods for dealing with relational data are knowledge graph embedding algorithms, where entities and relations of a knowledge graph are mapped to a low-dimensional vector space while preserving its semantic structure. Recently, a graph embedding… 
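The embedding idea described in the abstract can be sketched with a conventional (non-spiking) scoring model. The TransE-style score below is one common choice for knowledge graph embeddings and is an illustrative assumption, not the paper's spike-based method; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph embedding: entities and relations mapped to a
# low-dimensional vector space (sizes are arbitrary for illustration).
num_entities, num_relations, dim = 5, 2, 4
E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def score(head, relation, tail):
    """TransE-style score: lower means the triple (head, relation, tail)
    is more plausible, i.e., the relation vector approximately
    translates the head embedding onto the tail embedding."""
    return np.linalg.norm(E[head] + R[relation] - E[tail])

# Rank candidate tails for a (head, relation) query by score.
ranked = sorted(range(num_entities), key=lambda t: score(0, 1, t))
```

During training such embeddings are typically optimized so that observed triples score better than corrupted ones, which is what "preserving the semantic structure" of the graph amounts to in practice.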
Neuro-symbolic computing with spiking neural networks
TLDR
This work extends previous work on spike-based graph algorithms by demonstrating how symbolic and multi-relational information can be encoded using spiking neurons, allowing reasoning over symbolic structures like knowledge graphs with spiking neural networks.

References

SHOWING 1-10 OF 55 REFERENCES
SpikE: spike-based embeddings for multi-relational graph data
  • D. Dold, J. Garrido
  • Computer Science
    2021 International Joint Conference on Neural Networks (IJCNN)
  • 2021
TLDR
A spike-based algorithm where nodes in a graph are represented by single spike times of neuron populations and relations as spike time differences between populations is proposed, compatible with recently proposed frameworks for training spiking neural networks.
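The encoding summarized in this TLDR can be sketched in a few lines: each node is a vector of single spike times of a neuron population, and a relation is a vector of target spike-time differences. The plausibility function below is an illustrative assumption about how such a representation could be scored, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_size = 8  # neurons per population (hypothetical size)

# Each graph node is represented by the spike times of one population.
t_subject = rng.uniform(0.0, 1.0, pop_size)
t_object = rng.uniform(0.0, 1.0, pop_size)

# A relation is a vector of spike-time differences between populations.
delta_rel = rng.uniform(-0.5, 0.5, pop_size)

def plausibility(t_s, t_o, delta):
    """A triple fits well when the observed spike-time differences
    t_s - t_o match the relation's target differences (score <= 0,
    with 0 meaning a perfect match)."""
    return -np.abs((t_s - t_o) - delta).sum()
```

As noted in the TLDR, an encoding of this form is compatible with gradient-based training frameworks for spiking networks, since spike times can be differentiated with respect to the network parameters.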
S4NN: temporal backpropagation for spiking neural networks with one spike per neuron
TLDR
This work derives a new learning rule for multilayer spiking neural networks, named S4NN, akin to traditional error backpropagation, yet based on latencies, and shows how approximated error gradients can be computed backward in a feedforward network with any number of layers.
Fast and energy-efficient neuromorphic deep learning with first-spike times
TLDR
A rigorous derivation of a learning rule for such first-spike times in networks of leaky integrate-and-fire neurons, relying solely on input and output spike times, is described, and it is shown how this mechanism can implement error backpropagation in hierarchical spiking networks.
Learning Through Structure: Towards Deep Neuromorphic Knowledge Graph Embeddings
TLDR
This work proposes a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures, and compose a frozen neural network with shallow knowledge graph embedding models that leads to a significant speedup and memory reduction while maintaining a competitive performance level.
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks
TLDR
SuperSpike is derived, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns.
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
TLDR
For sequential and streaming tasks, this work demonstrates how a novel type of adaptive spiking recurrent neural network (SRNN) achieves state-of-the-art performance among spiking neural networks and approaches or exceeds the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity.
Event-based backpropagation can compute exact gradients for spiking neural networks
TLDR
This work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivative jumps, allowing for backpropagation through discrete spike events without approximations.
Temporal Coding in Spiking Neural Networks with Alpha Synaptic Function
TLDR
This work proposes a spiking neural network model that encodes information in the relative timing of individual neuron spikes and performs classification using the first output neuron to spike, and successfully train the network on the MNIST dataset encoded in time.
Biologically Plausible, Human-scale Knowledge Representation
TLDR
It is argued that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition.
Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks
TLDR
This article elucidates step-by-step the problems typically encountered when training SNNs, guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting, and introduces surrogate gradient methods as a particularly flexible and efficient way to overcome these challenges.
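The core trick behind surrogate gradient learning can be sketched concisely: the spike nonlinearity is a hard threshold with zero derivative almost everywhere, so the backward pass replaces it with a smooth surrogate. The fast-sigmoid derivative below is one common choice; `beta` is an assumed hyperparameter, and the function names are illustrative.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard threshold on the membrane potential v
    (1.0 if the neuron spikes, 0.0 otherwise)."""
    return np.where(np.asarray(v) >= threshold, 1.0, 0.0)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid centered on the
    threshold, used in place of the step function's true (zero or
    undefined) derivative so that gradients can flow."""
    return 1.0 / (beta * np.abs(np.asarray(v) - threshold) + 1.0) ** 2
```

In an actual training loop the surrogate only affects the backward pass; the forward dynamics still use the hard threshold, which is what lets standard gradient-based optimizers train networks of binary-spiking neurons.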
...