Bridge Networks: Relating Inputs through Vector-Symbolic Manipulations

Wilkie Olin-Ammentorp and Maxim Bazhenov
International Conference on Neuromorphic Systems, 2021
Despite rapid progress, current deep learning methods face a number of critical challenges, including high energy consumption, catastrophic forgetting, dependence on global losses, and an inability to reason symbolically. By combining concepts from information theory and vector-symbolic architectures, we propose and implement a novel information processing architecture, the 'Bridge network.' We show this architecture provides unique advantages which can address the problem of global losses…
1 Citation

Deep Phasor Networks: Connecting Conventional and Spiking Neural Networks
This work extends standard neural networks by assuming that neuronal activations correspond to the angle of a complex number lying on the unit circle, or 'phasor.' It demonstrates the atemporal training of a phasor network on standard deep learning tasks and shows that these networks can be executed in either the traditional atemporal domain or a spiking, temporal domain with no conversion step needed.
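As an illustrative sketch only (not the paper's implementation), the phasor idea can be mocked up in NumPy: activations are angles carried as complex numbers on the unit circle, and a layer reads its output back out as an angle.

```python
import numpy as np

# Sketch of a "phasor" layer: each activation is an angle in (-pi, pi],
# lifted onto the unit circle as a complex number; a layer forms a
# weighted superposition of input phasors and outputs the resulting angle.
# The function name and shapes here are hypothetical, for illustration.
def phasor_layer(angles, weights):
    z = np.exp(1j * angles)   # angles -> points on the unit circle
    summed = weights @ z      # real-weighted complex superposition
    return np.angle(summed)   # read the activation back out as an angle

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=8)  # input activations as angles
W = rng.normal(size=(4, 8))             # ordinary real-valued weights
out = phasor_layer(x, W)
print(out.shape)  # (4,)
```

Because every intermediate value is just an angle, the same weights could in principle drive either a conventional (atemporal) pass, as here, or a spiking implementation in which angles are encoded as spike timing.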

References

Measuring Catastrophic Forgetting in Neural Networks
Introduces new metrics and benchmarks for directly comparing five different mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal, dual-memory, and sparse coding.
Brain-inspired replay for continual learning with artificial neural networks
Proposes a replay-based algorithm for deep learning that does not require storing data: internal or hidden representations are replayed, generated by the network's own context-modulated feedback connections. This also provides a novel model for replay in the brain.
Deep Learning for Classical Japanese Literature
This work introduces Kuzushiji-MNIST, a dataset which focuses on Kuzushiji (cursive Japanese), as well as two larger, more challenging datasets, Kuzushiji-49 and Kuzushiji-Kanji, which are intended to engage the machine learning community with the world of classical Japanese literature.
Continual Lifelong Learning with Neural Networks: A Review
This review critically summarizes the main challenges linked to lifelong learning for artificial learning systems and compares existing neural network approaches that alleviate, to different extents, catastrophic forgetting.
Resonator networks for factoring distributed representations of data structures
This work proposes an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure, the factorization of products of multiple code vectors, through a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion.
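A minimal sketch of the resonator idea described above (illustrative only, not the authors' code): given a composite vector formed by elementwise (Hadamard) binding of bipolar codewords, each factor estimate is refreshed by unbinding with the other estimate and cleaning up through its codebook.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 1024, 3
X = rng.choice([-1, 1], size=(k, d))  # codebook for factor x
Y = rng.choice([-1, 1], size=(k, d))  # codebook for factor y
s = X[1] * Y[2]                       # composite vector to be factored

# Initialize each estimate as the superposition of all codewords, then
# iterate: unbind with the other factor's estimate, project through the
# codebook (cleanup), and re-binarize.
x_hat = np.sign(X.sum(axis=0))
y_hat = np.sign(Y.sum(axis=0))
for _ in range(20):
    x_hat = np.sign(X.T @ (X @ (s * y_hat)))
    y_hat = np.sign(Y.T @ (Y @ (s * x_hat)))

# Indices of the recovered factors
print(int(np.argmax(X @ x_hat)), int(np.argmax(Y @ y_hat)))
```

With a high-dimensional code and small codebooks, the iteration typically locks onto the correct pair of factors within a few steps, rather than testing every combination explicitly.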
Neurosymbolic AI: The 3rd Wave
The insights provided by 20 years of neural-symbolic computing are shown to shed new light on the increasingly prominent role of trust, safety, interpretability, and accountability in AI.
Soft-Label Dataset Distillation and Text Dataset Distillation
This work proposes to simultaneously distill both images and their labels, assigning each synthetic sample a 'soft' label (a distribution over labels), and demonstrates that text distillation outperforms other methods across multiple datasets.
'Less Than One'-Shot Learning: Learning N Classes From M < N Samples
A soft-label generalization of the k-Nearest Neighbors classifier is used to explore the intricate decision landscapes that can be created in the 'less than one'-shot learning setting, and theoretical lower bounds are derived for separating classes using hard-label samples.
Robust computation with rhythmic spike patterns
The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
Holographic reduced representations
T. Plate, IEEE Trans. Neural Networks, 1995
This paper describes a method for representing more complex compositional structure in distributed representations, which uses circular convolution to associate items represented by vectors.
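The circular-convolution binding at the heart of holographic reduced representations can be sketched in a few lines of NumPy (an illustration of the general scheme, not Plate's original code): binding is circular convolution, computed via the FFT, and unbinding uses the index-reversed approximate inverse.

```python
import numpy as np

# Bind two vectors by circular convolution (via FFT for efficiency).
def bind(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Approximate inverse under circular convolution: a_inv[i] = a[(-i) mod d].
def approx_inverse(a):
    return np.roll(a[::-1], 1)

d = 1024
rng = np.random.default_rng(1)
role = rng.normal(0, 1 / np.sqrt(d), d)    # random high-dimensional codes
filler = rng.normal(0, 1 / np.sqrt(d), d)

trace = bind(role, filler)                     # associate role with filler
recovered = bind(trace, approx_inverse(role))  # noisy copy of the filler

# The recovered vector correlates strongly with the original filler,
# so a cleanup memory can identify it among stored items.
sim = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(sim > 0.5)
```

The decoded vector is only an approximation, which is why HRR systems pair unbinding with a cleanup memory that snaps the noisy result back to the nearest stored item.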