A modular architecture for transparent computation in Recurrent Neural Networks

@article{Carmantini2017AMA,
  title={A modular architecture for transparent computation in Recurrent Neural Networks},
  author={Giovanni Sirio Carmantini and Peter beim Graben and Mathieu Desroches and Serafim Rodrigues},
  journal={Neural networks : the official journal of the International Neural Network Society},
  year={2017},
  volume={85},
  pages={85-105}
}
Feynman Machine: The Universal Dynamical Systems Computer
TLDR
It is demonstrated that networks and hierarchies of simple interacting Dynamical Systems, each adaptively learning to forecast its evolution, are capable of automatically building sensorimotor models of the external and internal world.
Incremental parsing in a continuous dynamical system: sentence processing in Gradient Symbolic Computation
TLDR
A Gradient Symbolic Computation parser is introduced, a continuous-state, continuous-time stochastic dynamical-system model of symbolic processing, which builds up a discrete symbolic structure gradually by dynamically strengthening a discreteness constraint.
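A minimal sketch of the idea (illustrative only, not the GSC parser's actual equations): a scalar blend state follows a harmony-like gradient while a quantization term, whose weight q(t) grows over time, drives the state toward the discrete values 0 or 1; the constants and noise level below are assumptions.

import numpy as np

def gsc_toy(steps=5000, dt=1e-3, noise=0.02, seed=0):
    """Toy gradient dynamics with a discreteness constraint whose
    strength q(t) is ramped up over time (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = 0.4                                   # blend between symbols 0 and 1
    for t in range(steps):
        q = t / steps                         # discreteness pressure grows with time
        grad_harmony = 0.2 * (1.0 - x)        # mild preference for symbol 1
        grad_quant = -q * 2.0 * x * (1.0 - x) * (1.0 - 2.0 * x)  # descends x^2 (1-x)^2
        x += dt * (grad_harmony + grad_quant) + np.sqrt(dt) * noise * rng.standard_normal()
        x = float(np.clip(x, 0.0, 1.0))
    return x                                  # ends near a discrete value (here, 1.0)

print(gsc_toy())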
Vector symbolic architectures for context-free grammars
TLDR
This work presents a rigorous mathematical framework for the representation of phrase structure trees and parse trees of context-free grammars (CFG) in Fock space, i.e. the infinite-dimensional Hilbert space used in quantum field theory.
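The filler/role tensor-product idea behind such representations can be sketched in a few lines; the symbol vectors and the tiny tree below are hypothetical, and finite numpy arrays stand in for the (in principle infinite-dimensional) Fock space.

import numpy as np

rng = np.random.default_rng(1)
fillers = {s: rng.standard_normal(4) for s in ("S", "NP", "VP")}   # hypothetical symbol vectors
r_left, r_right = np.array([1.0, 0.0]), np.array([0.0, 1.0])       # orthonormal role vectors

def bind(filler, role):
    """Tensor-product binding of a filler vector with a role vector."""
    return np.outer(filler, role)

# Encode the depth-one tree [S -> NP VP]: tensor orders are kept in separate slots.
tree = {
    1: fillers["S"],                                                # root (order 1)
    2: bind(fillers["NP"], r_left) + bind(fillers["VP"], r_right),  # children (order 2)
}

# Unbinding: contracting with a role vector recovers the corresponding child filler.
left_child = tree[2] @ r_left
print(np.allclose(left_child, fillers["NP"]))   # True, since the roles are orthonormal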
Observability of Automata Networks: Fixed and Switching Cases
TLDR
This brief addresses the observability of automata networks and switched automata networks in a unified framework, and proposes simple necessary and sufficient conditions for observability.
Improving Neural Models of Language with Input-Output Tensor Contexts
TLDR
The formal properties of tensor contextualization are presented and possible ways to use contexts to represent plausible neural organizations of sequences of words are described, including an illustration of how these contexts generate topographic or thematic organization of data.
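A minimal sketch of input-output tensor contexts, assuming one-hot toy codes (the vocabulary, contexts and role labels below are made up for illustration): a matrix memory associates the Kronecker product of a context vector and an input vector with an output vector, so the same word is routed differently in different contexts.

import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

n_in, n_ctx, n_out = 3, 2, 3
words = {"dog": one_hot(0, n_in), "bites": one_hot(1, n_in), "man": one_hot(2, n_in)}
contexts = {"subject": one_hot(0, n_ctx), "object": one_hot(1, n_ctx)}
roles = {"agent": one_hot(0, n_out), "patient": one_hot(2, n_out)}

# Context-dependent matrix memory: M maps (context ⊗ word) to a role code.
M = np.zeros((n_out, n_ctx * n_in))
for ctx, word, role in [("subject", "dog", "agent"), ("object", "man", "patient")]:
    M += np.outer(roles[role], np.kron(contexts[ctx], words[word]))

print(M @ np.kron(contexts["subject"], words["dog"]))   # recalls the agent code
print(M @ np.kron(contexts["object"], words["dog"]))    # unstored pair: near zero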
A journey in ESN and LSTM visualisations on a language task
TLDR
Despite the deep differences between both models, the ESN was able to outperform LSTMs on the more challenging datasets without any further tuning, and both models put emphasis on the units encoding aspects of the sentence structure.
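A minimal echo state network along these lines, assuming a fixed random reservoir and a ridge-regression readout; the reservoir size, spectral radius and regularisation are illustrative choices, not the paper's settings.

import numpy as np

def esn_fit_predict(u_train, y_train, u_test, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    """Minimal ESN: fixed random reservoir, only the linear readout is trained."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))            # rescale spectral radius

    def run(u):
        x, states = np.zeros(n_res), []
        for ut in u:
            x = np.tanh(W_in @ np.array([ut]) + W @ x)   # leakless reservoir update
            states.append(x.copy())
        return np.array(states)

    X = run(u_train)
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_train)
    return run(u_test) @ W_out

# Usage: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
pred = esn_fit_predict(u[:1500], y[:1500], u[1500:])
print(np.mean((pred - y[1500:]) ** 2))                   # small test error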
The Landscape of AI Safety and Beneficence Research: Input for Brainstorming at Beneficial AI 2017
TLDR
This chapter discusses how to understand, theoretically and practically, how learned representations of high-level human concepts could be expected to generalize, or fail to do so, in radically new contexts.
Trainable Neural Networks Modelling for a Forecasting of Start-Up Product Development
TLDR
This article proposes using trainable neural networks as a mechanism for processing big data sets and building IT product development strategies, by constructing a linear-regression model implemented with a gradient-optimisation approach.
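A short sketch of the model described here, i.e. a linear regression fitted with gradient optimisation; the synthetic data and learning rate are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)                            # synthetic inputs (assumed)
y = 3.0 * x + 2.0 + 0.1 * rng.standard_normal(200)    # true slope 3, intercept 2, plus noise

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    y_hat = w * x + b
    grad_w = 2.0 * np.mean((y_hat - y) * x)           # d(MSE)/dw
    grad_b = 2.0 * np.mean(y_hat - y)                 # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))                       # close to the true parameters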
An Accurate PSO-GA Based Neural Network to Model Growth of Carbon Nanotubes
TLDR
The results show that PSOGANN can be successfully utilized for modeling the experimental parameters that are critical for the growth of CNTs.
Methods
A Summary of Relevant Results from Applied Mathematics
  • 2016

References

Showing 1-10 of 118 references
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
TLDR
A new computational model for real-time computing on time-varying input is presented that provides an alternative to paradigms based on Turing machines or attractor neural networks; it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
Turing Computation with Recurrent Artificial Neural Networks
TLDR
This work provides a novel and parsimonious constructive mapping between Turing Machines and Recurrent Artificial Neural Networks, based on recent developments of Nonlinear Dynamical Automata, and provides a framework to directly program the R-ANNs from Turing Machine descriptions, in the absence of network training.
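The Gödelization step underlying Nonlinear Dynamical Automata can be sketched as follows: the tape halves to the left and right of the head are encoded as two numbers in [0, 1), so each machine configuration becomes a point of the unit square, and tape shifts act on it as affine maps. The alphabet and tape below are toy assumptions, not the paper's exact construction.

def godelize(symbols, alphabet):
    """Encode a symbol sequence as a number in [0, 1): the k-th symbol
    contributes its index divided by base**(k+1)."""
    base = len(alphabet)
    index = {s: i for i, s in enumerate(alphabet)}
    return sum(index[s] / base ** (k + 1) for k, s in enumerate(symbols))

alphabet = ["_", "0", "1"]        # blank plus two tape symbols (toy choice)
left = ["0", "1"]                 # left half of the tape, symbol next to the head first
right = ["1", "1", "_"]           # right half, symbol under the head first

x, y = godelize(left, alphabet), godelize(right, alphabet)
print(x, y)                       # one point of the unit square per configuration

# Shifting the head one cell to the right is an affine map on (x, y):
# x is contracted by the base, y is expanded by it.
base = len(alphabet)
head_index = alphabet.index(right[0])
x_new = x / base + head_index / base
y_new = y * base - head_index
print(x_new, y_new)               # agrees (up to float rounding) with re-encoding the shifted tape
print(godelize([right[0]] + left, alphabet), godelize(right[1:], alphabet))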
The Computational Power of Interactive Recurrent Neural Networks
TLDR
It is proved that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, endowing them with super-Turing capabilities.
Fractal encoding of context-free grammars in connectionist networks
  • W. Tabor, Expert Syst. J. Knowl. Eng., 2000
TLDR
This work presents a widely applicable method of using fractal sets to organize infinite-state computations in a bounded state space, and suggests that such a global perspective on the organization of the parameter space may be helpful for solving the hard problem of getting connectionist networks to learn complex grammars from examples.
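The organization of infinite-state computation in a bounded state space can be illustrated with a pushdown stack packed into the unit interval: each push is a contraction into a sub-interval, so the reachable states form a fractal, Cantor-like set, and each pop inverts the contraction. The symbol codes below are toy assumptions, not Tabor's construction.

BASE = 4                               # three stack symbols plus an empty marker (toy)
SYMBOLS = {"(": 1, "[": 2, "{": 3}
INVERSE = {v: k for k, v in SYMBOLS.items()}

def push(x, symbol):
    """Contract the current state into the sub-interval owned by `symbol`."""
    return (x + SYMBOLS[symbol]) / BASE

def pop(x):
    """Invert the contraction: recover the top symbol and the previous state."""
    digit = int(x * BASE)
    return INVERSE[digit], x * BASE - digit

x = 0.0                                # empty stack
for s in "([{":
    x = push(x, s)
print(x)                               # a point nested inside three sub-intervals

top, x = pop(x)
print(top, x)                          # '{' and the state that encodes "(["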
The Super-Turing Computational Power of Interactive Evolving Recurrent Neural Networks
TLDR
It is proved that the so-called interactive evolving recurrent neural networks are computationally equivalent to interactive Turing machines with advice, hence capable of super-Turing potentialities, and a precise characterisation of the ω-translations realised by these networks is given.
Syntactic sequencing in Hebbian cell assemblies
TLDR
This work studies the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals, enabling an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules.
Recursion and Recursion- Like Structure in Ensembles of Neural Elements
TLDR
Fractal Learning Neural Networks (FLNNs) are introduced, showing that they can learn some exponential-state-growth languages with high accuracy, and clarifying the relationship between their imperfect, but nevertheless structurally insightful, neural recursive encoding and the perfect recursive encodings of symbolic devices.
Fractal Analysis Illuminates the Form of Connectionist Structural Gradualness
TLDR
The fractal analysis of these more complex learning cases reveals the possibility of comparing connectionist networks and symbolic models of grammatical structure in a principled way, and the findings indicate the value of future, linked mathematical and empirical work on these models.
The expressive power of analog recurrent neural networks on infinite input streams
Language Processing by Dynamical Systems
TLDR
It is argued that ERP components are indicators of bifurcations in the parsing dynamics, which can be modeled by switching a control parameter in analogy to phase transitions observed in brain dynamics, and an ERP-like measure of the parsing model is proposed.