Dataflow matrix machines as programmable, dynamically expandable, self-referential generalized recurrent neural networks
@article{Bukatin2016DataflowMM,
  title={Dataflow matrix machines as programmable, dynamically expandable, self-referential generalized recurrent neural networks},
  author={Michael A. Bukatin and Steve Matthews and Andrey Radul},
  journal={ArXiv},
  year={2016},
  volume={abs/1605.05296}
}
Dataflow matrix machines are a powerful generalization of recurrent neural networks. They work with multiple types of linear streams and multiple types of neurons, including higher-order neurons which dynamically update the matrix describing weights and topology of the network in question while the network is running. It seems that the power of dataflow matrix machines is sufficient for them to be a convenient general purpose programming platform. This paper explores a number of useful…
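To make the abstract's picture concrete, here is a minimal Python sketch (not taken from the paper) of a network whose weight matrix is rewritten by one of its own higher-order neurons while the network runs. The update rule in `self_neuron` is an illustrative toy, and all names are assumptions.

```python
# A toy DMM-style update loop (illustrative sketch, not the paper's code).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

n = 3                       # number of ordinary (scalar-stream) neurons
W = 0.5 * np.eye(n)         # network matrix: weights and topology in one object
y = np.ones(n)              # current outputs of the scalar neurons

def self_neuron(W, y):
    # Higher-order neuron: its output stream carries the *next* network
    # matrix. This Hebbian-flavored rule is purely illustrative.
    return W + 0.01 * np.outer(y, y)

for step in range(5):
    x = W @ y               # "down" stroke: linear combinations of streams
    y = relu(x)             # "up" stroke: each neuron's built-in transform
    W = self_neuron(W, y)   # the matrix is updated while the network runs
```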
4 Citations
Programming Patterns in Dataflow Matrix Machines and Generalized Recurrent Neural Nets
- Computer Science, ArXiv
- 2016
This paper explores a variety of programming patterns in dataflow matrix machines that correspond to patterns of connectivity in the generalized recurrent neural networks understood as programs.
Notes on Pure Dataflow Matrix Machines: Programming with Self-referential Matrix Transformations
- Computer Science, ArXiv
- 2016
A discipline of programming with only one kind of stream, namely streams of appropriately shaped matrices capable of defining a pure dataflow matrix machine, is proposed.
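As an illustration of this discipline, the following Python sketch keeps exactly one kind of stream, streams of fixed-shape square matrices, and reads the wiring weights off the master matrix, which is itself one of the streams. The shapes and the `mix` helper are illustrative assumptions, not the paper's construction.

```python
# A toy "pure" DMM: every stream, including the master stream, is a
# stream of k-by-k matrices (illustrative sketch only).
import numpy as np

k = 2                                  # matrix shape; also the stream count here
streams = [np.array([[0.9, 0.1],
                     [0.1, 0.9]]),     # stream 0 doubles as the master matrix
           0.1 * np.ones((k, k))]

def mix(streams, weights):
    # The only wiring operation: a linear combination of matrix streams.
    return sum(w * s for w, s in zip(streams, weights))

for _ in range(3):
    master = streams[0]
    # Each stream's next value is a linear combination of all streams,
    # with coefficients read off a row of the master matrix itself.
    streams = [mix(streams, master[i]) for i in range(len(streams))]
    print(streams[0])
```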
Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets
- Computer Science, ArXiv
- 2017
A compact and streamlined version of dataflow matrix machines based on a single space of vector-like elements and variadic neurons is presented; the elements of this space are called V-values and are sufficiently expressive to cover all cases of interest the authors are currently aware of.
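A rough Python sketch of the V-value idea as described here: finite prefix trees with numerical leaves, encoded as nested dictionaries, with addition and scalar multiplication acting leaf-wise (a missing subtree counts as zero). The encoding and helper names are assumptions for illustration.

```python
# V-values as nested dicts: finite prefix trees with numerical leaves.
# Leaf-wise addition and scaling make the space vector-like.

def v_add(a, b):
    if a is None:
        return b
    if b is None:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        return {k: v_add(a.get(k), b.get(k)) for k in set(a) | set(b)}
    return a + b            # both are numerical leaves

def v_scale(c, a):
    if isinstance(a, dict):
        return {k: v_scale(c, v) for k, v in a.items()}
    return c * a

u = {"x": 1.0, "nested": {"y": 2.0}}
v = {"x": 0.5, "nested": {"y": -1.0, "z": 3.0}}
print(v_add(u, v_scale(2.0, v)))
# -> x: 2.0, nested: {y: 0.0, z: 6.0}  (key order may vary)
```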
Dataflow Matrix Machines as a Model of Computations with Linear Streams
- Computer Science, ArXiv
- 2017
We overview dataflow matrix machines as a Turing complete generalization of recurrent neural networks and as a programming platform. We describe the vector space of finite prefix trees with numerical…
8 References
Symbolic Processing in Neural Networks
- Computer Science, J. Braz. Comput. Soc.
- 2003
It is shown how to use resource bounds to speed up computations over neural nets, through suitable data type coding as in conventional programming languages.
Foundations of recurrent neural networks
- Computer Science
- 1993
This dissertation focuses on the "recurrent network" model, in which the underlying graph is not subject to any constraints, and establishes a precise correspondence between the mathematical and computing choices.
A ‘Self-Referential’ Weight Matrix
- Computer Science
- 1993
An initial gradient-based sequence learning algorithm is derived for a ‘self-referential’ recurrent network that can ‘speak’ about its own weight matrix in terms of activations; this is the first ‘introspective’ neural net with explicit potential control over all of its own adaptive parameters.
Learning to Learn Using Gradient Descent
- Computer Science, ICANN
- 2001
This paper makes meta-learning in large systems feasible by using recurrent neural networks with attendant learning routines as meta-learning systems, and demonstrates the approach on non-stationary time series prediction.
Linear Models of Computation and Program Learning
- Computer Science, GCAI
- 2015
We consider two classes of computations which admit taking linear combinations of execution runs: probabilistic sampling and generalized animation. We argue that the task of program learning should…
Designing Sound
- Education
- 2008
The thesis is that any sound can be generated from first principles, guided by analysis and synthesis, and readers use the Pure Data (Pd) language to construct sound objects, which are more flexible and useful than recordings.
Advances in dataflow programming languages
- Computer Science, CSUR
- 2004
How dataflow programming evolved toward a hybrid von Neumann dataflow formulation and adopted a more coarse-grained approach is discussed.
Neural Programmer-Interpreters
- Preprint
- 2015