Corpus ID: 9682622

Dataflow matrix machines as programmable, dynamically expandable, self-referential generalized recurrent neural networks

Michael A. Bukatin, Steve Matthews, Andrey Radul
Dataflow matrix machines are a powerful generalization of recurrent neural networks. They work with multiple types of linear streams and multiple types of neurons, including higher-order neurons which dynamically update the matrix describing weights and topology of the network in question while the network is running. It seems that the power of dataflow matrix machines is sufficient for them to be a convenient general purpose programming platform. This paper explores a number of useful… 
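The abstract's key mechanism is a higher-order neuron that rewrites the matrix defining the network's own weights and topology while the network runs. The following is a minimal illustrative sketch of that idea, not code from the paper; the function names, the two-phase step, and the Hebbian-style update rule are my own assumptions for exposition.

```python
import numpy as np

def dmm_step(W, x, update_rule):
    """One sketch of a dataflow-matrix-machine-style step.

    First, each neuron's input is a linear combination of the current
    outputs, given by the matrix W. Then ordinary neurons apply their
    activation, while a hypothetical higher-order neuron applies
    `update_rule` to produce a new weight matrix instead of a scalar,
    so the network edits its own connectivity as it runs.
    """
    inputs = W @ x                 # linear part: mix the streams via W
    x_new = np.tanh(inputs)        # ordinary neurons fire
    W_new = update_rule(W, x_new)  # higher-order neuron rewrites W itself
    return W_new, x_new

def hebbian_edit(W, x, lr=0.01):
    """Made-up update rule: strengthen co-active connections."""
    return W + lr * np.outer(x, x)

W = np.eye(3) * 0.5
x = np.array([1.0, -0.5, 0.25])
for _ in range(3):
    W, x = dmm_step(W, x, hebbian_edit)
```

Because the matrix is itself data flowing through the machine, changing `update_rule` changes what the network computes, which is the sense in which such machines behave like a programming platform.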

Programming Patterns in Dataflow Matrix Machines and Generalized Recurrent Neural Nets

This paper explores a variety of programming patterns in dataflow matrix machines that correspond to patterns of connectivity in the generalized recurrent neural networks understood as programs.

Notes on Pure Dataflow Matrix Machines: Programming with Self-referential Matrix Transformations

A discipline of programming with only one kind of stream, namely streams of appropriately shaped matrices capable of defining a pure dataflow matrix machine, is proposed.

Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets

A compact and streamlined version of dataflow matrix machines is presented, based on a single space of vector-like elements and variadic neurons; the elements of this space are called V-values and are sufficiently expressive to cover all cases of interest currently known.
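The V-values above, and the finite prefix trees with numerical leaves mentioned in the next summary, can be pictured as nested dictionaries with numbers at the leaves. The sketch below is my own illustration of why such trees form a vector space (addition and scaling work leaf-wise); the representation and helper names are assumptions, not the papers' definitions.

```python
def v_add(a, b):
    """Leaf-wise sum of two prefix trees; a missing branch counts as zero.

    Trees are nested dicts keyed by strings, with int/float leaves.
    The case of a leaf meeting a non-empty subtree is out of scope here.
    """
    if isinstance(a, (int, float)) or isinstance(b, (int, float)):
        return (a if isinstance(a, (int, float)) else 0) + \
               (b if isinstance(b, (int, float)) else 0)
    keys = set(a) | set(b)
    return {k: v_add(a.get(k, {}), b.get(k, {})) for k in keys}

def v_scale(c, t):
    """Multiply every numerical leaf of tree t by the scalar c."""
    if isinstance(t, (int, float)):
        return c * t
    return {k: v_scale(c, v) for k, v in t.items()}

u = {"x": 1.0, "nested": {"y": 2.0}}
v = {"x": 0.5, "nested": {"z": 3.0}}
w = v_add(u, v_scale(2.0, v))
# w == {"x": 2.0, "nested": {"y": 2.0, "z": 6.0}}
```

Linear combinations of such trees are what lets matrix machinery apply to structured, program-like data rather than only to flat numeric vectors.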

Dataflow Matrix Machines as a Model of Computations with Linear Streams

We overview dataflow matrix machines as a Turing-complete generalization of recurrent neural networks and as a programming platform. We describe the vector space of finite prefix trees with numerical…

Symbolic Processing in Neural Networks

It is shown how to use resource bounds to speed up computations over neural nets, through suitable data type coding, as in conventional programming languages.

Foundations of recurrent neural networks

This dissertation focuses on the "recurrent network" model, in which the underlying graph is not subject to any constraints, and establishes a precise correspondence between the mathematical and computing choices.

A ‘Self-Referential’ Weight Matrix

An initial gradient-based sequence learning algorithm is derived for a ‘self-referential’ recurrent network that can ‘speak’ about its own weight matrix in terms of activations; it is the first ‘introspective’ neural net with explicit potential control over all of its own adaptive parameters.

Learning to Learn Using Gradient Descent

This paper makes meta-learning in large systems feasible by using recurrent neural networks with attendant learning routines as meta-learning systems, and demonstrates the approach on non-stationary time series prediction.

Linear Models of Computation and Program Learning

We consider two classes of computations which admit taking linear combinations of execution runs: probabilistic sampling and generalized animation. We argue that the task of program learning should…

Designing Sound

The thesis is that any sound can be generated from first principles, guided by analysis and synthesis, and readers use the Pure Data (Pd) language to construct sound objects, which are more flexible and useful than recordings.

Advances in dataflow programming languages

This paper discusses how dataflow programming evolved toward a hybrid von Neumann dataflow formulation and adopted a more coarse-grained approach.

Neural Programmer-Interpreters

  • Preprint (2015)