Dataflow Matrix Machines as a Model of Computations with Linear Streams

@article{Bukatin2017DataflowMM,
  title={Dataflow Matrix Machines as a Model of Computations with Linear Streams},
  author={Michael A. Bukatin and Jon Anthony},
  journal={ArXiv},
  year={2017},
  volume={abs/1706.00648}
}
We overview dataflow matrix machines as a Turing-complete generalization of recurrent neural networks and as a programming platform. We describe the vector space of finite prefix trees with numerical leaves, which allows us to combine the expressive power of dataflow matrix machines with the simplicity of traditional recurrent neural networks.
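The two ingredients named in the abstract, a generalized recurrent network operating on linear streams and the vector space of finite prefix trees with numerical leaves, can be illustrated with a small sketch. The Python below is an illustration under stated assumptions, not code from the paper: prefix trees are represented as nested dicts with floats at the leaves, tree_add and tree_scale provide the vector space operations, and dmm_step runs a simplified two-stroke update in which each neuron's next input is a linear combination of the current outputs; the names tree_add, tree_scale, and dmm_step are invented for this example.

# A minimal illustrative sketch (not code from the paper), assuming prefix
# trees with numerical leaves are represented as nested dicts whose leaves
# are floats; missing branches count as zero.

def tree_add(t1, t2):
    """Vector addition: add leaves pointwise, recursing over the union of labels."""
    if isinstance(t1, dict) or isinstance(t2, dict):
        a = t1 if isinstance(t1, dict) else {}
        b = t2 if isinstance(t2, dict) else {}
        return {k: tree_add(a.get(k, 0.0), b.get(k, 0.0)) for k in set(a) | set(b)}
    return t1 + t2

def tree_scale(c, t):
    """Scalar multiplication: multiply every numerical leaf by c."""
    if isinstance(t, dict):
        return {k: tree_scale(c, v) for k, v in t.items()}
    return c * t

def dmm_step(weights, activations, outputs):
    """One simplified two-stroke cycle (single-input, single-output neurons):
    the 'down' move forms each neuron's input as a linear combination of the
    current outputs using the weight matrix, and the 'up' move applies each
    neuron's built-in transformation to its input tree."""
    n = len(activations)
    inputs = []
    for i in range(n):
        acc = 0.0
        for j in range(n):
            acc = tree_add(acc, tree_scale(weights[i][j], outputs[j]))
        inputs.append(acc)
    return [activations[i](inputs[i]) for i in range(n)]

# Toy usage: an identity neuron and a damping neuron swapping their streams.
activations = [lambda t: t, lambda t: tree_scale(0.5, t)]
weights = [[0.0, 1.0],
           [1.0, 0.0]]
streams = [{"x": 1.0, "deeper": {"y": 2.0}}, {"x": 0.0}]
for _ in range(3):
    streams = dmm_step(weights, activations, streams)
print(streams)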

Citations

Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets

A compact and streamlined version of dataflow matrix machines based on a single space of vector-like elements and variadic neurons is presented; the elements of this space, called V-values, are sufficiently expressive to cover all cases of interest the authors are currently aware of.
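As a hedged illustration of the phrase "a single space of vector-like elements and variadic neurons" (the representation and names below are assumptions for this example, not code from that paper), a neuron whose one input is a tree-shaped element can treat the top-level keys of that input as arbitrarily many named arguments:

# Hedged illustration (names invented here): with one space of tree-shaped
# values, a neuron becomes variadic by reading the top-level keys of its
# single input as named arguments; leaves are numbers.

def tree_sum(trees):
    """Pointwise sum of nested-dict trees with numbers at the leaves."""
    result = {}
    for t in trees:
        for key, value in t.items():
            if isinstance(value, dict):
                result[key] = tree_sum([result.get(key, {}), value])
            else:
                result[key] = result.get(key, 0.0) + value
    return result

def variadic_sum_neuron(v_value):
    """Sum however many arguments appear under the input's top-level keys."""
    return tree_sum(list(v_value.values()))

print(variadic_sum_neuron({"arg1": {"x": 1.0},
                           "arg2": {"x": 2.0, "nested": {"y": 1.0}},
                           "arg3": {"x": 0.5}}))
# -> {'x': 3.5, 'nested': {'y': 1.0}}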

References

Showing 1-10 of 17 references

Dataflow matrix machines as programmable, dynamically expandable, self-referential generalized recurrent neural networks

A number of useful programming idioms and constructions arising from dataflow matrix machines, a powerful generalization of recurrent neural networks, are explored.

Notes on Pure Dataflow Matrix Machines: Programming with Self-referential Matrix Transformations

A discipline of programming with only one kind of stream, namely streams of appropriately shaped matrices capable of defining a pure dataflow matrix machine, is proposed; a toy sketch of this self-referential setup appears after the reference list.

Neural Turing Machines

The combined system is analogous to a Turing machine or von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.

On the computational power of neural nets

It is proved that one may simulate all Turing Machines by rational nets in linear time, and there is a net made up of about 1,000 processors which computes a universal partial-recursive function.

Memory Networks

This work describes a new class of learning models called memory networks, which reason with inference components combined with a long-term memory component; they learn how to use these jointly.

Differentiable Functional Program Interpreters

This work studies the modeling choices that arise when constructing a differentiable programming language and their impact on the success of synthesis, and shows that incorporating functional programming ideas into differentiable programming languages allows learning much more complex programs than is possible with existing differentiable languages.

Evolving Deep Neural Networks

A ‘Self-Referential’ Weight Matrix

An initial gradient-based sequence learning algorithm is derived for a ‘self-referential’ recurrent network that can ‘speak’ about its own weight matrix in terms of activations, and is the first ‘introspective’ neural net with explicit potential control over all of its own adaptive parameters.

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

Successful transfer learning is demonstrated: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning.

Bayesian Sketch Learning for Program Synthesis

A Bayesian statistical approach to the problem of automatic program synthesis that explicitly models the full intent behind a synthesis task as a latent variable and can be implemented effectively using the new neural architecture of Bayesian encoder-decoders.
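Referring back to the "Notes on Pure Dataflow Matrix Machines" entry above: the following toy Python sketch is an assumption-laden illustration rather than the authors' code (the use of numpy, the function name pure_dmm_run, and the choice to read the network matrix from stream 0 are all introduced here). It shows the idea of streams that carry only matrices, with the matrix wiring the network on each cycle taken from one of those streams.

# Toy illustration (not the authors' code): every stream carries an n-by-n
# matrix, and the matrix defining the network's connections on each cycle is
# read from the output of a dedicated self-referential stream (index 0 here).
import numpy as np

def pure_dmm_run(initial_matrix, activations, steps):
    """outputs[k] is the matrix currently emitted by neuron k.
    On each cycle: read the network matrix from stream 0, form each neuron's
    input as a weighted sum of all output matrices, then apply activations."""
    n = len(activations)
    outputs = [initial_matrix.copy() for _ in range(n)]
    for _ in range(steps):
        weights = outputs[0]  # the machine reads its own defining matrix
        inputs = [sum(weights[i, j] * outputs[j] for j in range(n))
                  for i in range(n)]
        outputs = [activations[i](inputs[i]) for i in range(n)]
    return outputs

# Toy usage: neuron 0 slowly nudges the machine's matrix toward the identity,
# so the connection pattern itself changes from cycle to cycle.
n = 2
activations = [lambda m: 0.99 * m + 0.01 * np.eye(n),
               lambda m: 0.9 * m]
start = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
print(pure_dmm_run(start, activations, steps=3)[0])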