Corpus ID: 239998734

Enhancing Reinforcement Learning with discrete interfaces to learn the Dyck Language

@article{Dietz2021EnhancingRL,
  title={Enhancing Reinforcement Learning with discrete interfaces to learn the Dyck Language},
  author={Florian Dietz and Dietrich Klakow},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14350}
}
Even though most interfaces in the real world are discrete, there is not yet an efficient way to train neural networks to make use of them. We enhance an Interaction Network (a Reinforcement Learning architecture) with discrete interfaces and train it on the generalized Dyck language. Solving this task requires an understanding of hierarchical structure, and it has long proven difficult for neural networks. We provide the first solution based on learning to use discrete data structures. We…
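The generalized Dyck languages mentioned in the abstract are the languages of balanced strings over several distinct bracket pairs, and a single discrete stack is enough to decide membership. As a minimal sketch of the task itself (the bracket inventory and the function name below are illustrative assumptions, not the discrete interface defined in the paper), a recognizer reads one symbol at a time, pushing on opening brackets and popping on matching closing brackets:

```python
# Minimal sketch of a generalized Dyck recognizer driven by an explicit stack.
# The bracket inventory and function name are illustrative assumptions,
# not the interface used in the paper.

PAIRS = {")": "(", "]": "[", "}": "{"}   # closing bracket -> matching opening bracket
OPENING = set(PAIRS.values())

def is_dyck(word: str) -> bool:
    """Return True iff `word` is balanced over the bracket pairs above."""
    stack = []
    for symbol in word:
        if symbol in OPENING:
            stack.append(symbol)                  # discrete "push" action
        elif symbol in PAIRS:
            if not stack or stack.pop() != PAIRS[symbol]:
                return False                      # discrete "pop" finds no match
        else:
            return False                          # symbol outside the alphabet
    return not stack                              # accept only with an empty stack

assert is_dyck("([]{()})") and not is_dyck("([)]")
```

The point of the sketch is that deciding membership only ever requires access to the top of a stack at each step; learning to operate such a discrete data structure is what the abstract describes as the basis of the solution.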

References

Showing 1-10 of 17 references.
Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages
This work provides the first demonstration of neural networks recognizing the generalized Dyck languages, which express the core of what it means to be a language with hierarchical structure.
Hybrid computing using a neural network with dynamic external memory
Introduces a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer.
Curriculum learning
It is hypothesized that curriculum learning affects both the speed of convergence of the training process to a minimum and the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for the global optimization of non-convex functions).
Interaction Networks: Using a Reinforcement Learner to train other Machine Learning algorithms
In this paper, thought experiments are used to explore how the additional abilities of Interaction Networks could be used to improve various existing types of neural networks.
Learning to Transduce with Unbounded Memory
This paper proposes new memory-based recurrent networks that implement continuously differentiable analogues of traditional data structures such as stacks, queues, and deques, and shows that these architectures exhibit superior generalisation performance to deep RNNs and are often able to learn the underlying generating algorithms in the transduction experiments.
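For context, the "continuously differentiable analogues" in that reference replace hard push and pop with soft operations: each stack entry carries a real-valued strength, and push and pop are controlled by scalars in [0, 1]. The following is a simplified numpy reading of that soft-stack construction; the variable names are my own and this is a sketch, not the reference implementation:

```python
import numpy as np

def neural_stack_step(values, strengths, v_t, d_t, u_t):
    """One step of a continuously differentiable ("neural") stack.

    values    : list of vectors currently on the stack, bottom to top
    strengths : list of scalars in [0, 1], one per value
    v_t       : vector to push this step
    d_t, u_t  : push and pop strengths in [0, 1]
    Returns the updated (values, strengths) and the soft read vector.
    """
    # Soft pop: remove up to u_t of total strength, starting from the top.
    popped, remaining_pop = [], u_t
    for s in reversed(strengths):
        removed = min(s, remaining_pop)
        remaining_pop -= removed
        popped.append(s - removed)
    strengths = list(reversed(popped))

    # Soft push: append the new value with strength d_t.
    values = values + [v_t]
    strengths = strengths + [d_t]

    # Soft read: a superposition of the topmost entries, capped at total weight 1.
    read = np.zeros_like(v_t, dtype=float)
    remaining_read = 1.0
    for v, s in zip(reversed(values), reversed(strengths)):
        weight = min(s, remaining_read)
        remaining_read -= weight
        read += weight * v
    return values, strengths, read
```

Because every quantity here is a smooth function of d_t and u_t, gradients can flow through the stack; the paper summarized on this page instead keeps the data-structure operations discrete and trains the controller with reinforcement learning.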
Reinforcement Learning Neural Turing Machines - Revised
This work examines the feasibility of training models to interact with discrete interfaces, and uses a reinforcement learning algorithm to train a neural network that interacts with such interfaces to solve simple algorithmic tasks.
PathNet: Evolution Channels Gradient Descent in Super Neural Networks
Successful transfer learning is demonstrated: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning.
End-To-End Memory Networks
Introduces a neural network with a recurrent attention model over a possibly large external memory that is trained end-to-end and hence requires significantly less supervision during training, making it more generally applicable in realistic settings.
Evaluating the Ability of LSTMs to Learn Context-Free Grammars
It is concluded that LSTMs do not learn the relevant underlying context-free rules, suggesting that their good overall performance is instead attained by an efficient way of evaluating nuisance variables.
Learning Compositional Rules via Neural Program Synthesis
This work presents a neuro-symbolic model which learns entire rule systems from a small set of examples, and outperforms neural meta-learning techniques in three domains: an artificial instruction-learning domain used to evaluate human learning, the SCAN challenge datasets, and learning rule-based translations of number words into integers for a wide range of human languages.