Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction
@article{Xiao2018ImprovingTU,
  title   = {Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction},
  author  = {Da Xiao and Jonathan Liao and Xingyuan Yuan},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1802.02696}
}
To overcome the limitations of the Neural Programmer-Interpreter (NPI) in universality and learnability, we propose incorporating combinator abstraction into neural programming, together with a new NPI architecture to support this abstraction, which we call the Combinatory Neural Programmer-Interpreter (CNPI). Combinator abstraction dramatically reduces the number and complexity of programs that need to be interpreted by the core controller of the CNPI, while still allowing the CNPI to represent and…
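For intuition, the combinator idea can be illustrated outside the neural setting: a small, fixed set of higher-order combinators composes arbitrary callable subprograms, so an interpreter core only ever needs to handle the combinators themselves. The following is a minimal, non-neural Python sketch of this principle; the names seq, cond, and linrec are illustrative assumptions loosely modeled on the combinator vocabulary in this line of work, not necessarily the paper's exact set.

```python
# Minimal, non-neural sketch of combinator abstraction (illustrative only).
# A small fixed set of combinators composes arbitrary callable subprograms,
# so the "core" only has to interpret the combinators themselves.

from typing import Callable

Prog = Callable[[], None]

def seq(*progs: Prog) -> Prog:
    """Run the given subprograms one after another."""
    def run() -> None:
        for p in progs:
            p()
    return run

def cond(pred: Callable[[], bool], then_p: Prog, else_p: Prog) -> Prog:
    """Branch between two subprograms on a predicate."""
    def run() -> None:
        (then_p if pred() else else_p)()
    return run

def linrec(done: Callable[[], bool], step: Prog) -> Prog:
    """Linear recursion: repeat `step` until `done` holds."""
    def run() -> None:
        if not done():
            step()
            run()
    return run

# Example: a countdown built purely from combinators plus two primitive acts.
state = {"n": 3}
dec = lambda: state.update(n=state["n"] - 1)
emit = lambda: print(state["n"])

countdown = linrec(lambda: state["n"] == 0, seq(emit, dec))
countdown()  # prints 3, then 2, then 1
```

The countdown program is expressed entirely as combinator applications over two primitive actions, which is the sense in which a small fixed core could, in principle, cover a growing library of programs.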
13 Citations
Learning Compositional Neural Programs with Recursive Tree Search and Planning
- Computer Science, NeurIPS
- 2019
A novel reinforcement learning algorithm, AlphaNPI, combines the strengths of Neural Programmer-Interpreters (NPI) and AlphaZero and is able to train NPI models effectively with RL for the first time, completely eliminating the need for strong supervision in the form of execution traces.
Human-Neural Net Collaborative Programming
- Computer Science, ASSE
- 2020
A Human-Neural Net Collaborative Programming (HNCP) paradigm is proposed that integrates the strengths of human experience and perception with the advantages of a neural network's automatic learning from data: programmers compose the overall program framework, while the neural network learns to generate the trivial local details.
Execution-Guided Neural Program Synthesis
- Computer Science, ICLR
- 2019
This work proposes two simple yet principled techniques to better leverage semantic information, execution-guided synthesis and synthesizer ensemble, which are general enough to be combined with any existing encoder-decoder-style neural program synthesizer.
Program Guided Agent
- Computer Science, ICLR
- 2020
Experimental results on a 2D Minecraft environment not only demonstrate that the proposed framework learns to reliably accomplish program instructions and achieves zero-shot generalization to more complex instructions but also verify the efficiency of the proposed modulation mechanism for learning the multitask policy.
Synthetic Datasets for Neural Program Synthesis
- Computer Science, ICLR
- 2019
A new methodology for controlling and evaluating the bias of synthetic data distributions is proposed, and it is demonstrated, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
NetSyn: Neural Evolutionary Technique to Synthesize Programs
- Computer Science, ArXiv
- 2019
This work presents NetSyn, a framework that synthesizes programs with an evolutionary algorithm whose fitness function is a neural network, and compares the proposed approach against a state-of-the-art approach to show that NetSyn performs better at synthesizing programs.
Machine Learning Projects for Iterated Distillation and Amplification
- Computer Science
- 2019
This document reviews iterated distillation and amplification (IDA) and proposes three projects that explore aspects of it, including applying IDA to problems in high-school mathematics and investigating whether learning to decompose problems can improve performance over supervised learning.
A Bibliography of Combinators
- Philosophy, ArXiv
- 2021
Foundational documents: M. Schönfinkel (1924), “Über die Bausteine der mathematischen Logik” [“On the Building Blocks of Mathematical Logic”, in German], Mathematische Annalen 92, 305–316. doi:…
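For readers unfamiliar with the subject of this bibliography: Schönfinkel's S and K are the classic basis combinators, and every closed lambda term can be rewritten using them alone. A minimal Python rendering of the standard definitions from the combinatory-logic literature (not specific to this entry):

```python
# Schönfinkel's S and K combinators as Python lambdas (standard definitions
# from combinatory logic, not anything particular to this bibliography).

K = lambda x: lambda y: x                      # K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)

# For instance, the identity combinator I can be derived as I = S K K:
I = S(K)(K)
assert I(42) == 42
assert K("a")("b") == "a"
```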
Code Generation Based on Deep Learning: a Brief Review
- Computer Science
- 2021
This study introduces existing techniques for these two aspects and the corresponding DL techniques, and presents some possible future research directions.
Learning to Synthesize Programs as Interpretable and Generalizable Policies
- Computer Science, NeurIPS
- 2021
Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines while producing interpretable and more generalizable policies.
References
Showing 1–10 of 14 references
Neural Programmer-Interpreters
- Computer Science, ICLR
- 2016
The neural programmer-interpreter (NPI), a recurrent and compositional neural network that learns to represent and execute programs, is proposed; it has the capability to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models.
Making Neural Programming Architectures Generalize via Recursion
- Computer Science, ICLR
- 2017
This work proposes augmenting neural architectures with a key abstraction, recursion, and implements it in the Neural Programmer-Interpreter framework on four tasks, demonstrating superior generalizability and interpretability with small amounts of training data.
Neural Programmer: Inducing Latent Programs with Gradient Descent
- Computer Science, ICLR
- 2016
This work proposes Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations, and finds that training the model is difficult but can be greatly improved by adding random noise to the gradient.
Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks
- Computer Science, ICML
- 2018
This paper introduces the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences, and tests the zero-shot generalization capabilities of a variety of recurrent neural networks trained on SCAN with sequence-to-sequence methods.
Neural GPUs Learn Algorithms
- Computer Science, ICLR
- 2016
It is shown that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances, and a technique for training deep recurrent networks, parameter sharing relaxation, is introduced.
Hybrid computing using a neural network with dynamic external memory
- Computer Science, Nature
- 2016
A machine learning model called a differentiable neural computer (DNC) is presented, consisting of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer.
Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks
- Computer Science, ICLR 2018
- 2017
This paper introduces the SCAN domain, consisting of a set of simple compositional navigation commands paired with the corresponding action sequences, and tests the zero-shot generalization capabilities of a variety of recurrent neural networks (RNNs) trained on SCAN with sequence-to-sequence methods.
Neural Random Access Machines
- Computer Science, ERCIM News
- 2016
The proposed model can learn to solve algorithmic tasks of this kind, is capable of operating on simple data structures like linked lists and binary trees, and generalizes to sequences of arbitrary length.
Modular Multitask Reinforcement Learning with Policy Sketches
- Computer Science, Psychology, ICML
- 2017
Experiments show that using the approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
- Computer Science, ICML
- 2017
A new RL problem is introduced in which the agent should learn to execute sequences of instructions after acquiring useful skills that solve subtasks, and a new neural architecture in the meta controller that learns when to update the subtask is proposed, making learning more efficient.