Corpus ID: 18223267

Towards Multi-Agent Communication-Based Language Learning

@article{Lazaridou2016TowardsMC,
  title={Towards Multi-Agent Communication-Based Language Learning},
  author={Angeliki Lazaridou and N. Pham and Marco Baroni},
  journal={ArXiv},
  year={2016},
  volume={abs/1605.07133}
}
We propose an interactive multimodal framework for language learning. Instead of being passively exposed to large amounts of natural text, our learners (implemented as feed-forward neural networks) engage in cooperative referential games starting from a tabula rasa setup, and thus develop their own language from the need to communicate in order to succeed at the game. Preliminary experiments provide promising results, but also suggest that it is important to ensure that agents trained in this…
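To make the setup concrete, below is a minimal runnable sketch of such a cooperative referential game: a speaker network sees a target object and emits one discrete symbol, a listener network sees the symbol and two candidate objects and must pick the target, and both agents are rewarded only when communication succeeds. The agent architectures, dimensions, and REINFORCE-style training rule here are illustrative assumptions, not the authors' exact architecture or objective.

# Minimal, illustrative sketch of a cooperative referential game between two
# feed-forward (here: linear) agents trained with a REINFORCE-style update.
# All names, sizes, and the training rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_OBJECTS, N_FEATURES, VOCAB = 20, 10, 5
objects = rng.normal(size=(N_OBJECTS, N_FEATURES))          # toy "world": random feature vectors

W_speak = rng.normal(scale=0.1, size=(N_FEATURES, VOCAB))   # speaker: target features -> symbol logits
W_listen = rng.normal(scale=0.1, size=(VOCAB, N_FEATURES))  # listener: symbol -> feature-space query

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.1
wins = 0
for step in range(5000):
    # One round of the game: the listener must pick the target out of two candidates.
    target, distractor = rng.choice(N_OBJECTS, size=2, replace=False)
    candidates = np.stack([objects[target], objects[distractor]])   # index 0 is the target

    # Speaker samples one discrete symbol conditioned on the target object.
    p_sym = softmax(objects[target] @ W_speak)
    sym = rng.choice(VOCAB, p=p_sym)

    # Listener turns the symbol into a query and scores the candidates.
    query = W_listen[sym]
    p_choice = softmax(candidates @ query)
    choice = rng.choice(2, p=p_choice)

    reward = 1.0 if choice == 0 else 0.0     # shared reward: success only if the target was picked
    wins += reward

    # REINFORCE-style update: increase the log-probability of the sampled
    # symbol and choice in proportion to the shared reward.
    g_sym = -p_sym
    g_sym[sym] += 1.0
    W_speak += lr * reward * np.outer(objects[target], g_sym)

    g_choice = -p_choice
    g_choice[choice] += 1.0
    W_listen[sym] += lr * reward * (g_choice @ candidates)

print("communication success over training:", wins / 5000)

Because the reward is shared and only successful rounds are reinforced, the speaker's symbol usage and the listener's interpretation co-adapt from scratch, which is the tabula-rasa dynamic the abstract describes.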
Citations

A Paradigm for Situated and Goal-Driven Language Learning
A general situated language learning paradigm is proposed that is designed to bring about robust language agents able to cooperate productively with humans.
Translating Neuralese
This work develops a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener; translation quality is evaluated by checking that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.
Emergence of Communication in an Interactive World with Consistent Speakers
A new model and training algorithm are proposed that utilize the structure of a learned representation space to produce more consistent speakers in the initial phases of training, which stabilizes learning and increases context-independence compared to policy gradient and other competitive baselines.
Compositional Grounded Language for Agent Communication in Reinforcement Learning Environment
By constraining the language to satisfy two key properties of human language, groundedness and compositionality, a rapidly converging evolution of syntactic communication is obtained, opening the way to a meaningful language between machines.
Zero-Resource Neural Machine Translation with Multi-Agent Communication Game
This work proposes an interactive multimodal framework for zero-resource neural machine translation, where learners engage in cooperative image description games, and thus develop their own image captioning or neural machine translation model from the need to communicate in order to succeed at the game.
Emergence of Grounded Compositional Language in Multi-Agent Populations
This paper proposes a multi-agent learning environment and learning methods that bring about the emergence of a basic compositional language, represented as streams of abstract discrete symbols uttered by agents over time, that nonetheless has a coherent structure with a defined vocabulary and syntax.
A Deep Reinforcement Learning Chatbot
MILA's MILABOT is capable of conversing with humans on popular small-talk topics through both speech and text, and consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural networks, and latent variable neural network models.
Multi-Agent Discussion Mechanism for Natural Language Generation
The proposed multi-agent discussion mechanism, introduced into a multi-agent communicating encoder-decoder architecture for Natural Language Generation (NLG) tasks, helps maximize the utility of the communication between agents.
Analogs of Linguistic Structure in Deep Representations
This work investigates the compositional structure of message vectors computed by a deep network trained on a communication game, and suggests that neural representations are capable of spontaneously developing a "syntax" with functional analogues to qualitative properties of natural language.
Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI
This article argues for voice document editing as a domain, and model-based reinforcement learning as a method, for achieving conversational AI.

References

Showing 1-10 of 39 references
Temporal Difference Learning and TD-Gammon
G. Tesauro, J. Int. Comput. Games Assoc., 1995
TD-GAMMON is a neural network that trains itself to be an evaluation function for the game of backgammon by playing against itself and learning from the outcome.
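As a side note on the mechanism this reference summarizes, the following toy sketch shows a tabular TD(0) value update of the kind that underlies learning an evaluation function from self-play outcomes; TD-Gammon itself used a neural network with TD(lambda), so the tabular form, the placeholder states, and the constants here are simplifying assumptions for illustration only.

# Toy tabular TD(0) update, a simplified stand-in for the TD(lambda) +
# neural-network evaluation function described in the reference.
values = {}                 # state -> estimated value
alpha, gamma = 0.1, 1.0     # illustrative step size and discount

def td0_update(state, reward, next_state, terminal):
    # Bootstrapped target: reward plus discounted value of the next state,
    # or just the reward if the game has ended.
    target = reward + (0.0 if terminal else gamma * values.get(next_state, 0.0))
    v = values.get(state, 0.0)
    values[state] = v + alpha * (target - v)

# Example: a winning final move (reward 1) pulls the value of the preceding
# position toward 1; earlier positions then learn from that value on later
# self-play games through bootstrapping.
td0_update(state="penultimate_position", reward=1.0, next_state=None, terminal=True)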
Dialog-based Language Learning
This work studies dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation, and shows that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher's response.
Learning to Compose Neural Networks for Question Answering
A question answering model that applies to both images and structured knowledge bases, using natural language strings to automatically assemble neural networks from a collection of composable modules, and achieving state-of-the-art results on benchmark datasets.
Asking for Help Using Inverse Semantics
This work demonstrates an approach for enabling a robot to recover from failures by communicating its need for specific help to a human partner using natural language, and presents a novel inverse semantics algorithm for generating effective help requests.
Human-level control through deep reinforcement learning
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks.
A Roadmap Towards Machine Intelligence
A simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users, is discussed.
Reinforcement Learning: An Introduction
This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Mastering the game of Go with deep neural networks and tree search
Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions
A recommended algorithm is described, along with a specification of the resources a host system must provide in order to make use of the algorithm, and an implementation used in the natural language generation component of the IDAS system.
Conceptual pacts and lexical choice in conversation.
Evidence from 3 experiments favors a historical account and suggests that when speakers refer to an object, they are proposing a conceptualization of it, a proposal their addressees may or may not agree to.