Learning Language Games through Interaction

@article{Wang2016LearningLG,
  title={Learning Language Games through Interaction},
  author={Sida I. Wang and Percy Liang and Christopher D. Manning},
  journal={ArXiv},
  year={2016},
  volume={abs/1606.02447}
}
We introduce a new language learning setting relevant to building adaptive natural language interfaces. [...] Key Method: We created a game in a blocks world and collected interactions from 100 people playing it. First, we analyze the humans' strategies, showing that using compositionality and avoiding synonyms correlates positively with task performance. Second, we compare computer strategies, showing how to quickly learn a semantic parsing model from scratch, and that modeling pragmatics further accelerates …
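The interaction loop the abstract describes — the computer ranks candidate interpretations of an utterance, the human indicates the intended one, and the model updates — can be illustrated with a minimal sketch. This is not the paper's actual model; the `InteractiveLearner` class, the action strings, and the word-action weight features are hypothetical simplifications of learning a semantic mapping from scratch through feedback.

```python
# Minimal sketch (not the authors' code) of learning an utterance -> action
# mapping from scratch through interaction: the computer ranks candidate
# actions for each utterance, the human selects the intended one, and the
# model updates word-action weights perceptron-style.
from collections import defaultdict


class InteractiveLearner:
    def __init__(self, actions):
        self.actions = actions            # candidate actions (hypothetical)
        self.w = defaultdict(float)       # (word, action) feature weights

    def score(self, utterance, action):
        # Sum of weights for each (token, action) pair in the utterance.
        return sum(self.w[(tok, action)] for tok in utterance.split())

    def rank(self, utterance):
        # Highest-scoring candidates first; ties broken alphabetically.
        return sorted(self.actions,
                      key=lambda a: (-self.score(utterance, a), a))

    def update(self, utterance, chosen, lr=1.0):
        # Move weights toward the human-selected action and away from
        # the model's current top guess when they disagree.
        guess = self.rank(utterance)[0]
        if guess != chosen:
            for tok in utterance.split():
                self.w[(tok, chosen)] += lr
                self.w[(tok, guess)] -= lr


learner = InteractiveLearner(["add(red)", "add(blue)", "remove(red)"])
learner.update("stack a red block", "add(red)")
learner.update("remove the red one", "remove(red)")
print(learner.rank("stack a red block")[0])   # -> add(red)
```

After only two rounds of feedback the learner already prefers the intended action for each utterance; the paper's point is that human strategies (compositional, synonym-free language) make exactly this kind of rapid adaptation easier.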

Citations

Learning Adaptive Language Interfaces through Decomposition
A neural semantic parsing system that learns new high-level abstractions through decomposition is introduced, demonstrating the flexibility of modern neural systems as well as the one-shot reliable generalization of grammar-based methods.
Naturalizing a Programming Language via Interactive Learning
This work starts with a core programming language and allows users to "naturalize" the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones.
SHAPELURN: An Interactive Language Learning Game with Logical Inference
We investigate if a model can learn natural language with minimal linguistic input through interaction. Addressing this question, we design and implement an interactive language learning game that …
Coherence, Symbol Grounding and Interactive Task Learning
To teach agents through natural language interaction, we need methods for updating the agent's knowledge, given a teacher's feedback. But natural language is ambiguous at many levels, and so a major …
Teaching Machines to Classify from Natural Language Interactions
It is demonstrated that language can define rich and expressive features for learning tasks and that machine learning can benefit substantially from this ability; new algorithms for semantic parsing are developed that incorporate pragmatic cues, including conversational history and sensory observation, to improve automatic language interpretation.
Multi-Agent Cooperation and the Emergence of (Natural) Language
It is shown that two networks with simple configurations are able to learn to coordinate in the referential game, and how to make changes to the game environment to cause the "word meanings" induced in the game to better reflect intuitive semantic properties of the images.
A Survey of Reinforcement Learning Informed by Natural Language
The time is right to investigate a tight integration of natural language understanding into reinforcement learning in particular; the state of the field is surveyed, including work on instruction following, text games, and learning from textual domain knowledge.
Apple Core-dination: Linguistic Feedback and Learning in a Speech-to-Action Shared World Game
We investigate the question of how adaptive feedback from a virtual agent impacts the linguistic input of the user in a shared world game environment. To do so, we carry out an exploratory pilot …
Learning Plans by Acquiring Grounded Linguistic Meanings from Corrections
It is shown that an agent which does utilise linguistic evidence outperforms a strong baseline which does not, and that such an agent must learn the denotation of neologisms and adapt its conceptualisation of the planning domain to incorporate those denotations.
Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game
A joint imitation and reinforcement approach for grounded language learning through an interactive conversational game is proposed; the trained agent is able to actively acquire information by asking questions about novel objects and to use the just-learned knowledge in subsequent conversations in a one-shot fashion.

References

Showing 1–10 of 36 references
Learning and using language via recursive pragmatic reasoning about other agents
A model is described in which language learners assume that they jointly approximate a shared, external lexicon and reason recursively about the goals of others in using this lexicon, leading to insights about the emergence of communicative systems in conversation and the mechanisms by which pragmatic inferences become incorporated into word meanings.
Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation
This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments; the model dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure.
Understanding natural language
A computer system for understanding English that contains a parser, a recognition grammar of English, programs for semantic analysis, and a general problem-solving system, based on the belief that in modeling language understanding one must deal in an integrated way with all aspects of language: syntax, semantics, and inference.
Fast Online Lexicon Learning for Grounded Language Acquisition
A new online algorithm that is an order of magnitude faster and surpasses the state-of-the-art results is introduced; it is shown that by changing the grammar of the formal meaning representation language and training on additional data collected from Amazon's Mechanical Turk the authors can further improve the results.
Learning to interpret natural language navigation instructions from observations
A system that learns to transform natural-language navigation instructions into executable formal plans by using a learned lexicon to refine inferred plans and a supervised learner to induce a semantic parser.
Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions
This paper presents a grounded CCG semantic parsing approach that learns a joint model of meaning and context for interpreting and executing natural language instructions, using various types of weak supervision.
Learning in the Rational Speech Acts Model
This work shows how to define and optimize a trained statistical classifier that uses the intermediate agents of RSA as hidden layers of representation forming a non-linear activation function, which opens up new application domains and new possibilities for learning effectively from data.
Grounding Verbs of Motion in Natural Language Commands to Robots
This work presents an algorithm for understanding natural language commands with three components, creating a cost function that scores the language according to how well it matches a candidate plan in the environment, defined as the log-likelihood of the plan given the command.
A Joint Model of Language and Perception for Grounded Attribute Learning
This work presents an approach for joint learning of language and perception models for grounded attribute induction, which includes a language model based on a probabilistic categorial grammar that enables the construction of compositional meaning representations.
Learning Meanings of Words and Constructions, Grounded in a Virtual Game
It is shown how simple association metrics can be used to extract words, phrases, and more abstract syntactic patterns with targeted meanings or speech-act functions, by making use of the nonlinguistic context.