Corpus ID: 221083453

Learning a natural-language to LTL executable semantic parser for grounded robotics

@inproceedings{Wang2020LearningAN,
  title={Learning a natural-language to LTL executable semantic parser for grounded robotics},
  author={Christopher Wang and Candace Ross and Boris Katz and Andrei Barbu},
  booktitle={CoRL},
  year={2020}
}
Children acquire their native language with apparent ease by observing how language is used in context and attempting to use it themselves. They do so without laborious annotations, negative examples, or even direct corrections. We take a step toward robots that can do the same by training a grounded semantic parser, which discovers latent linguistic representations that can be used for the execution of natural-language commands. In particular, we focus on the difficult domain of commands with a temporal aspect, whose semantics we capture with Linear Temporal Logic (LTL).
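
To make "executable" concrete: an LTL formula produced by such a parser can be checked against a finite trace of world states. The sketch below is ours, not the authors' code; the tuple encoding and the proposition names (in_kitchen, in_living_room) are illustrative assumptions.

    # Finite-trace evaluation of an LTL formula; formulas are nested tuples,
    # atomic propositions are tested against the set of facts true per state.
    def holds(formula, trace, t=0):
        """Evaluate an LTL formula over trace[t:] under finite-trace semantics."""
        op = formula[0]
        if op == "atom":                       # e.g. ("atom", "in_kitchen")
            return formula[1] in trace[t]
        if op == "not":
            return not holds(formula[1], trace, t)
        if op == "and":
            return holds(formula[1], trace, t) and holds(formula[2], trace, t)
        if op == "next":                       # X f: f holds at the next step
            return t + 1 < len(trace) and holds(formula[1], trace, t + 1)
        if op == "eventually":                 # F f: f holds at some future step
            return any(holds(formula[1], trace, i) for i in range(t, len(trace)))
        if op == "always":                     # G f: f holds at every future step
            return all(holds(formula[1], trace, i) for i in range(t, len(trace)))
        if op == "until":                      # f U g
            return any(holds(formula[2], trace, i)
                       and all(holds(formula[1], trace, j) for j in range(t, i))
                       for i in range(t, len(trace)))
        raise ValueError(f"unknown operator {op!r}")

    # "Go to the kitchen and then the living room" might parse to:
    cmd = ("eventually", ("and", ("atom", "in_kitchen"),
                          ("next", ("eventually", ("atom", "in_living_room")))))
    trace = [{"in_hall"}, {"in_kitchen"}, {"in_living_room"}]
    print(holds(cmd, trace))  # True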

Citations

A Case for Natural Language in Robot Learning (2021)
This note outlines a case for robot learning researchers to introduce natural language into their research agenda. The importance of language in robot learning goes beyond the potential of…
Compositional RL Agents That Follow Language Commands in Temporal Logic
A novel form of multi-task learning for RL agents is developed that allows them to learn from a diverse set of tasks and generalize to a new set of diverse tasks without any additional training.
LTL2Action: Generalizing LTL Instructions for Multi-Task RL
This work addresses the problem of teaching a deep reinforcement learning (RL) agent to follow instructions in multi-task environments by introducing an environment-agnostic LTL pretraining scheme that improves sample efficiency in downstream environments and exploits the compositional syntax and semantics of LTL.
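
As background on what exploiting LTL's compositional semantics can look like in practice, here is a hedged sketch of LTL progression, a standard rewriting step used by LTL-conditioned agents: after each observation, the instruction is rewritten into what remains to be done. The encoding is our illustrative assumption, not LTL2Action's actual code.

    TRUE, FALSE = ("true",), ("false",)

    def progress(f, props):
        """One step of LTL progression given the currently-true propositions."""
        op = f[0]
        if op in ("true", "false"):
            return f
        if op == "atom":
            return TRUE if f[1] in props else FALSE
        if op == "and":
            a, b = progress(f[1], props), progress(f[2], props)
            if FALSE in (a, b): return FALSE
            return b if a == TRUE else a if b == TRUE else ("and", a, b)
        if op == "or":
            a, b = progress(f[1], props), progress(f[2], props)
            if TRUE in (a, b): return TRUE
            return b if a == FALSE else a if b == FALSE else ("or", a, b)
        if op == "next":                 # X f  ->  f
            return f[1]
        if op == "eventually":           # F f  ==  f or X(F f)
            return progress(("or", f[1], ("next", f)), props)
        if op == "until":                # f U g  ==  g or (f and X(f U g))
            return progress(("or", f[2], ("and", f[1], ("next", f))), props)
        raise ValueError(op)

    # "Eventually reach the goal": once 'goal' is observed the task reduces to
    # true, which an RL agent can treat as task completion (a reward signal).
    task = ("eventually", ("atom", "goal"))
    print(progress(task, {"start"}))   # unchanged: ('eventually', ('atom', 'goal'))
    print(progress(task, {"goal"}))    # ('true',)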
Safe Reinforcement Learning with Natural Language Constraints
This paper develops a model that contains a constraint interpreter, which encodes natural language constraints into vector representations capturing spatial and temporal information about forbidden states, and a policy network that uses these representations to output a policy with minimal constraint violations.

References

Showing 1-10 of 46 references
Learning to Parse Natural Language to Grounded Reward Functions with Weak Supervision
It is shown that parsing models learned from small data sets can generalize to commands not seen during training, and that the approach enables an improvement of orders of magnitude in computation time over a baseline that performs planning during learning, while achieving comparable results.
Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions
This paper presents a grounded CCG semantic parsing approach that learns a joint model of meaning and context for interpreting and executing natural language instructions, using various types of weak supervision.
Learning to Ground Language to Temporal Logical Form
Natural language commands often exhibit sequential constraints, e.g., "go through the kitchen and then into the living room," that (traditionally Markovian) methods in reinforcement learning (RL)…
Grounding language acquisition by training semantic parsers using captioned videos
A semantic parser is trained in a grounded setting using pairs of videos captioned with sentences; it recovers the meaning of English sentences despite not having access to any annotated sentences.
Weakly Supervised Semantic Parsing with Abstract Examples
This work proposes that, in closed worlds with clear semantic types, the difficulties of weakly supervised training can be substantially alleviated by utilizing an abstract representation in which tokens in both the language utterance and the program are lifted to an abstract form, resulting in sharing between different examples.
From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood
The goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available; a new algorithm is presented that guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL.
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
A Neural Symbolic Machine is introduced, which contains a neural "programmer" that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and a symbolic "computer", i.e., a Lisp interpreter that performs program execution and helps find good programs by pruning the search space.
Learning to Parse Database Queries Using Inductive Logic Programming
Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a pre-existing, hand-crafted counterpart, providing direct evidence of the utility of an empirical approach at the level of a complete natural language application.
Synthesis of LTL Formulas from Natural Language Texts: State of the Art and Research Directions
The current state of the art on the English-to-LTL translation problem is presented, and some possible research directions are outlined.
Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments
This work provides the first benchmark dataset for visually grounded natural language navigation in real buildings, the Room-to-Room (R2R) dataset, and presents the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery.