Corpus ID: 725698

Co-Acquisition of Syntax and Semantics - An Investigation in Spatial Language

@inproceedings{Spranger2015CoAcquisitionOS,
  title={Co-Acquisition of Syntax and Semantics - An Investigation in Spatial Language},
  author={Michael Spranger and Luc L. Steels},
  booktitle={IJCAI},
  year={2015}
}
This paper reports recent progress on modeling the grounded co-acquisition of syntax and semantics of locative spatial language in developmental robots. We show how a learner robot can learn to produce and interpret spatial utterances in guided-learning interactions with a tutor robot (equipped with a system for producing English spatial phrases). The tutor guides the learning process by simplifying the complexity of utterances, giving feedback, and gradually increasing the…

Incremental grounded language learning in robot-robot interactions — Examples from spatial language

  • Michael Spranger
  • Linguistics
    2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
  • 2015
TLDR
Reports on models of the grounded co-acquisition of syntax and semantics of locative spatial language in developmental robots, and shows how a learner robot can learn to produce and interpret spatial utterances in guided-learning interactions with a tutor robot.

Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands

TLDR
The aim of this work is to teach a robot manipulator to execute natural language commands from demonstration, by first learning a set of visual ‘concepts’ that abstract the visual feature space into categories with human-level meaning.

Natural Language Acquisition and Grounding for Embodied Robotic Systems

TLDR
A cognitively plausible novel framework that learns the grounding in visual semantics and the grammar of natural language commands given to a robot in a tabletop environment; the knowledge learned is used to parse new commands involving previously unseen objects.

Unsupervised Natural Language Acquisition and Grounding to Visual Representations for Robotic Systems

TLDR
A cognitively plausible novel framework that learns the components of natural language for robotic systems in a real-world environment, and shows that the knowledge gained can be used to parse novel linguistic commands involving previously unseen objects.

Grounded Language Learning: Where Robotics and NLP Meet

TLDR
Gives an overview of the research area, selected recent advances, and some of the future directions and challenges that remain.

Improving Grounded Natural Language Understanding through Human-Robot Dialog

TLDR
This work presents an end-to-end pipeline for translating natural language commands to discrete robot actions, and uses clarification dialogs to jointly improve language parsing and concept grounding.

Learning to Parse Grounded Language using Reservoir Computing

  • Xavier Hinaut, Michael Spranger
  • Computer Science
    2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
  • 2019
TLDR
This paper develops a model of Reservoir Computing called Reservoir Parser (ResPars) for learning to parse Natural Language from grounded data coming from humanoid robots and shows that ResPars is able to generalize on grounded compositional semantics by combining it with Incremental Recruitment Language.

Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions

TLDR
This work proposes a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation.

Learning of Object Properties, Spatial Relations, and Actions for Embodied Agents from Language and Vision

TLDR
A system that enables embodied agents to learn about different components of the perceived world, such as object properties, spatial relations, and actions by connecting two different sensory inputs: language and vision is presented.

References

SHOWING 1-10 OF 28 REFERENCES

Grounded lexicon acquisition — Case studies in spatial language

  • Michael Spranger
  • Linguistics
    2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)
  • 2013
TLDR
This paper identifies how various spatial language systems, such as projective, absolute, and proximal systems, can be learned, and shows how multiple systems can be acquired at the same time.

Learning to Interpret Natural Language Navigation Instructions from Observations

TLDR
A system that learns to transform natural-language navigation instructions into executable formal plans by using a learned lexicon to refine inferred plans and a supervised learner to induce a semantic parser.

Acquisition of Grammar in Autonomous Artificial Systems

TLDR
A comprehensive chain of computational processes is considered, starting from conceptualization and extending through language generation and interpretation, and it is shown how they can be intertwined to allow for acquisition of complex aspects of grammar.

A Computational Model of Early Argument Structure Acquisition

TLDR
A computational model for the representation, acquisition, and use of verbs and constructions is presented, founded on a novel view of constructions as a probabilistic association between syntactic and semantic features.

Learning perceptually grounded word meanings from unaligned parallel data

TLDR
This paper presents an approach to grounded language acquisition which is capable of jointly learning a policy for following natural language commands such as “Pick up the tire pallet,” as well as a mapping between specific phrases in the language and aspects of the external world.

A constructivist approach to robot language learning via simulated babbling and holophrase extraction

TLDR
Reports on the issues behind, and the design of, ongoing and forthcoming experiments that aim to allow a robot to carry out language learning in a manner analogous to early child development, effectively ‘short-cutting’ holophrase learning.

Modeling Embodied Lexical Development

TLDR
The verb learning model is placed in the broader context of the L0 project on embodied natural language and its acquisition; it employs a novel form of active representation and is explicitly intended to be neurally plausible.

Following directions using statistical machine translation

TLDR
This work investigates how statistical machine translation techniques can be used to bridge the gap between natural language route instructions and a map of an environment built by a robot.

A computational study of cross-situational techniques for learning word-to-meaning mappings

Computational models of incremental semantic interpretation

TLDR
A tutorial introduction to the computational problem of “incremental” semantic interpretation: the problem of computing the semantic interpretation of a sentence on a word-by-word basis as the sentence is read from left to right.