Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication

@inproceedings{She2017InteractiveLO,
  title={Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication},
  author={Lanbo She and Joyce Yue Chai},
  booktitle={ACL},
  year={2017}
}
To enable human-robot communication and collaboration, previous work represents grounded verb semantics as the potential change of state to the physical world caused by these verbs. Grounded verb semantics are acquired mainly from parallel data pairing the use of a verb phrase with a corresponding sequence of primitive actions demonstrated by humans. The rich interaction between teachers and students that is considered important in learning new skills has not yet been explored. To address…
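The abstract frames a verb's grounded meaning as the change of state it induces in the physical world, learned from parallel data pairing verb phrases with demonstrated action sequences and, in this paper, refined through interaction with a human teacher. As a rough illustration of that idea only, and not the paper's actual model, the Python sketch below induces a space of candidate state-change hypotheses for a verb from a single demonstration and then prunes it with yes/no questions to the teacher; the names Fluent, VerbHypothesis, and ask_to_disambiguate, along with the question-asking policy, are assumptions made here for exposition.

```python
# Illustrative sketch only: not the representation or dialogue policy from She & Chai (2017).
from dataclasses import dataclass
from itertools import combinations

Fluent = tuple  # a grounded predicate over objects, e.g. ("Has", "cup", "water")

@dataclass(frozen=True)
class VerbHypothesis:
    """One candidate meaning of a verb: the fluents its execution should make true."""
    effects: frozenset  # set of Fluent

def hypotheses_from_demonstration(state_before: set, state_after: set) -> list:
    """Induce a hypothesis space from a single human demonstration.

    Every non-empty subset of the newly-true fluents is a candidate goal, because
    one demonstration cannot separate intended effects from side effects.
    """
    changed = sorted(state_after - state_before)
    return [
        VerbHypothesis(frozenset(subset))
        for k in range(1, len(changed) + 1)
        for subset in combinations(changed, k)
    ]

def ask_to_disambiguate(space: list, ask) -> list:
    """Prune the hypothesis space with yes/no questions about single fluents.

    `ask(fluent)` stands in for one dialogue turn, e.g.
    "After I fill the cup, should the sink end up wet?".
    """
    for fluent in sorted({f for h in space for f in h.effects}):
        if ask(fluent):
            space = [h for h in space if fluent in h.effects]
        else:
            space = [h for h in space if fluent not in h.effects]
    return space

if __name__ == "__main__":
    # Toy demonstration of "fill the cup": the cup ends up holding water,
    # and as a side effect the sink also ends up wet.
    before = {("Empty", "cup"), ("Off", "faucet")}
    after = {("Has", "cup", "water"), ("Off", "faucet"), ("Wet", "sink")}
    space = hypotheses_from_demonstration(before, after)
    print("Hypotheses after demonstration:", len(space))       # 3 candidates
    teacher = lambda fluent: fluent == ("Has", "cup", "water")  # simulated teacher answers
    space = ask_to_disambiguate(space, teacher)
    print("Remaining hypothesis:", sorted(space[0].effects))   # only the intended effect
```

In practice an agent must also weigh whether a question is worth the teacher's effort instead of querying every candidate effect, which is exactly the kind of richer teacher-student interaction the abstract motivates.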

Citations

Improved Models and Queries for Grounded Human-Robot Dialog
TLDR
This work presents an approach that jointly learns a dialog policy enabling a robot to clarify partially understood natural language commands, while simultaneously using the dialogs to improve the underlying semantic parser for future commands.
Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations
TLDR
Bayesian learning is proposed to resolve inconsistencies between the natural language grounding and a robot’s world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human.
Augmenting Knowledge through Statistical, Goal-oriented Human-Robot Dialog
TLDR
A dialog agent for robots that is able to interpret user commands using a semantic parser, while asking clarification questions using a probabilistic dialog manager to augment its knowledge base and improve its language capabilities by learning from dialog experiences.
Multimodal estimation and communication of latent semantic knowledge for robust execution of robot instructions
TLDR
A probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes to infer the meaning of instructions and execute the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence is introduced.
Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions
TLDR
This work proposes a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation.
Simultaneous Intention Estimation and Knowledge Augmentation via Human-Robot Dialog
TLDR
A dialog agent for robots that is able to interpret user commands using a semantic parser, while asking clarification questions using a probabilistic dialog manager to augment its knowledge base and improve its language capabilities by learning from dialog experiences.
Language to Action: Towards Interactive Task Learning with Physical Agents
TLDR
This paper gives a brief introduction to interactive task learning, where humans can teach physical agents new tasks through natural language communication and action demonstration, and highlights the importance of commonsense knowledge, particularly very basic physical causality knowledge, in grounding language to perception and action.
Language-guided Semantic Mapping and Mobile Manipulation in Partially Observable Environments
TLDR
A novel framework is proposed that learns to adapt perception according to the task in order to maintain compact distributions over semantic maps; experiments with a mobile manipulator demonstrate more efficient instruction following in a priori unknown environments.
Language Understanding for Field and Service Robots in a Priori Unknown Environments
TLDR
This paper provides a comprehensive description of a novel learning framework that allows field and service robots to interpret and correctly execute natural-language instructions in a priori unknown, unstructured environments, and uses imitation learning to identify a belief-space policy that reasons over the environment and behavior distributions.
Interactive Learning of State Representation through Natural Language Instruction and Explanation
TLDR
This extended abstract gives a brief introduction to ongoing work that aims to enable a robot to acquire new state representations through language communication with humans.

References

Showing 1–10 of 37 references
Incremental Acquisition of Verb Hypothesis Space towards Physical World Interaction
TLDR
This paper presents an approach that explicitly represents verb semantics through hypothesis spaces of fluents and automatically acquires these hypothesis spaces by interacting with humans and applies incremental learning, which can contribute to life-long learning from humans in the future.
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
TLDR
This paper builds perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors, going beyond vision to ground natural language words describing objects, with supervision from an interactive human-robot "I Spy" game.
Learning to Mediate Perceptual Differences in Situated Human-Robot Dialogue
TLDR
The empirical evaluation has shown that this weight-learning approach can successfully adjust the weights to reflect the robot’s perceptual limitations and can lead to a significant improvement for referential grounding in future dialogues.
Back to the Blocks World: Learning New Actions through Situated Human-Robot Dialogue
TLDR
A three-tier action knowledge representation is developed that supports the connection between symbolic representations of language and continuous sensorimotor representations of the robot, and supports the application of existing planning algorithms to address novel situations.
Jointly Learning Grounded Task Structures from Language Instruction and Visual Demonstration
TLDR
The empirical results on a cloth-folding domain have shown that, although state detection through visual processing is uncertain and error-prone, tight integration with language allows the agent to learn an effective AoG for task representation.
Learning perceptually grounded word meanings from unaligned parallel data
TLDR
This paper presents an approach to grounded language acquisition which is capable of jointly learning a policy for following natural language commands such as “Pick up the tire pallet,” as well as a mapping between specific phrases in the language and aspects of the external world.
Collaborative Models for Referring Expression Generation in Situated Dialogue
TLDR
Two collaborative models are developed - an episodic model and an installment model - for referring expression generation that generate multiple small expressions that lead to the target object with the goal of minimizing the collaborative effort.
Learning to Parse Natural Language Commands to a Robot Control System
TLDR
This work discusses the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system, and learns a parser based on example pairs of English commands and corresponding control language expressions.
Learning to Interpret Natural Language Commands through Human-Robot Dialog
TLDR
This work introduces a dialog agent for mobile robots that understands human instructions through semantic parsing, actively resolves ambiguities using a dialog manager, and incrementally learns from human-robot conversations by inducing training data from user paraphrases.
A Joint Model of Language and Perception for Grounded Attribute Learning
TLDR
This work presents an approach for joint learning of language and perception models for grounded attribute induction, which includes a language model based on a probabilistic categorial grammar that enables the construction of compositional meaning representations.