Towards an Architecture Combining Grounding and Planning for Human-Robot Interaction

@inproceedings{Lu2015TowardsAA,
  title={Towards an Architecture Combining Grounding and Planning for Human-Robot Interaction},
  author={Dongcai Lu and Xiaoping Chen},
  booktitle={RoboCup},
  year={2015}
}
We consider here the problem of connecting natural language to the physical world for robotic object manipulation. This problem must be solved in robotic reasoning systems before a robot can act on verbal instructions in the real world. In this paper, we propose an architecture that combines grounding and planning to enable robots to solve this problem. The grounding system of the architecture grounds the meaning of a natural language sentence in the physical environment perceived by the robot's sensors and…
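
A rough sketch of the grounding-then-planning split described above; every name and representation here is illustrative, not the paper's actual component:

    # Toy grounding-then-planning loop. `ground` picks the scene object whose
    # perceived attributes best overlap the command's words; `plan` emits a
    # trivial action sequence. Both are stand-ins for the paper's subsystems.
    def ground(sentence, scene):
        words = set(sentence.lower().split())
        return max(scene, key=lambda obj: len(words & scene[obj]))

    def plan(goal_obj, robot_at):
        return [("move", robot_at, goal_obj), ("grasp", goal_obj)]

    scene = {"cup1": {"red", "cup"}, "box1": {"blue", "box"}}
    print(plan(ground("pick up the red cup", scene), "dock"))
    # [('move', 'dock', 'cup1'), ('grasp', 'cup1')]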

Improving Grounded Natural Language Understanding through Human-Robot Dialog

This work presents an end-to-end pipeline for translating natural language commands to discrete robot actions, and uses clarification dialogs to jointly improve language parsing and concept grounding.
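
One way to picture such a clarification loop, assuming a confidence-scored grounder and an arbitrary 0.8 threshold (both assumptions; the actual pipeline is learned end to end):

    # Toy clarification dialog: re-ask until the grounder is confident, then
    # log the resolved pair as extra supervision for parser and grounder.
    def interpret(command, parse_fn, ground_fn, ask_fn, training_log):
        action, confidence = ground_fn(parse_fn(command))
        while confidence < 0.8:                      # assumed threshold
            command = ask_fn("Could you rephrase that?")
            action, confidence = ground_fn(parse_fn(command))
        training_log.append((command, action))       # dialog becomes supervision
        return action

    log = []
    replies = iter(["bring the red mug"])
    print(interpret("fetch it", lambda c: c,
                    lambda p: (("grasp", "mug1"), 0.9 if "mug" in p else 0.3),
                    lambda q: next(replies), log))
    # ('grasp', 'mug1'); log now holds one new (command, action) training pair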

Interpreting and extracting open knowledge for human-robot interaction

This work presents a more effective learning method for interpreting semi-structured user instructions, together with a new heuristic for recovering missing semantic information from the context of an instruction.
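
One concrete instance of such recovery, sketched here as inheriting an elided object argument from the preceding instruction (a hypothetical special case, not the paper's full heuristic):

    # If an instruction's frame lacks an object, fill it from the most recent
    # instruction that had one, e.g. "take a cup, wash [it]".
    def recover(frames):
        last_obj = None
        for f in frames:
            if f["object"] is None:
                f["object"] = last_obj
            else:
                last_obj = f["object"]
        return frames

    print(recover([{"verb": "take", "object": "cup"},
                   {"verb": "wash", "object": None}]))
    # the elided object of "wash" resolves to "cup"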

Continuously Improving Natural Language Understanding for Robotic Systems through Semantic Parsing, Dialog, and Multi-modal Perception

This work proposes to combine orthogonal components (semantic parsing, dialog, and multi-modal perception) into an integrated robotic system that understands human commands involving both static domain knowledge and perceptual grounding, and strengthens the perceptual grounding component by performing word-sense synonym set induction on object property words.
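
A toy rendering of synonym set induction over property words, greedily agglomerating under a word-similarity function; the similarity table below is fabricated for the example, whereas the paper induces these sets from data:

    # Greedily grow synonym sets: a word joins the first set whose members it
    # is sufficiently similar to, otherwise it starts a new set.
    def induce_synsets(words, sim, threshold=0.7):
        synsets = []
        for w in words:
            for s in synsets:
                if all(sim(w, v) >= threshold for v in s):
                    s.add(w)
                    break
            else:
                synsets.append({w})
        return synsets

    table = {frozenset({"red", "crimson"}): 0.9, frozenset({"light", "pale"}): 0.8}
    sim = lambda a, b: table.get(frozenset({a, b}), 0.1)
    print(induce_synsets(["red", "crimson", "light", "pale"], sim))
    # [{'red', 'crimson'}, {'light', 'pale'}]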

References


Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation

A new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments; it dynamically instantiates a probabilistic graphical model for each natural language command according to the command's hierarchical and compositional semantic structure.
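
The gist can be caricatured as follows: each linguistic constituent receives a grounding variable, factors score how well candidate groundings fit the words, and inference selects the jointly best assignment. The brute-force search and hand-written potential below are stand-ins for the learned model:

    from itertools import product

    # One grounding variable per constituent; `match` plays the role of a
    # learned factor (a toy log-potential). Brute force stands in for inference.
    def best_grounding(constituents, candidates, potential):
        return max(product(candidates, repeat=len(constituents)),
                   key=lambda a: sum(potential(c, g)
                                     for c, g in zip(constituents, a)))

    match = lambda phrase, obj: 1.0 if obj.rstrip("1") in phrase else -1.0
    print(best_grounding(["the pallet", "the truck"], ["pallet1", "truck1"], match))
    # ('pallet1', 'truck1')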

Grounding spatial relations for human-robot interaction

A system for human-robot interaction that learns models for both spatial prepositions and object recognition, and grounds the meaning of an input sentence in visual percepts from the robot's sensors in order to send an appropriate command to the PR2 or respond to spatial queries.
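
For instance, a learned model for "on" might score candidate landmark objects by simple geometric features of the figure-landmark pair; the hand-coded scorer below only illustrates the shape of the problem (the paper learns such models from data, and all geometry here is made up):

    # Toy scorer for "the cup on the ___": prefer small horizontal offset and a
    # figure resting near the landmark's top surface.
    def on_score(figure, landmark):
        dx = abs(figure["x"] - landmark["x"])
        dz = abs(figure["z"] - landmark["height"])   # height of support surface
        return -(dx + dz)

    objs = {"table": {"x": 0.0, "height": 0.7},
            "shelf": {"x": 2.0, "height": 1.5}}
    cup = {"x": 0.1, "z": 0.7}
    print(max(objs, key=lambda o: on_score(cup, objs[o])))  # table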

Learning to Parse Natural Language Commands to a Robot Control System

This work discusses the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system, and learns a parser based on example pairs of English commands and corresponding control language expressions.
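
The training data for such a parser consists of paired commands and control expressions; a degenerate lexicon-matching "parser" like the one below shows the input-output contract (the control-language syntax here is invented, and the real system learns a far more general grammar):

    # Map a command prefix to a control-language template (toy lexicon).
    LEXICON = {"go to": "goto({})", "pick up": "grasp({})"}

    def parse(command):
        for phrase, template in LEXICON.items():
            if command.startswith(phrase):
                arg = command[len(phrase):].strip().removeprefix("the ")
                return template.format(arg)
        raise ValueError(f"cannot parse: {command!r}")

    print(parse("go to the kitchen"))  # goto(kitchen)
    print(parse("pick up the mug"))   # grasp(mug)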

Experiences with an Interactive Museum Tour-Guide Robot

STAIR: Hardware and Software Architecture

The hardware and software integration frameworks used to facilitate the development of the robot's components and to bring them together for a demonstration of the STAIR 1 robot responding to a verbal command to fetch an item are described.

Toward understanding natural language directions

This work presents a system that follows natural language directions by extracting a sequence of spatial description clauses from the linguistic input and then infers the most probable path through the environment given only information about the environmental geometry and detected visible objects.
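
Path inference of this kind can be approximated with a beam search over a topological map, scoring each extension by how well it matches the next spatial description clause. The matching function, map, and beam width below are placeholders for the paper's learned model:

    import math

    # Beam search over rooms: extend each hypothesis path by one edge per
    # clause, scored by a clause/transition match probability.
    def best_path(start, clauses, neighbors, match, beam_width=5):
        beam = [(0.0, [start])]
        for clause in clauses:
            beam = sorted(
                ((lp + math.log(match(clause, path[-1], nxt)), path + [nxt])
                 for lp, path in beam for nxt in neighbors[path[-1]]),
                reverse=True)[:beam_width]
        return beam[0][1]

    rooms = {"hall": ["kitchen", "lab"],
             "lab": ["hall", "kitchen"],
             "kitchen": ["hall", "lab"]}
    match = lambda clause, here, nxt: 0.9 if nxt in clause else 0.05
    print(best_path("hall", ["go past the lab", "into the kitchen"], rooms, match))
    # ['hall', 'lab', 'kitchen']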

A Joint Model of Language and Perception for Grounded Attribute Learning

This work presents an approach for joint learning of language and perception models for grounded attribute induction, which includes a language model based on a probabilistic categorial grammar that enables the construction of compositional meaning representations.
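
Schematically, a joint model of this kind composes a parsing distribution with a perceptual grounding distribution; the factorization below is a generic rendering of that idea, not the paper's exact equations:

    p(z, g \mid x, S) \;=\; p(z \mid x) \cdot p(g \mid z, S)

where x is the sentence, z its compositional meaning representation, and g the grounding of z's predicates in the perceived scene S.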

Jointly Learning to Parse and Perceive: Connecting Natural Language to the Physical World

This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment and finds that LSP outperforms existing, less expressive models that cannot represent relational language.
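
What "relational language" demands in practice is scoring tuples of objects rather than single objects; the toy snippet below makes that concrete (the coordinates and predicate are invented for illustration):

    # "the mug left of the monitor": a binary relation over object pairs, which
    # per-object category models alone cannot express.
    objs = {"mug1": 0.2, "mug2": 1.5, "monitor": 1.0}   # x-coordinates
    left_of = lambda a, b: objs[a] < objs[b]
    mugs = [o for o in objs if o.startswith("mug")]
    print([m for m in mugs if left_of(m, "monitor")])   # ['mug1']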

Learning Dependency-Based Compositional Semantics

A new semantic formalism, dependency-based compositional semantics (DCS), is developed; a log-linear distribution over DCS logical forms is defined, and the system is shown to obtain accuracy comparable to state-of-the-art systems that do require annotated logical forms.
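
The log-linear distribution mentioned here has the standard form (theta are weights and phi features of sentence x and logical form z); training treats z as latent and maximizes the likelihood of the correct denotation:

    p_\theta(z \mid x) \;=\; \frac{\exp(\theta^\top \phi(x, z))}{\sum_{z'} \exp(\theta^\top \phi(x, z'))}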

Detecting and segmenting objects for mobile manipulation

A novel 3D scene interpretation approach for robots in mobile manipulation scenarios that uses 3D point features (Fast Point Feature Histograms) and probabilistic graphical methods (Conditional Random Fields) to obtain dense depth maps in the working space of the robot's manipulators.
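
Stripped of the geometry, the labeling stage of such a pipeline combines per-point classifier scores with a pairwise smoothness term over neighboring points; the sketch below runs a few sweeps of iterated conditional modes over toy data (FPFH extraction and the learned CRF potentials are elided, and all numbers are invented):

    # ICM over a point graph: each point takes the label that maximizes its
    # unary score plus agreement with its neighbors' current labels.
    def icm(unary, neighbors, pairwise=0.5, sweeps=3):
        labels = {p: max(s, key=s.get) for p, s in unary.items()}
        for _ in range(sweeps):
            for p, s in unary.items():
                labels[p] = max(s, key=lambda lab: s[lab] + pairwise *
                                sum(labels[q] == lab for q in neighbors[p]))
        return labels

    unary = {0: {"table": 0.9, "object": 0.1},
             1: {"table": 0.45, "object": 0.55},
             2: {"table": 0.1, "object": 0.9}}
    print(icm(unary, {0: [1], 1: [0, 2], 2: [1]}))
    # {0: 'table', 1: 'object', 2: 'object'}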