An Ontology-based Symbol Grounding System for Human-Robot Interaction

Abstract

One of the fundamental issues in HRI is enabling robust mappings between the robot’s low-level perceptions of the world and the symbolic terms humans use to describe the world. When a shared symbol grounding is achieved (when human and robot use the same terms to denote the same physical entities), two-way interaction is enabled. This interaction is an important step towards robots assisting humans in everyday tasks at home, as the human can easily understand the “intelligence” of the robot in a domain, and in turn the robot can query the human to bootstrap more knowledge to better assist in complex or novel situations.

A symbol grounding system must regularize the connections between the sensed physical world and language. Consider such a system running on a home care robot. If a box of pasta appears in front of the robot’s sensors, a symbol identifying it (“pasta box”) should be generated with high consistency. The architecture should maintain the coherence of perceptually grounded symbols over time, so knowledge of the location, permanence, and ubiquity of certain items is needed in order to track pasta box1, for instance, and distinguish it from others. If something temporarily occludes pasta box1 from the sensors, the architecture should not create a new symbol for the object when it reappears. If another box of pasta appears in a different place at the same time, then a second pasta symbol should be created, as one object cannot be in two places at once.

This paper presents a preliminary approach to the symbol grounding problem for HRI that relies on monocular vision processing and hierarchical ontologies to help define symbols. Our approach focuses on the use of a long-term memory model for a robot in a home environment that persists over a significant duration. The robot must learn and remember the properties, locations, and functions of hundreds of objects with which the homeowner interacts during normal activities.
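The tracking behavior described above (reacquire the same symbol after a temporary occlusion, but mint a new symbol for a second object observed elsewhere at the same time) can be sketched as a minimal nearest-anchor matching policy. This is an illustrative sketch only, not the paper’s actual architecture; the class names, the fixed matching radius, and the symbol-naming scheme are all assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Anchor:
    symbol: str       # e.g. "pasta_box1" (naming scheme is assumed)
    category: str     # perceptual category, e.g. "pasta box"
    position: tuple   # last observed (x, y) in a common map frame
    last_seen: float  # timestamp of last observation

class AnchorTracker:
    """Toy anchoring policy: a percept reacquires an existing anchor if an
    anchor of the same category lies within match_radius of it; otherwise
    it is treated as a new physical object and gets a fresh symbol."""

    def __init__(self, match_radius=0.5):
        self.match_radius = match_radius  # meters; illustrative threshold
        self.anchors = []
        self._counts = {}  # per-category counter for symbol names

    def observe(self, category, position, t):
        # Reacquire: same category, close to a known anchor's last position.
        # An occluded object that reappears near where it was last seen
        # therefore keeps its old symbol rather than receiving a new one.
        for a in self.anchors:
            if a.category == category and self._near(a.position, position):
                a.position, a.last_seen = position, t
                return a.symbol
        # No match: a distinct object (one object cannot be in two places
        # at once), so create a new anchor and symbol.
        n = self._counts.get(category, 0) + 1
        self._counts[category] = n
        symbol = f"{category.replace(' ', '_')}{n}"
        self.anchors.append(Anchor(symbol, category, position, t))
        return symbol

    def _near(self, p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= self.match_radius
```

For example, a pasta box seen at (1.0, 2.0), occluded, and seen again at (1.1, 2.0) keeps its symbol, while a second pasta box observed at (4.0, 0.0) at the same time is anchored under a new symbol. A real system would of course match on richer percepts than category and position (color, shape, ontology class) and reason about elapsed time, which the `last_seen` field only hints at here.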
Starting with an a priori long-term memory stored in an ontology that is updated throughout its operation, the robot then needs to link its perception of objects and actions (both its own and the homeowner’s) to representations in its long-term memory. The long-term memory needs to be linked to both a working memory and

Cite this paper

@inproceedings{Beeson2016AnOS,
  title  = {An Ontology-based Symbol Grounding System for Human-Robot Interaction},
  author = {Patrick Beeson and Peter Bonasso and Andreas Persson and Amy Loutfi and Jonathan P. Bona},
  year   = {2016}
}