Mental imagery for a conversational robot

Abstract

To engage in fluid face-to-face spoken conversation with people, robots must have ways to connect what they say to what they see. A critical aspect of how language connects to vision is that language encodes points of view: the meaning of "my left" and "your left" differs because of an implied shift of visual perspective. The connection of language to vision also relies on object permanence; we can talk about things that are not in view. For a robot to participate in situated spoken dialog, it must be able to imagine shifts of perspective and maintain object permanence. We present a set of representations and procedures that enable a robotic manipulator to maintain a "mental model" of its physical environment by coupling active vision to physical simulation. Within this model, "imagined" views can be generated from arbitrary perspectives, providing the basis for situated language comprehension and production. We describe an initial application of mental imagery to spatial language understanding by an interactive robot.
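The "my left" versus "your left" example hinges on re-expressing a position from the robot's mental model in another viewer's frame. The following is a minimal sketch of that idea, assuming a simple world-frame representation of objects and viewers; the function names and frame conventions are illustrative assumptions, not the paper's implementation.

import numpy as np

def view_transform(viewer_pos, viewer_forward, up=(0.0, 0.0, 1.0)):
    """Build a rotation whose rows are the viewer's right/forward/up axes."""
    f = np.asarray(viewer_forward, dtype=float)
    f /= np.linalg.norm(f)
    u = np.asarray(up, dtype=float)
    r = np.cross(f, u)          # viewer's "right" direction
    r /= np.linalg.norm(r)
    u = np.cross(r, f)          # re-orthogonalized "up"
    R = np.stack([r, f, u])     # rows: right, forward, up
    return R, np.asarray(viewer_pos, dtype=float)

def is_to_the_left(target_world, viewer_pos, viewer_forward):
    """True if the target lies to the viewer's left in the imagined view."""
    R, p = view_transform(viewer_pos, viewer_forward)
    x_right, _, _ = R @ (np.asarray(target_world, dtype=float) - p)
    return x_right < 0.0

# A cup at world position (1, 0, 0):
cup = (1.0, 0.0, 0.0)
# Robot at the origin facing +y; a human facing it from the opposite side.
print(is_to_the_left(cup, (0, 0, 0), (0, 1, 0)))    # False: right of the robot
print(is_to_the_left(cup, (0, 2, 0), (0, -1, 0)))   # True: left of the human

The same object yields opposite answers for the two viewers, which is the perspective shift that "imagined" views from arbitrary viewpoints are meant to support.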

DOI: 10.1109/TSMCB.2004.823327


Cite this paper

@article{Roy2004MentalIF,
  title   = {Mental imagery for a conversational robot},
  author  = {Deb Roy and Kai-yuh Hsiao and Nikolaos Mavridis},
  journal = {IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)},
  year    = {2004},
  volume  = {34},
  number  = {3},
  pages   = {1374--1383}
}