Tetsushi Oka

The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and additional supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. …
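The abstract only names the information state update language; as a rough illustration of the update-rule style such systems use, here is a minimal Python sketch in which a rule's condition and effects operate on a dictionary-valued state. All field and rule names are invented for illustration, not DIPPER's actual syntax.

```python
# Hypothetical information-state update rule in the style of
# DIPPER-like systems; fields and rule names are illustrative.

def integrate_user_utterance(state):
    """If an unprocessed user utterance is queued, move it into the
    dialogue history and record an obligation to respond."""
    if state["input_queue"]:                      # rule condition
        utterance = state["input_queue"].pop(0)   # rule effects
        state["history"].append(("user", utterance))
        state["obligations"].append("respond")
    return state

state = {"input_queue": ["go to the kitchen"], "history": [], "obligations": []}
state = integrate_user_utterance(state)
print(state["history"])      # [('user', 'go to the kitchen')]
print(state["obligations"])  # ['respond']
```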
We describe an implementation integrating a spoken dialogue system with a mobile robot, which the user can direct to specific locations, ask for information about its status, and supply information about its environment. The robot uses an internal map for navigation, and communicates its current orientation and accessible locations to the dialogue system …
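As a sketch of the kind of status message the navigation side might pass to the dialogue side, assuming a simple structured record (the field names are invented, not taken from the paper):

```python
from dataclasses import dataclass

# Hypothetical status record sent from the robot's navigation system
# to the dialogue manager; fields are illustrative assumptions.
@dataclass
class RobotStatus:
    orientation_deg: float   # current heading
    accessible: list[str]    # locations reachable from here

status = RobotStatus(orientation_deg=90.0, accessible=["kitchen", "hallway"])
# The dialogue manager could then answer "where can you go?" from this state:
print("I can reach: " + ", ".join(status.accessible))
```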
In this study, we present a novel method for grasping an unknown object on a planar surface. Given a single depth image, the planar surface and the object are extracted using Random Sample Consensus (RANSAC). The principal axis of the object is then approximated by Principal Component Analysis (PCA). The gripper of a robotic arm approaches the object …
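A minimal NumPy sketch of the two steps the abstract names, RANSAC plane extraction followed by PCA for the principal axis; the thresholds, iteration count, and synthetic point cloud are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=np.random.default_rng(0)):
    """Fit a plane to a point cloud with RANSAC; return the inlier mask."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate sample, skip
            continue
        normal /= norm
        dists = np.abs((points - sample[0]) @ normal)
        inliers = dists < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def principal_axis(points):
    """Approximate the object's principal axis as the top PCA direction."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # direction of greatest variance

cloud = np.random.default_rng(1).normal(size=(500, 3))  # stand-in for depth-image points
plane_mask = ransac_plane(cloud)
object_points = cloud[~plane_mask]   # points not on the table surface
axis = principal_axis(object_points)
```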
We present an architecture for spoken dialogue systems in which first-order inference (both theorem proving and model building) plays a crucial role in interpreting the utterances of dialogue participants and in deciding how the system should respond and carry out instructions. The dialogue itself is represented as a discourse representation structure (DRS), which is translated into first-order logic for …
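For the base case, the standard DRS-to-first-order-logic translation existentially quantifies the discourse referents and conjoins the conditions. A toy sketch covering only simple DRSs (no negation or implication), with an invented example dialogue fact:

```python
# Translate a simple DRS (referents + atomic conditions) into a
# first-order formula string: exists-quantify referents, conjoin conditions.

def drs_to_fol(referents, conditions):
    body = " & ".join(f"{pred}({','.join(args)})" for pred, args in conditions)
    for ref in reversed(referents):
        body = f"exists {ref}. ({body})"
    return body

# "A robot is in a room" as a toy DRS:
print(drs_to_fol(["x", "y"], [("robot", ["x"]), ("room", ["y"]), ("in", ["x", "y"])]))
# exists x. (exists y. (robot(x) & room(y) & in(x,y)))
```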
Godot is a mobile robot platform that serves as a testbed for the interface between a sophisticated low-level robot navigation system and a symbolic high-level spoken dialogue system. The interesting feature of this combined system is that information flows in two directions: (1) the navigation system supplies landmark information from the cognitive map used for …
We describe a spoken dialogue interface with a mobile robot, which a human can direct to specific locations, ask for information about its status, and supply information about its environment. The robot uses an internal map for navigation, and communicates its current orientation and accessible locations to the dialogue system. In this article, we focus on …
We have previously proposed a concept of “universal multimedia access” intended to narrow the digital divide by providing appropriate multimedia expressions according to users’ (mental and physical) abilities, computer facilities, and network environments. Previous work has evaluated some types of multimedia user interfaces according to users’ …
The brain of an autonomous robot generates signals to its motors in each situation of the dynamic real world in order to achieve various tasks. Its designer therefore has to describe a complex system that maps the robot’s sensations and mental state into motor commands. In this paper, we present an approach to realizing such a complex motion system by …
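As a toy illustration of such a mapping from sensation and mental state to motor commands (the states, sensor fields, and commands here are invented, not the paper's):

```python
# Hypothetical sensation + mental state -> motor command mapping.

def motor_command(sensation, mental_state):
    if mental_state == "seeking" and sensation["obstacle_ahead"]:
        return "turn_left"   # avoid the obstacle while pursuing the goal
    if mental_state == "seeking":
        return "forward"
    return "stop"            # idle mental state: no motion

print(motor_command({"obstacle_ahead": True}, "seeking"))  # turn_left
```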
This article describes a multimodal command language for home robot users, and a robot system that interprets users’ messages in this language through microphones, visual and tactile sensors, and control buttons. The command language comprises a set of grammar rules, a lexicon, and nonverbal events detected in hand gestures, readings of tactile sensors …
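A toy sketch of combining a spoken utterance with a nonverbal event in the spirit of such a command language; the lexicon, tags, and gesture handling are invented for illustration:

```python
# Resolve deictic words ("this", "there") against a pointing gesture,
# fusing the verbal and nonverbal channels into one command.

LEXICON = {"put": "PLACE", "bring": "FETCH", "this": "DEICTIC", "there": "DEICTIC"}

def interpret(words, gesture_target=None):
    action, args = None, []
    for w in words:
        tag = LEXICON.get(w)
        if tag in ("PLACE", "FETCH"):
            action = tag
        elif tag == "DEICTIC":
            if gesture_target is None:
                raise ValueError(f"'{w}' needs an accompanying gesture")
            args.append(gesture_target)
    return action, args

print(interpret(["bring", "this"], gesture_target="red_cup"))  # ('FETCH', ['red_cup'])
```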