Susumu Seki

In this paper, we propose a response model for a multi-modal human interface that inserts listener responses at particular times by detecting keywords in the user's utterances, and controls the face direction of a human-like Computer Graphics (CG) character according to the direction of the user's attention, which is determined by tracking the user's face. Then …
In this paper, we describe a multimodal interface prototype system based on a Dynamical Dialogue Model. The system not only integrates speech and gesture information, but also controls response timing in order to realize smooth interaction between the user and the computer. Our approach consists of human-human dialogue analysis and computational modeling …
We propose a new appearance-based feature for real-time gesture recognition from motion images. The feature is the shape of the trajectory produced by human gestures in the "Pattern Space" defined by the inner products between patterns on frame images. It has three advantages: 1) it is invariant to the target person's position, size, and orientation; 2) it …
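The abstract above does not spell out how the "Pattern Space" is built, but the idea of mapping each frame to a point via inner products with reference patterns can be sketched as follows. This is a minimal illustration, not the paper's actual method; the function name, the choice of reference patterns, and the normalization are all assumptions.

```python
import numpy as np

def pattern_space_trajectory(frames, basis):
    """Map an image sequence to a trajectory in a "Pattern Space".

    frames: array of shape (T, H, W), a grayscale frame sequence.
    basis:  array of shape (K, H, W), reference patterns (hypothetical
            choice; the original paper's patterns may differ).

    Each frame becomes a K-dimensional point whose coordinates are the
    inner products with the reference patterns; the returned (T, K)
    array is the gesture's trajectory in that space.
    """
    F = frames.reshape(len(frames), -1).astype(float)
    B = basis.reshape(len(basis), -1).astype(float)
    # Normalize patterns so the coordinates are cosine-like values,
    # reducing sensitivity to overall image intensity (an assumption,
    # chosen here to mimic the claimed invariances).
    F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    B /= np.linalg.norm(B, axis=1, keepdims=True) + 1e-12
    return F @ B.T

# Toy usage: 10 frames of 8x8 noise projected onto 3 reference patterns.
rng = np.random.default_rng(0)
traj = pattern_space_trajectory(rng.random((10, 8, 8)),
                                rng.random((3, 8, 8)))
print(traj.shape)  # (10, 3): one 3-D point per frame
```

Recognizing a gesture then amounts to comparing the shape of such trajectories, e.g. with a distance between time series.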