This paper presents our recent attempt to build a super-large-scale spoken-term detection system that can detect any keyword uttered in a 2,000-hour speech database within a few seconds. Three problems must be solved to achieve such a system. The system must be able to detect out-of-vocabulary (OOV) terms (the OOV problem). The system has to respond to the user…
An information kiosk with a JSL (Japanese Sign Language) recognition system, which allows hearing-impaired people to easily search for various kinds of information and services, was tested in a government office. The kiosk system was favorably received by most users.
Speech recognition errors are inevitable in a speech dialog system. This paper presents an error handling method based on correction grammars, which recognize the correction utterances that follow a recognition error. Correction grammars are dynamically created from existing grammars and a set of correction templates. We also describe a prototype dialog…
In recent years, the number of sign language learners in Japan has been increasing, and many teaching materials for sign language are available, such as textbooks, videotapes, and PC software. However, these teaching materials have several shortcomings that prevent learners from studying sign language sufficiently: learners can mainly study only manual gestures, and cannot…
Sign language is one means of communication for hearing-impaired people. Words and sentences in sign language are mainly represented by hand gestures. In this report, we describe a sign language translation system that we are developing. The system translates Japanese Sign Language into Japanese and vice versa. In this system, hand shape and position data…
Sign language gestures are inflected in accordance with the context. To recognize such sign language properly, the structure of sign language must be made clear. It is well known that the structure of sign language can be represented as a combination of basic components of gestures, and sign language can be recognized by using such components. In this paper, a…
The aim of this paper is to develop animated agents that can control multimodal instruction dialogues by monitoring users' behaviors. First, this paper reports on our Wizard-of-Oz experiments; then, using the collected corpus, it proposes a probabilistic model of fine-grained timing dependencies among multimodal communication behaviors: speech, gestures,…