The objective of this study is to automatically extract annotated sign data from broadcast news recordings for the hearing impaired. These recordings are an excellent source for automatically generating annotated data: in news broadcasts for the hearing impaired, the speaker signs with her hands as she talks. In addition, there is also corresponding…
In this paper we discuss the design, acquisition and preprocessing of a Czech audiovisual speech corpus. The corpus is intended for training and testing of existing audiovisual speech recognition systems. The name of the database is UWB-07-ICAVR, where ICAVR stands for Impaired Condition Audio Visual speech Recognition. The corpus consists of 10,000 utterances…
This paper discusses the design, recording and preprocessing of a Czech sign language corpus. The corpus is intended for training and testing of sign language recognition (SLR) systems. The UWB-07-SLR-P corpus contains video data of 4 signers recorded from 3 different perspectives. Two of the perspectives capture the whole body and provide 3D motion data; the…
In this paper we describe an HMM-based sign language recognition (SLR) system for isolated signs. In the first part we describe the image parametrization method that produces the features used for recognition. Our goal was to find the best combination of a feature space dimension reduction method and an HMM structure.
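The pipeline named in this abstract can be illustrated with a minimal, self-contained sketch: feature-space dimension reduction (PCA is used here as a stand-in) followed by one left-to-right Gaussian HMM per sign, scored with the forward algorithm. The toy data, model sizes, flat-start training, and all names below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, T = 20, 3, 12          # raw feature dim, reduced dim, frames per sign

def pca_fit(X, k):
    """Return the data mean and the top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, W):
    return (X - mu) @ W.T

def log_gauss(x, mean, var):
    """Log-density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_loglik(obs, means, vars_, logA, logpi):
    """Sequence log-likelihood under a Gaussian HMM (forward algorithm, log domain)."""
    S = len(logpi)
    alpha = logpi + np.array([log_gauss(obs[0], means[s], vars_[s]) for s in range(S)])
    for x in obs[1:]:
        emit = np.array([log_gauss(x, means[s], vars_[s]) for s in range(S)])
        a = alpha[:, None] + logA            # a[i, j] = alpha_i + log A[i, j]
        m = a.max(axis=0)
        alpha = emit + m + np.log(np.exp(a - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def fit_sign_model(seqs, n_states=2):
    """Flat-start training: segment each sequence uniformly into n_states
    chunks and fit one diagonal Gaussian per state."""
    means, vars_ = [], []
    for s in range(n_states):
        frames = np.vstack([q[s * len(q) // n_states:(s + 1) * len(q) // n_states]
                            for q in seqs])
        means.append(frames.mean(axis=0))
        vars_.append(frames.var(axis=0) + 1e-3)   # variance floor
    return np.array(means), np.array(vars_)

def classify(seq, models, logA, logpi):
    scores = {name: forward_loglik(seq, m, v, logA, logpi)
              for name, (m, v) in models.items()}
    return max(scores, key=scores.get)

def make_seq(start, end, noise=0.3):
    """Toy 'sign': a noisy linear trajectory between two hand configurations."""
    t = np.linspace(0, 1, T)[:, None]
    return (1 - t) * start + t * end + noise * rng.standard_normal((T, D))

# Two synthetic signs, five training sequences each.
protos = {name: (3 * rng.standard_normal(D), 3 * rng.standard_normal(D))
          for name in ("signA", "signB")}
train = {name: [make_seq(s, e) for _ in range(5)] for name, (s, e) in protos.items()}

mu, W = pca_fit(np.vstack([q for seqs in train.values() for q in seqs]), K)
models = {name: fit_sign_model([pca_transform(q, mu, W) for q in seqs])
          for name, seqs in train.items()}

# Two-state left-to-right topology: start in state 0, no backward transitions.
logA = np.array([[np.log(0.9), np.log(0.1)], [-np.inf, 0.0]])
logpi = np.array([0.0, -np.inf])

pred_a = classify(pca_transform(make_seq(*protos["signA"]), mu, W), models, logA, logpi)
pred_b = classify(pca_transform(make_seq(*protos["signB"]), mu, W), models, logA, logpi)
print(pred_a, pred_b)
```

Swapping the PCA step for another projection, or changing the number of HMM states, only touches `pca_fit`/`pca_transform` and `fit_sign_model`, which mirrors the kind of combination search the abstract describes.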
We describe the design, recording and content of a Czech Sign Language database in this paper. The database is intended for training and testing of sign language recognition (SLR) systems. The UWB-06-SLR-A database contains video data of 15 signers recorded from 3 different views; two of them capture the whole body and provide 3D motion data, and the third one is…
This paper deals with a novel automatic categorization of signs used in sign language dictionaries. The categorization provides additional information about lexical signs presented in the form of video files. We design a new method for automatic parameterization of these video files and for categorization of the signs from the extracted information. The method…
This paper presents the design of a multimodal sign-language-enabled dialogue system. Its functionality was tested on a prototype of an information kiosk for deaf people providing information about train connections. We use automatic computer-vision-based sign language recognition, automatic speech recognition, and a touchscreen as input modalities. The…