Towards a one-way American sign language translator

@article{McGuire2004TowardsAO,
  title={Towards a one-way American sign language translator},
  author={R. Martin McGuire and Jos{\'e} Luis Hernandez-Rebollar and Thad Starner and Valerie L. Henderson and Helene Brashear and Danielle S. Ross},
  journal={Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004. Proceedings.},
  year={2004},
  pages={620-625}
}
Inspired by the Defense Advanced Research Projects Agency's (DARPA) previous successes in speech recognition, we introduce a new task for sign language recognition research: a mobile one-way American sign language translator. We argue that such a device should be feasible in the next few years, may provide immediate practical benefits for the deaf community, and leads to a sustainable program of research comparable to early speech recognition efforts. We ground our efforts in a particular… 

Citations

A novel approach to American Sign Language (ASL) phrase verification using reversed signing

TLDR
The results show that, with the new method, the alignment selected for signs in a test phrase matches the ground truth significantly better than with the traditional approach.

Italian Sign Language (LIS) and Natural Language Processing: an Overview

TLDR
Novel strategies for sign transcription are discussed that consider both the need for standardized writing forms to enable NLP and the language-specific features of SLs conveyed through the visual-manual channel.

American Sign Language Phrase Verification in an Educational Game for Deaf Children

TLDR
Real-time American Sign Language (ASL) phrase verification is performed for CopyCat, an educational game designed to improve deaf children's signing skills, using Hidden Markov Models (HMMs) and applying a rejection threshold to the probability of the observed sequence for each sign in the phrase.
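
As a rough illustration of the verification step summarized above, the Python sketch below applies a per-sign rejection threshold to length-normalized HMM log-likelihoods. The data structures, sign labels, threshold, and scores are hypothetical placeholders, not details of the cited system.

from dataclasses import dataclass
from typing import Dict, Sequence

@dataclass
class SignScore:
    sign: str
    log_likelihood: float  # log P(observations | HMM for this sign); dummy values below
    n_frames: int          # length of the segment aligned to this sign

def verify_phrase(scores: Sequence[SignScore], threshold: float) -> Dict[str, bool]:
    """Accept each sign whose length-normalized log-likelihood clears the threshold."""
    results = {}
    for s in scores:
        per_frame = s.log_likelihood / max(s.n_frames, 1)
        results[s.sign] = per_frame >= threshold
    return results

if __name__ == "__main__":
    # Dummy scores standing in for the output of per-sign HMMs.
    phrase = [
        SignScore("ALLIGATOR", log_likelihood=-310.0, n_frames=40),
        SignScore("BEHIND", log_likelihood=-520.0, n_frames=45),
        SignScore("WALL", log_likelihood=-260.0, n_frames=35),
    ]
    print(verify_phrase(phrase, threshold=-9.0))  # BEHIND falls below the threshold

A sign is accepted only if its per-frame log-likelihood clears the threshold, which is the essence of rejecting poorly signed words within an otherwise correct phrase.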

American Sign Language recognition system for hearing impaired people using Cartesian Genetic Programming

  • F. Ullah
  • Computer Science
  • The 5th International Conference on Automation, Robotics and Applications
  • 2011
TLDR
An ASL-based hand gesture recognition system that uses an evolutionary programming technique called Cartesian Genetic Programming (CGP) is presented, and a chat application is proposed, with a possible solution to boost recognition accuracy up to 100%.

A Computational Framework for Indian Sign Language Recognition

TLDR
This research attempted the recognition of simple ISL sentences following the Subject-Object-Verb pattern, and the combination of HOG and LBP descriptors was found promising in addressing these complexities.
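
To make the descriptor fusion concrete, the following Python sketch concatenates a HOG vector with a uniform-LBP histogram for a single grayscale frame using scikit-image; the parameter values and image size are arbitrary illustrative choices, not those reported in the cited paper.

import numpy as np
from skimage.feature import hog, local_binary_pattern

def hog_lbp_descriptor(gray: np.ndarray) -> np.ndarray:
    """Concatenate a HOG vector with a uniform-LBP histogram for one grayscale frame."""
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    # Uniform LBP with P=8 yields 10 possible labels (0-9); histogram and normalize them.
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
    return np.concatenate([hog_vec, lbp_hist])

if __name__ == "__main__":
    frame = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a cropped hand image
    print(hog_lbp_descriptor(frame).shape)

The combined vector can then be fed to any frame-level or sequence classifier; cell sizes and the LBP radius would need tuning for real hand images.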

Generating Data for Signer Adaptation

TLDR
A method of signer adaptation with little data for continuous density hidden Markov models (HMMs) is presented, and experimental results demonstrate that the proposed method performs comparably to adaptation using the original samples of 350 sign words as adaptation data.

Development in Signer-Independent Sign Language Recognition and the Ideas of Solving Some Key Problems

TLDR
This paper proposes a new research framework for signer-independent sign language recognition and provides strategies and ideas for solving the key problems.

Improving the efficacy of automated sign language practice tools

TLDR
An automatic sign language recognition system for CopyCat is created, and it is believed that the accuracy of this system can be improved by characterizing and modeling disfluencies found in the children's signing.

American sign language recognition in game development for deaf children

TLDR
This work evaluated the approach using leave-one-out validation, which iterates through each child, training on data from four children and testing on the remaining child's data; the user-independent models achieved average word accuracies per child ranging from 73.73% to 91.75%.
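
The leave-one-out protocol described above can be sketched as a small Python loop; the Dataset layout and the train and accuracy callables are placeholders for the recognizer, not the authors' implementation.

from typing import Callable, Dict, List, Tuple

Sample = Tuple[list, str]          # (feature sequence, sign label)
Dataset = Dict[str, List[Sample]]  # child id -> that child's samples

def leave_one_child_out(data: Dataset,
                        train: Callable[[List[Sample]], object],
                        accuracy: Callable[[object, List[Sample]], float]) -> Dict[str, float]:
    """Hold out one child at a time, train on the rest, and report per-child word accuracy."""
    results = {}
    for held_out, test_samples in data.items():
        train_samples = [s for child, samples in data.items()
                         if child != held_out for s in samples]
        model = train(train_samples)
        results[held_out] = accuracy(model, test_samples)
    return results

Averaging the returned per-child accuracies (for example with statistics.mean) gives an overall user-independent word accuracy of the kind reported above.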

Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning

TLDR
Data acquisition, feature extraction, and classification methods employed for the analysis of sign language gestures are examined, and the overall progress toward a true test of sign recognition systems, dealing with natural signing by native signers, is discussed.

References

A new instrumented approach for translating American Sign Language into sound and text

TLDR
The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprising 30 one-handed signs, achieving 98% accuracy, which represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs).

Speech and language processing for a constrained speech translation system

  • S. Cox
  • Computer Science, Linguistics
  • INTERSPEECH
  • 2002
TLDR
A system that provides translation from speech to sign language (TESSA) is described, and results obtained using alternative formulations of the phrases from a number of speakers are given.

A real-time continuous gesture recognition system for sign language

TLDR
A large-vocabulary sign language interpreter with real-time continuous gesture recognition using a data glove is presented, employing hidden Markov models for 51 fundamental postures, 6 orientations, and 8 motion primitives.

Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video

We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first…

The ATIS Spoken Language Systems Pilot Corpus

TLDR
This pilot marks the first full-scale attempt to collect a corpus to measure progress in Spoken Language Systems that include both a speech and natural language component and provides guidelines for future efforts.

Large vocabulary sign language recognition based on hierarchical decision trees

TLDR
A hierarchical decision tree based on the divide-and-conquer principle is first presented for large-vocabulary sign language recognition; it reduces recognition time by a factor of 11 and improves the recognition rate by about 0.95% over a single SOFM/HMM.

ASL recognition based on a coupling between HMMs and 3D motion analysis

TLDR
It is demonstrated that context-dependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.

The Reactive Keyboard

TLDR
This book describes the Reactive Keyboard, a system that greatly speeds communication by predicting the user's next response before it is made, although it does not always predict correctly.

Georgia tech gesture toolkit: supporting experiments in gesture recognition

TLDR
The Georgia Tech Gesture Toolkit (GT2k), which leverages Cambridge University's speech recognition toolkit, HTK, to provide tools supporting gesture recognition research, is introduced, and four ongoing projects that utilize the toolkit in a variety of domains are presented.

Using multiple sensors for mobile sign language recognition

We build upon a constrained, lab-based Sign Language recognition system with the goal of making it a mobile assistive technology. We examine using multiple sensors for disambiguation of noisy data to…