Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

@article{Parton2006SignLR,
  title={Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.},
  author={Becky Sue Parton},
  journal={Journal of Deaf Studies and Deaf Education},
  year={2006},
  volume={11},
  number={1},
  pages={94-101}
}
  • B. Parton
  • Published 28 September 2005
  • Computer Science
  • Journal of Deaf Studies and Deaf Education
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network… 
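The survey above repeatedly points to hidden Markov model (HMM) based recognizers. As a hedged illustration of that technique, the sketch below trains one Gaussian HMM per sign and classifies an unseen feature sequence by maximum likelihood; the hmmlearn library, the feature layout, and all names are assumptions for exposition, not any surveyed system's code.

```python
# Minimal sketch of HMM-based isolated sign recognition. Library (hmmlearn)
# and all names are illustrative assumptions, not the paper's implementation.
import numpy as np
from hmmlearn import hmm

def train_sign_models(training_data, n_states=5):
    """Fit one Gaussian HMM per sign class.

    training_data: dict mapping sign label -> list of feature sequences,
    each an (n_frames, n_features) array (e.g. glove sensor readings or
    tracked hand coordinates per video frame).
    """
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                   # stack frames of all examples
        lengths = [len(seq) for seq in sequences]  # per-sequence frame counts
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    """Label an unseen sequence by the model with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```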
Citations

An automatic machine translation system for multi-lingual speech to Indian sign language
TLDR
Usability testing based on survey results confirms that the proposed SISLA system is suitable for education as well as communication purposes for hearing-impaired people.
CBIR approach to the recognition of a sign language alphabet
TLDR
The paper describes both the methodology used for building up the database of image samples and an experimental study of the noise tolerance of the available CBIR method, establishing the applicability of the proposed approach.
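To make the CBIR idea concrete, here is a minimal sketch of histogram-based retrieval against an alphabet database; the grayscale-histogram feature and histogram-intersection similarity are assumptions chosen for simplicity and may differ from the paper's actual method.

```python
# Toy content-based image retrieval (CBIR) matcher for alphabet signs:
# compare intensity-histogram features and return the best database match.
import numpy as np

def histogram_feature(image, bins=32):
    """image: 2-D uint8 array; returns a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def retrieve(query_image, database):
    """database: dict mapping letter label -> feature vector."""
    q = histogram_feature(query_image)
    # histogram intersection: higher overlap = better match
    return max(database, key=lambda label: np.minimum(q, database[label]).sum())
```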
Sequence-to-Sequence Natural Language to Humanoid Robot Sign Language
TLDR
A study of natural-language-to-sign-language translation for human-robot interaction applications; neural networks are chosen as a data-driven approach to avoid the limitations of traditional expert systems and of temporal dependencies, which lead to limited or overly complex translation systems.
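A hedged sketch of the encoder-decoder architecture this entry refers to: a GRU encoder reads a tokenized English sentence and a GRU decoder emits sign-gloss tokens. PyTorch, the layer sizes, and the vocabulary sizes are assumptions; the cited network is not reproduced here.

```python
# Minimal sequence-to-sequence skeleton: English tokens in, sign glosses out.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_tokens, tgt_tokens):
        _, state = self.encoder(self.src_emb(src_tokens))   # summarize source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), state)
        return self.out(dec_out)                            # logits per gloss

# e.g. model = Seq2Seq(src_vocab=5000, tgt_vocab=1200)  # hypothetical sizes
```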
Selfie Sign Language Recognition with Multiple Features on AdaBoost Multilabel Multiclass Classifier
TLDR
The objective is to take sign language recognition toward real-time mobile implementation as a communication link between hearing-impaired and hearing people; to decrease the computations per frame, the algorithms are developed for mobile platforms.
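As an illustration of boosted multiclass classification of the kind this entry names, the sketch below fits scikit-learn's AdaBoostClassifier to placeholder per-frame feature vectors; the feature dimensionality, class count, and synthetic data are assumptions, not the paper's pipeline.

```python
# Hedged sketch: AdaBoost multiclass classifier over per-frame hand features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))     # stand-in for 24-D shape/texture features
y = rng.integers(0, 10, size=300)  # stand-in labels for 10 sign classes

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```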
Bangla Sign Language Recognition and Sentence Building Using Deep Learning
TLDR
This research uses a convolutional neural network (CNN) to train on each individual sign in Bangla Sign Language and aims to create a multimodal system for recognizing Bangla signs.
Reconstruction of Convolutional Neural Network for Sign Language Recognition
TLDR
The proposed system outperformed other published results in a comparative analysis and is hence recommended for further exploitation in sign language recognition problems.
Selfie Sign Language Recognition with Convolutional Neural Networks
TLDR
This paper proposes the recognition of Indian sign language gestures using a powerful artificial intelligence tool, convolutional neural networks (CNNs), and achieves a 92.88% recognition rate compared to other classifier models reported on the same dataset.
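A compact CNN classifier in the spirit of the entries above, sketched in PyTorch; the input resolution (64x64 grayscale), layer sizes, and class count are assumptions, not the published architecture.

```python
# Hedged sketch of a small CNN for sign-image classification.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, n_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                 # x: (batch, 1, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = SignCNN()(torch.randn(8, 1, 64, 64))  # -> (8, 24)
```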
Special Characters of Vietnamese Sign Language Recognition System Based on Virtual Reality Glove
TLDR
A method for recognizing numbers and special characters of Vietnamese sign language is introduced, using a glove-based gesture recognition system and a dynamic time warping algorithm.
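Dynamic time warping (DTW), the matching algorithm this entry names, aligns two variable-length sensor sequences and scores their similarity. A self-contained NumPy version follows; per-frame glove-sensor feature vectors are assumed.

```python
# Dynamic time warping between two (n_frames, n_sensors) sequences.
import numpy as np

def dtw_distance(a, b):
    """Smaller alignment cost = better match between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Classify a gesture as the template with the smallest DTW distance:
# label = min(templates, key=lambda k: dtw_distance(sample, templates[k]))
```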
A depth-based Indian Sign Language recognition using Microsoft Kinect
TLDR
An efficient algorithm for translating the input hand gesture in Indian Sign Language (ISL) into meaningful English text and speech is introduced.
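Depth data is what makes Kinect-based segmentation simple: assuming the signing hand is the object closest to the sensor, keeping only pixels within a small depth band of the nearest reading isolates it. The band width and frame format below are assumptions for illustration.

```python
# Hedged sketch of depth-based hand segmentation from a Kinect-style frame.
import numpy as np

def segment_hand(depth_mm, band_mm=100):
    """depth_mm: (H, W) array of depths in millimetres (0 = no reading).
    Returns a boolean mask of the nearest surface, assumed to be the hand."""
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()
    return valid & (depth_mm <= nearest + band_mm)
```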
Robust Sign Language Recognition with Hierarchical Conditional Random Fields
TLDR
A novel method for spotting signs and fingerspellings is proposed, which can distinguish signs, fingerspelling, and nonsign patterns through a hierarchical framework consisting of three steps.
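CRF-based spotting ultimately labels every frame (sign, fingerspelling, or non-sign) by decoding the most likely label path. The sketch below shows generic Viterbi decoding over per-frame scores and a transition matrix; the cited model's hierarchical structure is not reproduced.

```python
# Generic Viterbi decoder for frame-level sequence labeling.
import numpy as np

def viterbi(emission, transition):
    """emission: (T, K) per-frame label scores; transition: (K, K).
    Returns the highest-scoring label sequence of length T."""
    T, K = emission.shape
    score = emission[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition       # (prev, next) path scores
        back[t] = cand.argmax(axis=0)            # best predecessor per label
        score = cand.max(axis=0) + emission[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                # trace best path backwards
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```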

References

Showing 1-10 of 71 references
A new instrumented approach for translating American Sign Language into sound and text
TLDR
The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprising 30 one-handed signs, achieving 98% accuracy, which represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs).
Animating Sign Language in the Real Time
TLDR
The paper presents selected problems in visualizing animated sign language sentences in real time, as part of a system for translating texts into sign language using Szczepankowski's gestographic notation.
Gesture recognition for virtual reality applications using data gloves and neural networks
  • J. Weissmann, R. Salomon
  • Computer Science
    IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
  • 1999
TLDR
A data glove is explored as the input device for using hand gestures as a means of human-computer interaction in virtual reality applications, and the performance of different neural network models, such as backpropagation and radial-basis-function networks, is compared.
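As a hedged stand-in for the networks compared in this reference, the sketch below trains a feedforward (backpropagation) classifier on raw glove-sensor vectors with scikit-learn; the sensor count, class count, and synthetic data are assumptions.

```python
# Hedged sketch: feedforward network over data-glove sensor readings.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 18))     # e.g. 18 bend/orientation sensor readings
y = rng.integers(0, 12, size=500)  # stand-in labels for 12 gesture classes

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```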
Real-time gesture recognition using deterministic boosting
TLDR
A gesture recognition system is described which can reliably recognize single-hand gestures in real time on a 600 MHz notebook computer; it is demonstrated controlling a windowed operating system, editing a document, and performing file-system operations with extremely low error rates over long time periods.
Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first…
Robust Face Detection and Japanese Sign Language Hand Posture Recognition for Human-Computer Interaction in an “Intelligent” Room
TLDR
A system for the detection of human faces and the classification of Japanese Sign Language hand postures in color images inside an “intelligent” room is presented, contributing to the implementation of meaningful human-machine interactions in a room that is in the process of being established, the “percept-room”, mainly for welfare applications.
A machine translation system from English to American Sign Language
TLDR
This paper prototypes a machine translation system from English to American Sign Language (ASL), taking into account not only linguistic but also visual and spatial information associated with ASL signs.
An Image Processing Technique for the Translation of ASL Finger-Spelling to Digital Audio and Text
TLDR
This work is phase one of a broader project, The Sign2 Project, that is focused on a complete technological approach to the translation of ASL to digital audio and/or text.
A Tutor for Teaching English as a Second Language for Deaf Users of American Sign Language
TLDR
The particular difficulties faced by deaf writers learning English are addressed, and a system is created that can accept input via an essay written by a user, analyze that essay for errors, and then engage the user in a tutorial dialogue aimed at improving his or her overall literacy.
Finding Relevant Image Content for mobile Sign Language Recognition
TLDR
The problem of finding relevant information in single-view image sequences is tackled by using a modified generic skin color model combined with pixel-level motion information obtained from motion history images.
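A sketch of the combination this reference describes, skin-color masking plus a motion history image (MHI): OpenCV provides the color threshold and the MHI update is written out in NumPy. The YCrCb skin bounds and motion threshold are common heuristics assumed here, not the paper's calibrated model.

```python
# Skin-color mask + motion history image for one video frame.
import cv2
import numpy as np

SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)    # YCrCb lower bound (heuristic)
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8) # YCrCb upper bound (heuristic)

def update(frame_bgr, prev_gray, mhi, decay=0.05):
    """Return (skin_mask, new_gray, updated_mhi) for one frame."""
    skin = cv2.inRange(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb), SKIN_LO, SKIN_HI)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    moving = cv2.absdiff(gray, prev_gray) > 25       # crude per-pixel motion mask
    mhi = np.maximum(mhi - decay, 0.0)               # fade old motion
    mhi[moving & (skin > 0)] = 1.0                   # stamp fresh skin motion
    return skin, gray, mhi
```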