Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.
@article{Parton2006SignLR,
  title   = {Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence},
  author  = {Becky Sue Parton},
  journal = {Journal of Deaf Studies and Deaf Education},
  year    = {2006},
  volume  = {11},
  number  = {1},
  pages   = {94--101}
}
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network…
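The sign recognition software surveyed above is built on Hidden Markov Modeling: each sign gets its own HMM, and an observed feature sequence is assigned to the sign whose model gives it the highest likelihood. As a minimal sketch of that idea (all model names, hand-shape codes, and probabilities below are illustrative assumptions, not parameters from any cited system):

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    xs = list(xs)
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in the log domain."""
    n = len(start)
    # alpha[s] = log P(obs seen so far, current hidden state = s)
    alpha = [math.log(start[s]) + math.log(emit[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        alpha = [math.log(emit[s][o]) +
                 logsumexp(alpha[p] + math.log(trans[p][s]) for p in range(n))
                 for s in range(n)]
    return logsumexp(alpha)

# Hypothetical per-sign models over 2 hidden states and 3 hand-shape codes.
models = {
    "hello":  dict(start=[0.9, 0.1],
                   trans=[[0.7, 0.3], [0.2, 0.8]],
                   emit=[[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]),
    "thanks": dict(start=[0.5, 0.5],
                   trans=[[0.5, 0.5], [0.5, 0.5]],
                   emit=[[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]]),
}
obs = [0, 0, 2, 2]  # observed hand-shape codes for one sign production
best = max(models, key=lambda s: forward_log_likelihood(obs, **models[s]))
```

Real systems replace the discrete codes with continuous glove or vision features and train the per-sign parameters from labeled sequences; the maximum-likelihood decision rule is the same.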
61 Citations
An automatic machine translation system for multi-lingual speech to Indian sign language
- Computer ScienceMultimedia Tools and Applications
- 2021
Usability testing based on survey results confirms that the proposed SISLA system is suitable for education as well as communication purposes for hearing-impaired people.
CBIR approach to the recognition of a sign language alphabet
- Computer ScienceCompSysTech '07
- 2007
The paper describes both the methodology used for building up the database of image samples and the experimental study of the noise tolerance of the available CBIR method, to confirm the applicability of the proposed approach.
Sequence-to-Sequence Natural Language to Humanoid Robot Sign Language
- Computer ScienceArXiv
- 2019
A study on natural-language-to-sign-language translation for human-robot interaction; neural networks are selected as a data-driven approach to avoid traditional expert-system approaches and the temporal-dependency limitations that lead to limited or overly complex translation systems.
Selfie Sign Language Recognition with Multiple Features on AdaBoost Multilabel Multiclass Classifier
- Computer Science
- 2018
The objective is to take sign language recognition toward real-time mobile implementation as a communication link between hearing-impaired and hearing people; to decrease the computations per frame, the algorithms are developed for mobile platforms.
Bangla Sign Language Recognition and Sentence Building Using Deep Learning
- Computer Science2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)
- 2020
This research uses a Convolutional Neural Network (CNN) to train each individual sign in Bangla Sign Language and aims to create a multimodal system for recognising Bangla signs.
Reconstruction of Convolutional Neural Network for Sign Language Recognition
- Computer Science2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE)
- 2020
The proposed system outperformed other published results in the comparative analysis, and is hence recommended for further exploitation in sign language recognition problems.
Selfie Sign Language Recognition with Convolutional Neural Networks
- Computer ScienceInternational Journal of Intelligent Systems and Applications
- 2018
This paper proposes the recognition of Indian sign language gestures using convolutional neural networks (CNNs), a powerful artificial intelligence tool, and achieves a 92.88% recognition rate compared to other classifier models reported on the same dataset.
Special Characters of Vietnamese Sign Language Recognition System Based on Virtual Reality Glove
- Computer Science
- 2016
A method for recognising numbers and special characters of Vietnamese sign language using a glove-based gesture recognition system and a dynamic time warping algorithm is introduced.
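Dynamic time warping (DTW), as used in the glove-based entry above, scores a query sequence against stored templates while tolerating differences in signing speed, then classifies by nearest template. A minimal sketch (the sign names and glove readings below are hypothetical, not taken from the cited work):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between the first i points of a
    # and the first j points of b
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

# Nearest-template classification over hypothetical flex-sensor readings.
templates = {"one": [0.1, 0.5, 0.9, 0.9],
             "two": [0.9, 0.5, 0.1, 0.1]}
query = [0.1, 0.4, 0.8, 0.9, 0.9]  # same gesture as "one", signed slower
best = min(templates, key=lambda g: dtw_distance(query, templates[g]))
```

Because the warping path may stretch either sequence, the five-sample query still matches the four-sample "one" template closely; a plain point-by-point distance would not even be defined for the mismatched lengths.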
A depth-based Indian Sign Language recognition using Microsoft Kinect
- Computer Science
- 2020
An efficient algorithm for translating the input hand gesture in Indian Sign Language (ISL) into meaningful English text and speech is introduced.
Robust Sign Language Recognition with Hierarchical Conditional Random Fields
- Computer Science2010 20th International Conference on Pattern Recognition
- 2010
A novel method for spotting signs and fingerspellings is proposed, which can distinguish signs, fingerspelling, and nonsign patterns through a hierarchical framework consisting of three steps.
References
Showing 1-10 of 71 references
A new instrumented approach for translating American Sign Language into sound and text
- Computer ScienceSixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004. Proceedings.
- 2004
The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprising 30 one-handed signs, achieving 98% accuracy, which represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs).
Animating Sign Language in the Real Time
- Computer Science
- 2002
The paper presents selected problems of visualizing animated sign language sentences in real time, as part of a system for translating text into sign language, using Szczepankowski’s gestographic notation.
Gesture recognition for virtual reality applications using data gloves and neural networks
- Computer ScienceIJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
- 1999
A data glove is explored as the input device for the use of hand gestures as a means of human-computer interactions for virtual reality applications, and the performance of different neural network models, such as backpropagation and radial-basis functions, are compared.
Real-time gesture recognition using deterministic boosting
- Computer ScienceBMVC
- 2002
A gesture recognition system which can reliably recognize single-hand gestures in real time on a 600 MHz notebook computer is described, and is demonstrated controlling a windowed operating system, editing a document, and performing file-system operations with extremely low error rates over long time periods.
Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video
- Computer ScienceIEEE Trans. Pattern Anal. Mach. Intell.
- 1998
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first…
Robust Face Detection and Japanese Sign Language Hand Posture Recognition for Human-Computer Interaction in an “Intelligent” Room
- Computer Science
- 2002
A system for detecting human faces and classifying hand postures of the Japanese Sign Language in color images inside an “intelligent” room is presented, contributing to meaningful human-machine interactions in the “percept-room” currently being established, mainly for welfare applications.
A machine translation system from English to American Sign Language
- Computer ScienceAMTA
- 2000
This paper prototypes a machine translation system from English to American Sign Language (ASL), taking into account not only linguistic but also visual and spatial information associated with ASL signs.
An Image Processing Technique for the Translation of ASL Finger-Spelling to Digital Audio and Text
- Computer Science
- 2005
This work is phase one of a broader project, The Sign2 Project, that is focused on a complete technological approach to the translation of ASL to digital audio and/or text.
A Tutor for Teaching English as a Second Language for Deaf Users of American Sign Language
- Computer Science
- 1997
The particular difficulties faced by the deaf writer learning English are addressed, and a system is created that accepts an essay written by a user, analyzes that essay for errors, and then engages the user in tutorial dialogue aimed at improving his or her overall literacy.
Finding Relevant Image Content for mobile Sign Language Recognition
- Computer Science
- 2001
The problem of finding relevant information in single-view image sequences is tackled by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images.