Corpus ID: 39237239

QLIBRAS: A novel database for grammatical facial expressions in Brazilian Sign Language

@inproceedings{Silva2017QLIBRASA,
  title={QLIBRAS: A novel database for grammatical facial expressions in Brazilian Sign Language},
  author={Emely Puj{\'o}lli da Silva and Paula Dornhofer Paro Costa},
  year={2017}
}
Individuals with some degree of hearing impairment typically face difficulties in communicating with hearing individuals and during the acquisition of reading and writing skills. Sign language (SL) is a language structured in gestures that, like any other human language, presents variations around the world and that is widely adopted by the deaf. Automatic Sign Language Recognition (ASLR) technology aims to translate sign language gestures into written or spoken sentences of a target language with…

Citations

Recognition of Affective and Grammatical Facial Expressions: A Study for Brazilian Sign Language
An approach to facial expression recognition for sign language based on the Facial Action Coding System (FACS) is presented; two convolutional neural networks, a standard CNN and a hybrid CNN+LSTM, are evaluated for AU recognition.
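As a rough illustration of the hybrid CNN+LSTM idea described above (per-frame convolutional features followed by temporal modelling for multi-label AU prediction), the following Keras sketch may help; the input shape, layer sizes, and number of AUs are assumptions for illustration, not the cited paper's architecture.

# Minimal sketch of a hybrid CNN+LSTM for per-clip facial Action Unit (AU) recognition.
# Input shape, layer sizes, and the number of AUs are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_AUS = 12                        # assumed number of AUs (multi-label targets)
FRAMES, H, W, C = 16, 64, 64, 1     # assumed clip length and frame size

# Per-frame CNN feature extractor, applied to every frame via TimeDistributed.
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
])

model = models.Sequential([
    layers.TimeDistributed(frame_cnn, input_shape=(FRAMES, H, W, C)),
    layers.LSTM(64),                                  # temporal modelling over the clip
    layers.Dense(NUM_AUS, activation="sigmoid"),      # multi-label AU predictions
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.summary()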
Towards a Tool to Translate Brazilian Sign Language (Libras) to Brazilian Portuguese and Improve Communication with Deaf
This work aims to help deaf people access the customer services of a large computer manufacturer by providing a way for them to communicate in Libras while the call center attendant receives a translation in Brazilian Portuguese.

References

SHOWING 1-10 OF 15 REFERENCES
Recognition of Non-Manual Expressions in Brazilian Sign Language
Individuals with some degree of hearing impairment typically face difficulties in communicating with hearing individuals and during the acquisition of reading and writing skills. Sign language (SL)…
A survey on mouth modeling and analysis for Sign Language recognition
The first survey on mouth non-manuals in ASLR is presented, showing why mouth motion is important in SL, reviewing the relevant techniques that exist within ASLR, and surveying techniques from the areas of automatic mouth expression and visual speech recognition that can be applied to the task.
Grammatical Facial Expressions Recognition with Machine Learning
The recognition of grammatical facial expressions in Brazilian Sign Language is outlined: nine types of GFEs were captured using a Kinect sensor, a spatial-temporal data representation was designed, the research question was modeled as a set of binary classification problems, and a machine learning technique was employed.
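A minimal sketch of that framing, assuming sliding windows of facial landmark coordinates and an SVM as the per-expression binary classifier; the feature layout, window length, and classifier choice are illustrative assumptions, not the paper's exact setup.

# Sketch: one binary classifier per GFE type, trained on spatial-temporal windows
# of facial landmark coordinates. All data here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def make_windows(frames, labels, win=5):
    """Stack `win` consecutive frames of landmark coordinates into one feature vector."""
    X, y = [], []
    for i in range(len(frames) - win + 1):
        X.append(np.concatenate(frames[i:i + win]))
        y.append(labels[i + win - 1])   # label of the last frame in the window
    return np.array(X), np.array(y)

# Toy data: 200 frames, 100 facial points with (x, y, z) each -> 300 values per frame.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 300))
labels = rng.integers(0, 2, size=200)   # 1 = target GFE present in this frame

X, y = make_windows(list(frames), list(labels), win=5)
clf = SVC(kernel="rbf")                 # one binary classifier per GFE type
print(cross_val_score(clf, X, y, cv=5).mean())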
Sign Language Recognition Model Combining Non-manual Markers and Handshapes
A sign language recognition model is presented that takes advantage of natural user interfaces (NUI) and a classification algorithm (support vector machines) to enhance sign language expressivity recognition.
Brazilian Sign Language Recognition Using Kinect
A method to recognize Brazilian Sign Language (Libras) using Kinect with a dynamic time warping–nearest neighbor (DTW-kNN) classifier is presented; a leave-one-out cross-validation strategy reported outstanding results.
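A toy sketch of the DTW-kNN idea with leave-one-out cross-validation follows; the DTW is a plain O(n·m) implementation over 1-D toy trajectories, and nothing here reproduces the cited paper's Kinect feature extraction.

# Sketch of a DTW + 1-nearest-neighbour classifier evaluated with leave-one-out CV.
# Sequence contents and lengths are toy assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneOut

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy dataset: each sample is a variable-length 1-D trajectory with a sign label.
rng = np.random.default_rng(1)
sequences = [rng.normal(loc=k % 2, size=rng.integers(20, 40)) for k in range(10)]
labels = np.array([k % 2 for k in range(10)])

correct = 0
for train_idx, test_idx in LeaveOneOut().split(sequences):
    t = test_idx[0]
    # 1-NN: predict the label of the training sequence with the smallest DTW distance.
    dists = [dtw_distance(sequences[t], sequences[i]) for i in train_idx]
    pred = labels[train_idx[int(np.argmin(dists))]]
    correct += int(pred == labels[t])
print("LOOCV accuracy:", correct / len(sequences))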
Sign Language Recognition
This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a precis of sign linguistics and their…
Reconhecimento automático de expressões faciais gramaticais na língua brasileira de sinais
FREITAS, Fernando de Almeida. Automatic recognition of Grammatical Facial Expressions from Brazilian Sign Language (Libras). 2015. 112 p. Dissertation (Master of Science) – School of Arts, Sciences…
Robust sign language recognition by combining manual and non-manual features based on conditional random field and support vector machine
A new method for recognizing manual signals and facial expressions as non-manual signals is proposed; it can accurately recognize sign language at an 84% rate based on utterance data.
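A hedged sketch of the manual/non-manual fusion idea follows; for brevity it replaces the paper's CRF sequence model with a second frame-level SVM and simply averages the two streams' class probabilities, and all features and labels are synthetic placeholders.

# Sketch of late fusion of manual and non-manual cues for sign recognition.
# The CRF over manual features used in the cited paper is deliberately simplified
# to a frame-level classifier here; data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 300
X_manual = rng.normal(size=(n, 20))   # e.g. hand shape / trajectory features (assumed)
X_face = rng.normal(size=(n, 10))     # e.g. facial expression features (assumed)
y = rng.integers(0, 2, size=n)        # toy binary sign labels

manual_clf = SVC(probability=True).fit(X_manual[:200], y[:200])
face_clf = SVC(probability=True).fit(X_face[:200], y[:200])

# Late fusion: average the two streams' class probabilities, then take the argmax.
p = (manual_clf.predict_proba(X_manual[200:]) + face_clf.predict_proba(X_face[200:])) / 2
pred = p.argmax(axis=1)
print("fused accuracy:", (pred == y[200:]).mean())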
American Sign Language: The Phonological Base
This paper has the ambitious goal of outlining the phonological structures and processes we have analyzed in American Sign Language (ASL). In order to do this we have divided the paper into five…
Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first…
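As a loose illustration of HMM-based sign recognition (not the authors' continuous, sentence-level system), the sketch below trains one Gaussian HMM per sign class with hmmlearn and classifies a test trajectory by maximum log-likelihood; the feature dimensions and data are toy assumptions.

# Sketch: isolated-sign classification with one Gaussian HMM per sign class,
# choosing the class whose model gives the highest log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)

def toy_sequences(mean, n_seq=20, length=30, dim=4):
    """Generate toy hand-feature trajectories (e.g. x, y, dx, dy) around `mean`."""
    return [rng.normal(loc=mean, size=(length, dim)) for _ in range(n_seq)]

train = {"sign_A": toy_sequences(0.0), "sign_B": toy_sequences(1.0)}

models = {}
for sign, seqs in train.items():
    X = np.concatenate(seqs)             # hmmlearn expects stacked frames
    lengths = [len(s) for s in seqs]     # plus per-sequence lengths
    models[sign] = GaussianHMM(n_components=3, covariance_type="diag",
                               n_iter=20).fit(X, lengths)

test_seq = rng.normal(loc=1.0, size=(30, 4))  # unseen trajectory, should match sign_B
scores = {sign: m.score(test_seq) for sign, m in models.items()}
print(max(scores, key=scores.get))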