DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

@article{Fang2017DeepASLEU,
  title={DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation},
  author={Biyi Fang and Jillian Co and Mi Zhang},
  journal={Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems},
  year={2017}
}
  • Biyi Fang, Jillian Co, Mi Zhang
  • Published 6 November 2017
  • Computer Science
  • Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems
There is an undeniable communication barrier between deaf people and people with normal hearing ability. […] It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we have collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL…
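
As a rough illustration of how the two named components fit together, here is a minimal PyTorch sketch of a hierarchical bidirectional RNN trained with a CTC loss. The two-hand input split and the 56-word vocabulary follow the abstract; the feature dimension, hidden sizes, and layer layout are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class HBRNN(nn.Module):
    """Minimal sketch of a hierarchical bidirectional RNN: per-hand
    BiLSTMs feed a fusion BiLSTM whose outputs are projected to
    per-frame word scores for CTC. All sizes are illustrative."""
    def __init__(self, feat_dim=30, hidden=64, vocab=56):
        super().__init__()
        self.left = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.right = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fuse = nn.LSTM(4 * hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, vocab + 1)  # +1 for the CTC blank

    def forward(self, left_feats, right_feats):       # (B, T, feat_dim) each
        l, _ = self.left(left_feats)
        r, _ = self.right(right_feats)
        f, _ = self.fuse(torch.cat([l, r], dim=-1))
        return self.head(f)                           # (B, T, vocab + 1)

model = HBRNN()
logits = model(torch.randn(2, 50, 30), torch.randn(2, 50, 30))
log_probs = logits.log_softmax(-1).transpose(0, 1)    # (T, B, C) for CTCLoss
targets = torch.randint(1, 57, (2, 5))                # two 5-word label sequences
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((2,), 50), torch.full((2,), 5))
```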

MyoSign: enabling end-to-end sign language recognition with wearables

MyoSign is presented, a deep-learning-based system that enables end-to-end American Sign Language (ASL) recognition at both word and sentence levels; it uses a lightweight wearable device that provides inertial and electromyography signals to capture signs non-intrusively.

Deep Learning Methods for Sign Language Translation

From the analysis, the transformer model combined with input embeddings from ResNet50 or pose-based landmark features outperformed all the other sequence-to-sequence models by achieving higher BLEU2-BLEU4 scores when applied to the controlled and constrained GSL benchmark dataset.

Word-level Sign Language Recognition Using Linguistic Adaptation of 77 GHz FMCW Radar Data

This paper investigates the efficacy of RF sensors for word-level ASL recognition in support of human-computer interfaces designed for deaf or hard-of-hearing individuals and adversarial domain adaptation results are compared with those attained by directly synthesizing ASL signs using generative adversarial networks (GANs).

Achieving Real-Time Sign Language Translation Using a Smartphone's True Depth Images

This paper describes preliminary efforts in designing a mobile-device-based sign language translation system using depth-only images; it performs image processing on the smartphone-collected depth images to emphasize the subject's hand and upper-body gestures and exploits a convolutional neural network for feature extraction.

GASLA: Enhancing the Applicability of Sign Language Translation

With GASLA, sentence-level sensing data can be generated automatically from word-level data and then used to train ASL systems, making such systems highly lightweight in both initial setup and future new-sentence addition.

WearSign: Pushing the Limit of Sign Language Translation Using Inertial and EMG Wearables

This paper approaches SLT as a spatio-temporal machine translation task and proposes a wearable-based system, WearSign, to enable direct translation from the sign-induced sensory signals into spoken texts and includes the synthetic pairs into the training process, which enables the network to learn better sequence-to-sequence mapping.

Sentence-Level Sign Language Recognition Framework

Two solutions to sentence-level SLR are presented; Connectionist Temporal Classification (CTC) is used as the classifier layer of both models to avoid pre-segmenting the sentences into individual words.
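
CTC avoids pre-segmentation because its decoding step collapses a per-frame label stream into a word sequence on its own. A minimal sketch of greedy CTC decoding, with hypothetical label IDs:

```python
def ctc_collapse(frame_ids, blank=0):
    """Greedy CTC decoding: merge repeated frame labels, then drop
    blanks, so no word boundaries need to be marked in advance."""
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out

print(ctc_collapse([0, 3, 3, 0, 0, 5, 5, 5, 0, 3]))  # [3, 5, 3]
```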

Continuous sign language recognition using isolated signs data and deep transfer learning

Based on the hypothesis that generic features learned from isolated signs will enhance the classification of continuous sentence sign data, a novel transfer learning framework is proposed, wherein the last few layers of the pre-trained network are retrained using a limited amount of labelled sentence data.
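
A minimal sketch of that retraining pattern; the torchvision ResNet-18 with ImageNet weights is a stand-in for the paper's sign-pretrained network, and the 100-class output head is hypothetical:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stand-in for a network pre-trained on isolated signs.
model = resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():     # freeze the generic feature layers
    param.requires_grad = False

# Replace the final layer and retrain only it on the limited labelled
# sentence data (here: a hypothetical 100 sentence classes).
model.fc = nn.Linear(model.fc.in_features, 100)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```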

American Sign Language Translation Using Wearable Inertial and Electromyography Sensors for Tracking Hand Movements and Facial Expressions

A novel American Sign Language (ASL) translation method based on wearable sensors is proposed, which leverages inertial sensors to capture signs and surface electromyography sensors to detect facial expressions, extracting features from both input signals.

Application of Transfer Learning to Sign Language Recognition using an Inflated 3D Deep Convolutional Neural Network

How effectively transfer learning can be applied to isolated SLR is investigated using an inflated 3D convolutional neural network as the deep learning architecture; the accuracy of the networks applying transfer learning increased substantially, by up to 21%, compared to baseline models that were not pre-trained on the MS-ASL dataset.
...

Personalized speech recognition on mobile devices

We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5.

LipNet: Sentence-level Lipreading

To the best of our knowledge, LipNet is the first lipreading model to operate at sentence level, using a single end-to-end speaker-independent deep model to simultaneously learn spatiotemporal visual features and a sequence model.

LipNet: End-to-End Sentence-level Lipreading

This work presents LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end.
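
A rough sketch of that pipeline shape (3D convolutions over video, a recurrent layer over time, per-frame CTC logits); the channel counts, kernel sizes, and 28-character vocabulary here are illustrative, not LipNet's published configuration:

```python
import torch
import torch.nn as nn

class LipNetSketch(nn.Module):
    """Rough shape of a LipNet-style pipeline: spatiotemporal convs
    over (T, H, W) video, a recurrent layer over time, then per-frame
    logits suitable for a CTC loss. All sizes are illustrative."""
    def __init__(self, vocab=28):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        self.gru = nn.GRU(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, vocab + 1)   # +1 for the CTC blank

    def forward(self, video):                   # (B, 3, T, H, W)
        x = self.conv(video)                    # (B, 32, T, H/2, W/2)
        x = x.mean(dim=(3, 4)).transpose(1, 2)  # pool space -> (B, T, 32)
        x, _ = self.gru(x)
        return self.head(x)                     # (B, T, vocab + 1)

out = LipNetSketch()(torch.randn(1, 3, 20, 50, 100))
```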

Recent developments in visual sign language recognition

A comprehensive concept for robust visual sign language recognition is described, which represents the recent developments in this field and aims for signer-independent operation and utilizes a single video camera for data acquisition to ensure user-friendliness.

Speech recognition with deep recurrent neural networks

This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs.

Sequence to Sequence Learning with Neural Networks

This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
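
The reversal trick is a pure preprocessing step applied to the source side only; a tiny sketch with hypothetical token sequences:

```python
def reverse_source(batch):
    """Reverse each source sequence (targets untouched), as in
    Sutskever et al.: the first source words end up close to the first
    target words, introducing short-term dependencies."""
    return [(list(reversed(src)), tgt) for src, tgt in batch]

pairs = [(["je", "suis", "etudiant"], ["i", "am", "a", "student"])]
print(reverse_source(pairs))
# [(['etudiant', 'suis', 'je'], ['i', 'am', 'a', 'student'])]
```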

A novel approach to American Sign Language (ASL) phrase verification using reversed signing

The results show that for the new method the alignment selected for signs in a test phrase has a significantly better match to the ground truth when compared to the traditional approach.

Glove-based hand gesture recognition sign language translator using capacitive touch sensor

A gesture recognition glove based on charge-transfer touch sensors for the translation of the American Sign Language is presented, expected to bridge the communication gap between the hearing and speech impaired and members of the general public.

A Sign-Component-Based Framework for Chinese Sign Language Recognition Using Accelerometer and sEMG Data

A framework is presented for automatic Chinese SLR at the component level, where three basic components of sign subwords, namely hand shape, orientation, and movement, are modeled and the corresponding component classifiers are learned.

Sign Language Recognition and Translation with Kinect

This demo shows primary efforts on sign language recognition and translation with Kinect; the 3D motion trajectory of each sign language word is aligned and matched between probe and gallery to obtain the recognition result.