Corpus ID: 234742622

Data Augmentation for Sign Language Gloss Translation

@inproceedings{Moryossef2021DataAF,
  title={Data Augmentation for Sign Language Gloss Translation},
  author={Amit Moryossef and Kayo Yin and Graham Neubig and Yoav Goldberg},
  booktitle={MTSUMMIT},
  year={2021}
}
Sign language translation (SLT) is often decomposed into video-to-gloss recognition and gloss-to-text translation, where a gloss is a sequence of transcribed spoken-language words in the order in which they are signed. We focus here on gloss-to-text translation, which we treat as a low-resource neural machine translation (NMT) problem. However, gloss-to-text translation differs from traditional low-resource NMT in that gloss-text pairs often have a higher lexical overlap and lower syntactic…
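The high lexical overlap between gloss-text pairs mentioned in the abstract can be made concrete with a small sketch. The example pair below is illustrative, not taken from the paper's data; the tokenization (whitespace, case-folding) is a simplifying assumption:

```python
# Minimal sketch: gloss-to-text pairs tend to share many surface tokens,
# unlike ordinary bilingual pairs, which is part of what makes
# copying- and back-translation-style augmentation attractive here.

def lexical_overlap(gloss: str, text: str) -> float:
    """Fraction of gloss tokens that also appear in the text (case-folded)."""
    gloss_tokens = set(gloss.lower().split())
    text_tokens = set(text.lower().split())
    return len(gloss_tokens & text_tokens) / len(gloss_tokens)

gloss = "MORGEN REGEN NORD"              # hypothetical DGS-style gloss sequence
text = "morgen gibt es im norden regen"  # hypothetical German translation
print(f"overlap = {lexical_overlap(gloss, text):.2f}")  # → overlap = 0.67
```

Two of the three gloss tokens (`MORGEN`, `REGEN`) occur verbatim in the spoken-language sentence; a typical bilingual pair would share almost none.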

Citations

Using Neural Machine Translation Methods for Sign Language Translation
TLDR
This work is among the first to include experiments on both parallel corpora of German Sign Language (PHOENIX14T and the Public DGS Corpus), experimenting with two NMT architectures with optimized hyperparameters, several tokenization methods, and two data augmentation techniques (back-translation and paraphrasing).
Syntax-aware Transformers for Neural Machine Translation: The Case of Text to Sign Gloss Translation
TLDR
This paper enriches a Transformer-based architecture by aggregating syntactic information, extracted from a dependency parser, into the word embeddings; the resulting syntax-aware model, tested on a well-known dataset, obtains performance gains in terms of MT evaluation metrics.
Machine Translation from Signed to Spoken Languages: State of the Art and Challenges
TLDR
A high-level introduction to sign language linguistics and machine translation is given and a systematic literature review is presented to illustrate the state of the art in the domain and several challenges for future research are laid out.
Modeling Intensification for Sign Language Generation: A Computational Approach
TLDR
This paper aims to improve the prosody in generated sign languages by modeling intensification in a data-driven manner, and presents different strategies grounded in linguistics of sign language that inform how intensity modifiers can be represented in gloss annotations.
Signing at Scale: Learning to Co-Articulate Signs for Large-Scale Photo-Realistic Sign Language Production
TLDR
This work proposes a novel Frame Selection Network (FS-Net) that improves the temporal alignment of interpolated dictionary signs to continuous signing sequences, and proposes SignGAN, a pose-conditioned human synthesis model that produces photo-realistic sign language videos directly from skeleton pose.
Including Signed Languages in Natural Language Processing
TLDR
This position paper calls on the NLP community to include signed languages as a research area with high social and scientific impact and urges the adoption of an efficient tokenization method, development of linguistically-informed models, and the inclusion of local signed language communities as an active and leading voice in the direction of research.
Explore More Guidance: A Task-aware Instruction Network for Sign Language Translation Enhanced with Data Augmentation
TLDR
This work proposes a task-aware instruction network, TIN-SLT, for sign language translation, introducing an instruction module and a learning-based feature-fusion strategy into a Transformer network; it outperforms the former best solutions on two challenging benchmark datasets.
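Several of the cited works above, like the paper itself, rely on back-translation-style data augmentation: monolingual spoken-language text is converted into pseudo-glosses to create extra parallel training pairs. The sketch below is a hedged illustration of that idea; the rule set and the function-word list are illustrative assumptions, not the paper's actual rules:

```python
# Hedged sketch of back-translation-style augmentation for gloss-to-text
# NMT: turn monolingual spoken-language sentences into pseudo-glosses via
# simple rules (drop function words, uppercase remaining tokens), then
# pair each pseudo-gloss with its source sentence as synthetic training data.

FUNCTION_WORDS = {"the", "a", "an", "is", "are", "of", "to", "in"}  # illustrative

def text_to_pseudo_gloss(sentence: str) -> str:
    """Rule-based pseudo-gloss: drop function words, uppercase the rest."""
    tokens = [t for t in sentence.lower().split() if t not in FUNCTION_WORDS]
    return " ".join(t.upper() for t in tokens)

def augment(monolingual_sentences):
    """Build (pseudo_gloss, text) pairs usable as extra NMT training data."""
    return [(text_to_pseudo_gloss(s), s) for s in monolingual_sentences]

pairs = augment(["the weather is nice today"])
print(pairs)  # → [('WEATHER NICE TODAY', 'the weather is nice today')]
```

Because the gloss side is synthetic while the text side is genuine, the model's decoder still trains on fluent target-language output, which is the core intuition behind back-translation.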

References

Showing 1-10 of 38 references
Neural Sign Language Translation
TLDR
This work formalizes SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge), allowing the spatial representations, the underlying language model, and the mapping between sign and spoken language to be learned jointly.
Copied Monolingual Data Improves Low-Resource Neural Machine Translation
We train a neural machine translation (NMT) system to both translate source-language text and copy target-language text, thereby exploiting monolingual corpora in the target language. Specifically, we…
Attention is All You Sign: Sign Language Translation with Transformers
TLDR
The findings reveal that end-to-end translation with predicted glosses outperforms translation on GT glosses and shows the potential for further improvement in SLT by either jointly training the SLR and translation systems or by revising the gloss annotation scheme.
Sign Language Transformers: Joint End-to-End Sign Language Recognition and Translation
TLDR
A novel transformer-based architecture that jointly learns Continuous Sign Language Recognition and Translation while being trainable in an end-to-end manner is introduced, using a Connectionist Temporal Classification (CTC) loss to bind the recognition and translation problems into a single unified architecture.
Neural Machine Translation of Rare Words with Subword Units
TLDR
This paper introduces a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units, and empirically shows that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.3 BLEU.
Understanding Back-Translation at Scale
TLDR
This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences, finding that in all but resource-poor settings, back-translations obtained via sampling or noised beam outputs are most effective.
Handling Syntactic Divergence in Low-resource Machine Translation
TLDR
This paper proposes a simple yet effective solution, whereby target-language sentences are re-ordered to match the order of the source and used as an additional source of training-time supervision in neural machine translation.
Improving Neural Machine Translation Models with Monolingual Data
TLDR
This work pairs monolingual training data with an automatic back-translation, which can then be treated as additional parallel training data, and obtains substantial improvements on the WMT 15 English-German task and the low-resource IWSLT 14 Turkish-English task.
Combining Bilingual and Comparable Corpora for Low Resource Machine Translation
TLDR
This work improves coverage by using bilingual lexicon induction techniques to learn new translations from comparable corpora, and supplements the model's feature space with translation scores estimated over comparable corpora in order to improve accuracy.
English-ASL Gloss Parallel Corpus 2012: ASLG-PC12
TLDR
A novel algorithm is presented that transforms an English part-of-speech sentence into an ASL gloss and generates ASL sentences from the Project Gutenberg corpus, which contains only written English texts.