Tagged Back-Translation

@article{Caswell2019TaggedB,
  title={Tagged Back-Translation},
  author={Isaac Caswell and Ciprian Chelba and David Grangier},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.06442}
}
Recent work in Neural Machine Translation (NMT) has shown significant quality gains from noised-beam decoding during back-translation, a method to generate synthetic parallel data. We propose a simpler alternative to noising techniques, consisting of tagging back-translated source sentences with an extra token. Our results on WMT outperform noised back-translation on English-Romanian and match its performance on English-German, setting a new state of the art on the former.
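In practice, the method reduces to a one-token change in data preparation: every back-translated source sentence gets a reserved tag prepended before the synthetic pairs are mixed with the authentic ones. A minimal Python sketch, assuming the tag string <BT> is reserved in the model's vocabulary (the token spelling and helper names are illustrative, not from the paper):

```python
# Sketch of tagged back-translation data preparation.
# Assumption: back_translated_pairs come from a target-to-source model
# run over target-language monolingual text; "<BT>" is a reserved token.

BT_TAG = "<BT>"

def build_training_corpus(authentic_pairs, back_translated_pairs):
    """Concatenate authentic pairs with tagged synthetic pairs.

    authentic_pairs:       list of (source, target) sentence pairs
    back_translated_pairs: list of (synthetic_source, monolingual_target)
    """
    tagged = [(f"{BT_TAG} {src}", tgt) for src, tgt in back_translated_pairs]
    return authentic_pairs + tagged

corpus = build_training_corpus(
    authentic_pairs=[("Guten Morgen .", "Good morning .")],
    back_translated_pairs=[("Guten Morgen allerseits .", "Good morning everyone .")],
)
print(corpus[1][0])  # -> "<BT> Guten Morgen allerseits ."
```

The tag lets the model learn to treat synthetic sources differently from genuine ones, without any noise injection.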

Citations

Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back-Translation
TLDR
This paper proposes new data augmentation methods that extend limited noisy data and further improve NMT robustness to noise while keeping the models small, and explores the effect of utilizing noise from external data in the form of speech transcripts, showing that it can also help robustness.
Domain, Translationese and Noise in Synthetic Data for Neural Machine Translation
TLDR
It is shown that forward translation delivers superior gains in terms of BLEU on sentences that were originally in the source language, complementing previous studies which show large improvements with back-translation.
Filtering Back-Translated Data in Unsupervised Neural Machine Translation
TLDR
This paper proposes an approach to filter back-translated data as part of the training process of unsupervised NMT, giving more weight to good pseudo-parallel sentence pairs in the back-translation phase.
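How the filter scores pseudo-parallel pairs is that paper's own contribution; purely as an illustration of where such per-pair weights would enter a standard training objective, here is a weighted cross-entropy sketch in PyTorch (all names and the example weights are assumptions):

```python
import torch
import torch.nn.functional as F

def weighted_nll(logits, targets, pair_weights, pad_id=0):
    """Sentence-level weighted cross-entropy.

    logits:       (batch, seq, vocab) decoder outputs
    targets:      (batch, seq) gold token ids
    pair_weights: (batch,) confidence weight for each sentence pair
    """
    token_nll = F.cross_entropy(
        logits.transpose(1, 2),  # (batch, vocab, seq), as cross_entropy expects
        targets,
        ignore_index=pad_id,
        reduction="none",
    )                                    # (batch, seq) per-token losses
    sentence_nll = token_nll.sum(dim=1)  # aggregate per sentence pair
    return (pair_weights * sentence_nll).sum() / pair_weights.sum()

logits = torch.randn(2, 5, 100)            # two pairs, 5 tokens, vocab of 100
targets = torch.randint(1, 100, (2, 5))
weights = torch.tensor([1.0, 0.3])         # trusted pair vs. doubtful synthetic pair
print(weighted_nll(logits, targets, weights))
```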
Tagged Back-translation Revisited: Why Does It Really Work?
TLDR
It is concluded that back-translations in the training data should always be tagged, especially when the origin of the text to be translated is unknown, and that NMT systems trained in low-resource settings are much less vulnerable to overfitting back-translations.
Tag-less Back-Translation
TLDR
In the proposed tag-less back-translation approach, the synthetic and authentic parallel data are treated as out-of-domain and in-domain data, respectively; through pre-training and fine-tuning, the translation model is shown to learn more efficiently from them during training.
Iterative Batch Back-Translation for Neural Machine Translation: A Conceptual Model
TLDR
This work proposes iterative batch back-translation, which is aimed at enhancing standard iterative back-translation and enabling the efficient utilization of more monolingual data.
Using Self-Training to Improve Back-Translation in Low Resource Neural Machine Translation
TLDR
This work proposes a self-training strategy in which the output of the backward model is used to improve the model itself through the forward translation technique; it was shown to improve baseline low-resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEU, respectively.
HintedBT: Augmenting Back-Translation with Quality and Transliteration Hints
TLDR
HintedBT is a family of techniques that provides hints (through tags) to the encoder and decoder, significantly improving translation quality and leading to state-of-the-art performance in all three language pairs in the corresponding bilingual settings.
Combination of Neural Machine Translation Systems at WMT20
TLDR
This paper presents neural machine translation systems and their combination built for the WMT20 English-Polish and Japanese-English translation tasks, and reveals that the presence of translationese texts in the validation data led to decisions in building the NMT systems that were not optimal for obtaining the best results on the test data.
On the Evaluation of Machine Translation Systems Trained with Back-Translation
TLDR
Empirical evidence is provided to support the view that back-translation is preferred by humans because it produces more fluent outputs and to recommend complementing BLEU with a language model score to measure fluency.
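As a rough illustration of that recommendation, one could report BLEU together with a length-normalized language-model score. The sketch below assumes sacrebleu for BLEU and a hypothetical lm_log_prob(sentence) callable for the target-side language model; neither the function name nor the normalization is prescribed by the paper:

```python
import sacrebleu

def evaluate(hypotheses, references, lm_log_prob):
    """Report BLEU alongside a simple fluency proxy.

    lm_log_prob: hypothetical callable returning the log-probability of a
    sentence under a target-side language model (an assumption, not the
    paper's API).
    """
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score
    # Average per-token log-probability as a length-normalized fluency score.
    fluency = sum(
        lm_log_prob(hyp) / max(1, len(hyp.split())) for hyp in hypotheses
    ) / len(hypotheses)
    return {"BLEU": bleu, "LM fluency": fluency}
```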
...
...

References

Showing 1-10 of 50 references
Understanding Back-Translation at Scale
TLDR
This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences, finding that in all but resource-poor settings, back-translations obtained via sampling or noised beam outputs are most effective.
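For contrast with tagging, the noising evaluated in that work can be sketched directly. The operations below (token deletion and filler replacement with probability 0.1, swaps bounded to three positions) follow the setup commonly reported for this paper, but the code itself is only illustrative:

```python
import random

def noise_sentence(tokens, p_drop=0.1, p_blank=0.1, swap_window=3, rng=random):
    """Apply the three noise operations used on back-translated sources:
    deletion, replacement with a filler token, and bounded local swaps."""
    # Delete tokens with probability p_drop.
    kept = [t for t in tokens if rng.random() >= p_drop]
    # Replace surviving tokens with a filler token with probability p_blank.
    blanked = [t if rng.random() >= p_blank else "<BLANK>" for t in kept]
    # Swap: sort positions perturbed by uniform noise in [0, swap_window + 1),
    # which permutes tokens no further than swap_window positions apart.
    keys = [i + rng.uniform(0, swap_window + 1) for i in range(len(blanked))]
    return [tok for _, tok in sorted(zip(keys, blanked))]

print(noise_sentence("this is a back translated source sentence".split()))
```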
Bi-Directional Neural Machine Translation with Synthetic Parallel Data
TLDR
A novel technique that combines back-translation and multilingual NMT to improve performance in low-resource and out-of-domain scenarios, and can reduce training and deployment costs significantly compared to uni-directional models.
Improved Neural Machine Translation with SMT Features
TLDR
The proposed method incorporates statistical machine translation (SMT) features, such as a translation model and an n-gram language model, with the NMT model under a log-linear framework, and significantly improves the translation quality of a state-of-the-art NMT system on Chinese-to-English translation tasks.
On Using Monolingual Corpora in Neural Machine Translation
TLDR
This work investigates how to leverage abundant monolingual corpora for neural machine translation, improving results for En-Fr and En-De translation and extending to high-resource language pairs such as Cs-En and De-En.
Phrase-Based & Neural Unsupervised Machine Translation
TLDR
This work investigates how to learn to translate when having access to only large monolingual corpora in each language, and proposes two model variants, a neural and a phrase-based model, which are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters.
Improving Neural Machine Translation Models with Monolingual Data
TLDR
This work pairs monolingual training data with automatic back-translations so that it can be treated as additional parallel training data, and obtains substantial improvements on the WMT'15 English-German task and the low-resource IWSLT'14 Turkish-English task.
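The underlying recipe is compact enough to sketch. A minimal version, assuming a hypothetical backward_model.translate(sentences) interface returning a list of strings from a trained target-to-source system:

```python
# Sketch of the basic back-translation recipe described above.
# backward_model.translate() is a hypothetical interface, not a real API.

def back_translate(backward_model, mono_target_sentences):
    """Build synthetic (source, target) pairs from target-side monolingual data."""
    synthetic_sources = backward_model.translate(mono_target_sentences)
    return list(zip(synthetic_sources, mono_target_sentences))

def build_training_data(authentic_pairs, backward_model, mono_target_sentences):
    # Synthetic pairs are simply treated as additional parallel training data.
    return authentic_pairs + back_translate(backward_model, mono_target_sentences)
```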
Unsupervised Statistical Machine Translation
TLDR
This paper proposes an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems, and profits from the modular architecture of SMT.
LIUM Machine Translation Systems for WMT17 News Translation Task
This paper describes LIUM submissions to the WMT17 News Translation Task for the English-German, English-Turkish, English-Czech and English-Latvian language pairs. We train BPE-based attentive Neural Machine Translation systems.
Copied Monolingual Data Improves Low-Resource Neural Machine Translation
We train a neural machine translation (NMT) system to both translate source-language text and copy target-language text, thereby exploiting monolingual corpora in the target language.
On integrating a language model into neural machine translation
...
...