Continuous Learning in Neural Machine Translation using Bilingual Dictionaries

@article{Niehues2021ContinuousLI,
  title={Continuous Learning in Neural Machine Translation using Bilingual Dictionaries},
  author={Jan Niehues},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.06558}
}
  • J. Niehues
  • Published 12 February 2021
  • Computer Science
  • ArXiv
While recent advances in deep learning have led to significant improvements in machine translation, neural machine translation systems are often still unable to continuously adapt to their environment. For humans, as well as for machine translation, bilingual dictionaries are a promising knowledge source for continuously integrating new knowledge. However, their exploitation poses several challenges: the system needs to be able to perform one-shot learning as well as model the morphology of source and target…
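
A minimal sketch of one common way such dictionary knowledge is injected at inference time: annotating source words inline with their dictionary translations so that a suitably trained model can copy them. The tagging scheme, the toy dictionary, and the function name below are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: tag source tokens that have a bilingual-dictionary
# entry with their target translation, so the NMT model can copy it.

def annotate_with_dictionary(tokens, dictionary):
    """Wrap each source token found in the dictionary with its translation."""
    out = []
    for tok in tokens:
        if tok.lower() in dictionary:
            out.extend(["<src>", tok, "<trg>", dictionary[tok.lower()], "</trg>"])
        else:
            out.append(tok)
    return out

toy_dict = {"impfstoff": "vaccine"}  # one-entry toy dictionary (assumed)
print(" ".join(annotate_with_dictionary("Der Impfstoff wirkt".split(), toy_dict)))
# Der <src> Impfstoff <trg> vaccine </trg> wirkt
```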

Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey

This work surveys approaches to domain adaptation for NMT, particularly where a system may need to translate across multiple domains, and divides techniques into those revolving around data selection or generation, model architecture, parameter adaptation procedure, and inference procedure.

A Template-based Method for Constrained Neural Machine Translation

This work proposes a template-based method that yields results with high translation quality and match accuracy, while its inference speed is comparable to that of unconstrained NMT models.
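
A minimal sketch of the template idea under stated assumptions: the model (stubbed here) emits a translation template with slot tokens, and the terminology constraints fill the slots afterwards, which keeps decoding close to unconstrained speed. The slot-token format and names are hypothetical.

```python
# Hypothetical sketch of template-based constrained translation: slots in
# the model's template output are filled with the required target phrases.

def fill_template(template, constraints):
    for i, phrase in enumerate(constraints, start=1):
        template = template.replace(f"<slot{i}>", phrase)
    return template

template = "The <slot1> was approved by the <slot2> ."  # stand-in NMT output
print(fill_template(template, ["vaccine", "European Medicines Agency"]))
# The vaccine was approved by the European Medicines Agency .
```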

Compiling a Highly Accurate Bilingual Lexicon by Combining Different Approaches

This work performs a rigorous manual evaluation of four different methods: word alignments on different types of bilingual data, pivoting, machine translation and cross-lingual word embeddings, showing how multiple different combinations generate lists with well over 90% acceptance rate.

Cost-Effective Training in Low-Resource Neural Machine Translation

A cost-effective training procedure to improve the performance of NMT models, and a new hybrid data-driven approach that samples sentences which are diverse from the labelled data and also most similar to the unlabelled data.

Modeling Target-side Inflection in Placeholder Translation

A novel method of placeholder translation that can inflect specified terms according to the grammatical construction of the output sentence is proposed. Evaluation on a Japanese-to-English translation task in the scientific writing domain shows that the model can incorporate specified terms in the correct form more successfully than other comparable models.
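
A toy sketch of placeholder translation with target-side inflection, assuming a lookup table in place of the paper's learned inflection component; all names and the placeholder token are illustrative.

```python
# Hypothetical sketch: replace a term with a placeholder before translation,
# then restore it in the grammatically fitting surface form. A lookup table
# stands in for the model-internal inflection the cited paper proposes.

INFLECT = {("mouse", "singular"): "mouse", ("mouse", "plural"): "mice"}

def insert_placeholder(src, term, tag="<TERM>"):
    return src.replace(term, tag)

def restore_term(hyp, tag, lemma, feature):
    return hyp.replace(tag, INFLECT[(lemma, feature)])

src = insert_placeholder("マウスを観察した", "マウス")
hyp = "We observed the <TERM>"  # stand-in for the NMT output
print(restore_term(hyp, "<TERM>", lemma="mouse", feature="plural"))
# We observed the mice
```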

Findings of the WMT Shared Task on Machine Translation Using Terminologies

This work introduces a benchmark for evaluating the quality and consistency of terminology translation, focusing on the medical domain for five language pairs: English to French, Chinese, Russian, and Korean, as well as Czech to German.

Dynamic Terminology Integration for COVID-19 and Other Emerging Domains

This work describes Tilde MT systems capable of dynamic terminology integration at translation time, which achieve up to 94% COVID-19 term use accuracy on the test set of the EN-FR language pair without access to any form of in-domain information during system training.

References

Showing 1-10 of 28 references

Bridging Neural Machine Translation and Bilingual Dictionaries

The core idea is to design novel models that transform bilingual dictionaries into adequate sentence pairs, so that NMT can distil latent bilingual mappings from these ample and repetitive phenomena.
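
A minimal sketch of turning dictionary entries into synthetic training pairs; the templates below are illustrative assumptions, whereas the cited paper learns such transformations with dedicated models.

```python
# Hypothetical sketch: expand bilingual-dictionary entries into synthetic
# sentence pairs from which NMT training can distil the word mappings.

TEMPLATES = [("Das ist ein {src} .", "This is a {trg} ."),
             ("Ich sehe den {src} .", "I see the {trg} .")]

def dictionary_to_pairs(dictionary):
    pairs = []
    for src_word, trg_word in dictionary.items():
        for s_tpl, t_tpl in TEMPLATES:
            pairs.append((s_tpl.format(src=src_word), t_tpl.format(trg=trg_word)))
    return pairs

for s, t in dictionary_to_pairs({"Hund": "dog"}):
    print(s, "|||", t)
```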

Towards one-shot learning for rare-word translation with external experts

The benefit of the proposed framework is demonstrated in out-of-domain translation scenarios with only lexical resources, improving by more than 1.0 BLEU point in both translation directions, English-Spanish and German-English.

Learning Efficient Lexically-Constrained Neural Machine Translation with External Memory

This paper proposes to learn lexically-constrained translation with an external memory, overcoming the previously mentioned disadvantages of high computational complexity and of hard-constrained beam search, which can generate unexpected translations.

Continuous Learning from Human Post-Edits for Neural Machine Translation

This work explores several online learning strategies to stepwise fine-tune an existing model to the incoming post-edits in the neural machine translation framework and shows significant improvements over the use of static models.
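
A rough sketch of such an online adaptation loop, assuming PyTorch and using a tiny linear layer as a stand-in for the NMT network; the step count, loss, and learning rate are arbitrary placeholders.

```python
# Hypothetical sketch: after each human post-edit arrives, take a few
# gradient steps on that single (source, post-edit) example.

import torch

model = torch.nn.Linear(4, 4)  # stand-in for an NMT model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def adapt_to_post_edit(src_vec, post_edit_vec, steps=2):
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(src_vec), post_edit_vec)
        loss.backward()
        opt.step()
    return loss.item()

print(adapt_to_post_edit(torch.randn(4), torch.randn(4)))
```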

Stanford Neural Machine Translation Systems for Spoken Language Domains

This work further explores the effectiveness of NMT in spoken language domains by participating in the MT track of IWSLT 2015, and demonstrates that an existing NMT framework can achieve competitive results in the aforementioned scenarios when translating from English to German and Vietnamese.

Incorporating Discrete Translation Lexicons into Neural Machine Translation

This work describes a method to calculate the lexicon probability of the next word in the translation candidate, using the attention vector of the NMT model to select which source-word lexical probabilities the model should focus on.
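
The mechanism lends itself to a worked NumPy example: the lexicon distribution over the next target word is the attention-weighted mixture of per-source-word lexical distributions, here combined with the NMT softmax by linear interpolation (the interpolation scheme and the λ value are assumptions; the paper also studies other combination schemes).

```python
import numpy as np

vocab = ["the", "cat", "dog"]
# P_lex(y | x_i): rows are source words ["die", "Katze"], columns target vocab.
p_lex = np.array([[0.90, 0.05, 0.05],   # "die"   -> mostly "the"
                  [0.05, 0.90, 0.05]])  # "Katze" -> mostly "cat"

attention = np.array([0.2, 0.8])        # attention over the source words
p_nmt = np.array([0.4, 0.3, 0.3])       # NMT softmax over the target vocab

p_lex_next = attention @ p_lex          # attention-weighted lexicon probability

lam = 0.5                               # interpolation weight (assumed)
p_next = lam * p_lex_next + (1 - lam) * p_nmt
print(dict(zip(vocab, np.round(p_next, 3).tolist())))
# {'the': 0.31, 'cat': 0.515, 'dog': 0.175}
```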

Pre-Translation for Neural Machine Translation

This work used phrase-based machine translation to pre-translate the input into the target language and analyzed the influence of the quality of the initial system on the final result.

Domain Control for Neural Machine Translation

A new technique for neural machine translation (NMT), called domain control, is proposed: a single neural network covering multiple domains is steered at runtime, showing quality improvements over dedicated single-domain models when translating on any of the covered domains and even on out-of-domain data.
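
The runtime mechanism can be sketched in a few lines: a source-side domain tag selects the domain in a single multi-domain model. The tag format is an assumption (the cited work studies both an additional-token and a feature-based variant).

```python
# Hypothetical sketch of domain control via an additional source token.

def add_domain_token(src, domain):
    return f"<{domain}> {src}"

print(add_domain_token("Der Patient erhielt den Impfstoff .", "medical"))
# <medical> Der Patient erhielt den Impfstoff .
```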

Guiding Neural Machine Translation Decoding with External Knowledge

This work proposes a “guide” mechanism that enhances an existing NMT decoder with the ability to prioritize and adequately handle translation options presented in the form of XML annotations of source words.
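
A small sketch of the input side of such a guide mechanism: parsing XML annotations on source phrases into (source phrase, forced translation) constraints that the decoder can then prioritize. Tag and attribute names are illustrative assumptions.

```python
import re

def parse_constraints(annotated_src):
    """Extract (source_phrase, forced_translation) pairs from XML tags."""
    pattern = r'<n translation="([^"]+)">([^<]+)</n>'
    return [(src, trg) for trg, src in re.findall(pattern, annotated_src)]

src = 'Besuchen Sie <n translation="Lake Constance">den Bodensee</n> im Sommer'
print(parse_constraints(src))
# [('den Bodensee', 'Lake Constance')]
```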

Neural Machine Translation of Rare Words with Subword Units

This paper introduces a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units, and empirically shows that subword models improve over a back-off dictionary baseline on the WMT 15 English-German and English-Russian translation tasks by up to 1.1 and 1.3 BLEU, respectively.
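
Since the subword-unit approach here is byte pair encoding (BPE), a minimal version of its merge loop fits in a few lines; the cited paper includes a similar Python snippet, and the toy vocabulary below is an assumption.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count frequencies of adjacent symbol pairs in the vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every occurrence of the given pair into one symbol."""
    bigram = re.escape(" ".join(pair))
    pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
    return {pattern.sub("".join(pair), w): f for w, f in vocab.items()}

# Words as space-separated characters with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6}
for _ in range(3):
    stats = get_pair_stats(vocab)
    best = max(stats, key=stats.get)
    vocab = merge_pair(best, vocab)
    print("merged:", best)
```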