Automatically Extracting Challenge Sets for Non-local Phenomena in Neural Machine Translation

  • Leshem Choshen, Omri Abend
  • Published 2019
  • Computer Science
  • ArXiv
  • We show that the state-of-the-art Transformer MT model is not biased towards monotonic reordering (unlike previous recurrent neural network models), but that nevertheless, long-distance dependencies remain a challenge for the model. [...] The extracted sets are large enough to allow reliable automatic evaluation, which makes the proposed approach a scalable and practical solution for evaluating MT performance on the long tail of syntactic phenomena.
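The abstract describes extracting challenge sets that isolate long-distance dependencies so MT performance on them can be evaluated automatically. As a rough illustration of that idea (not the authors' actual pipeline), the sketch below filters a dependency-parsed corpus, keeping only sentences whose parse contains a dependency arc spanning at least a threshold number of tokens; the data format (token list plus head indices) and the threshold are assumptions for the example.

```python
# Hypothetical sketch of challenge-set extraction by dependency length.
# Assumes each sentence comes with a dependency parse encoded as a list
# of 0-based head indices (-1 marks the root). Not the paper's method,
# just an illustration of filtering for long-distance dependencies.

def max_dependency_length(heads):
    """Longest distance (in tokens) between a word and its head."""
    return max(abs(i - h) for i, h in enumerate(heads) if h != -1)

def extract_challenge_set(parsed_corpus, min_length=5):
    """Keep sentences whose parse contains a dependency arc spanning
    at least `min_length` tokens."""
    return [
        (tokens, heads)
        for tokens, heads in parsed_corpus
        if max_dependency_length(heads) >= min_length
    ]

corpus = [
    # "the cat sat": every dependency is between adjacent tokens
    (["the", "cat", "sat"], [1, 2, -1]),
    # "the cat that the dog chased yesterday sat":
    # the subject "cat" attaches to the distant verb "sat"
    (["the", "cat", "that", "the", "dog", "chased", "yesterday", "sat"],
     [1, 7, 5, 4, 5, 1, 5, -1]),
]

challenge = extract_challenge_set(corpus, min_length=5)
print(len(challenge))  # only the second sentence qualifies
```

In practice such a filter would run over a large parsed corpus, which is what makes the resulting sets big enough for reliable automatic scoring.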

