What do Neural Machine Translation Models Learn about Morphology?

@inproceedings{Belinkov2017WhatDN,
  title={What do Neural Machine Translation Models Learn about Morphology?},
  author={Yonatan Belinkov and Nadir Durrani and Fahim Dalvi and Hassan Sajjad and James R. Glass},
  booktitle={ACL},
  year={2017}
}
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging…
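The extrinsic evaluation the abstract describes — training a lightweight classifier on frozen encoder states to predict POS tags — can be sketched as below. This is a minimal illustration, not the paper's exact setup: the `states` and `labels` are synthetic stand-ins (in the paper the features would be hidden states extracted from a trained NMT encoder), and the plain softmax probe is an assumed, simple choice of classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for encoder representations and POS labels.
n, d, k = 300, 16, 4                      # tokens, hidden size, tag count
labels = rng.integers(0, k, n)
centers = rng.normal(size=(k, d))         # make states weakly class-dependent
states = centers[labels] + 0.5 * rng.normal(size=(n, d))

# Linear softmax probe trained by full-batch gradient descent.
W = np.zeros((d, k))
b = np.zeros(k)
onehot = np.eye(k)[labels]
for _ in range(200):
    logits = states @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / n               # gradient of mean cross-entropy
    W -= states.T @ grad
    b -= grad.sum(axis=0)

# Probe accuracy measures how linearly decodable the tags are
# from the (frozen) representations.
acc = (np.argmax(states @ W + b, axis=1) == labels).mean()
print(f"probe accuracy: {acc:.2f}")
```

The key design point is that the representations stay frozen and the probe is deliberately weak (linear), so high tagging accuracy indicates the morphological information is already present in the encoder states rather than being learned by the classifier.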

Citations

Publications citing this paper (showing 1–10 of 93 citations).

Understanding Learning Dynamics Of Language Models with SVCCA

  • NAACL-HLT
  • 2018
  Cites methods (5 excerpts); highly influenced.

A Structural Probe for Finding Syntax in Word Representations

  Cites background (4 excerpts); highly influenced.

Understanding and Improving Hidden Representation for Neural Machine Translation

  Cites background & methods (5 excerpts); highly influenced.

An Analysis of Encoder Representations in Transformer-Based Machine Translation

  • BlackboxNLP@EMNLP
  • 2018
  Cites methods & background (3 excerpts); highly influenced.

Language Modeling Teaches You More Syntax

  • 2018
  Cites background & methods (4 excerpts); highly influenced.


Citation Statistics

  • 19 highly influenced citations
  • Averaged 30 citations per year from 2017 through 2019
