Corpus ID: 224818197

Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation

@article{Voita2020AnalyzingTS,
  title={Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation},
  author={Elena Voita and Rico Sennrich and Ivan Titov},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.10907}
}
In Neural Machine Translation (and, more generally, conditional language modeling), the generation of a target token is influenced by two types of context: the source and the prefix of the target sequence. While many attempts to understand the internal workings of NMT models have been made, none of them explicitly evaluates relative source and target contributions to a generation decision. We argue that this relative contribution can be evaluated by adopting a variant of Layerwise Relevance Propagation (LRP)…
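The quantity the abstract describes is a split of one prediction's total relevance between the source tokens and the already-generated target prefix. As a minimal sketch (not the paper's implementation): LRP-style relevance is conserved across a layer, so once per-token scores are obtained, the relative source contribution is just a normalized sum. The relevance values below are made up for illustration.

```python
def relative_contributions(source_relevance, prefix_relevance):
    """Split total relevance between source and target prefix.

    Hedged sketch: assumes per-token relevance scores (e.g. from an
    LRP variant) are already available; only the normalization step
    is shown. Absolute values are used so scores of mixed sign
    cannot cancel.
    """
    src_total = sum(abs(r) for r in source_relevance)
    tgt_total = sum(abs(r) for r in prefix_relevance)
    total = src_total + tgt_total
    src_share = src_total / total
    return src_share, 1.0 - src_share

# Toy example: 4 source tokens, 2 already-generated target tokens.
src_share, tgt_share = relative_contributions([0.4, 0.3, 0.2, 0.1],
                                              [0.6, 0.4])
print(round(src_share, 2), round(tgt_share, 2))  # 0.5 0.5
```

A source share near 1.0 would mean the prediction is driven almost entirely by the source sentence; a low share indicates heavy reliance on the target prefix, which the paper connects to behaviors such as hallucination.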
2 Citations

- On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification
- The Curious Case of Hallucinations in Neural Machine Translation
