Visualizing and Understanding Neural Machine Translation

@inproceedings{Ding2017VisualizingAU,
  title={Visualizing and Understanding Neural Machine Translation},
  author={Yanzhuo Ding and Yang Liu and Huanbo Luan and Maosong Sun},
  booktitle={ACL},
  year={2017}
}
While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoder-decoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and…
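The paper defines LRP rules tailored to the attention-based NMT architecture; as a generic illustration of the idea, the commonly used epsilon-rule for redistributing relevance backward through a single linear layer can be sketched as follows (all array values, names, and dimensions here are illustrative, not taken from the paper):

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for a linear layer y = W @ x + b.

    Redistributes the relevance R_out assigned to the outputs y
    back onto the inputs x, in proportion to each input's
    contribution to each output. This is a generic LRP sketch,
    not the paper's exact propagation rules.
    """
    y = W @ x + b                  # forward pass
    z = y + eps * np.sign(y)       # stabilized denominator
    s = R_out / z                  # per-output relevance per unit of y
    return x * (W.T @ s)           # relevance attributed to each input

# Toy example: a 2-d hidden state computed from 3 input features.
x = np.array([1.0, -0.5, 2.0])
W = np.array([[1.0, 0.5, -0.25],
              [0.0, 2.0, 1.0]])
b = np.zeros(2)
R_out = np.array([1.0, 1.0])       # relevance placed on the hidden state

R_in = lrp_linear(x, W, b, R_out)
# With zero bias the rule approximately conserves relevance:
# R_in.sum() ≈ R_out.sum()
print(R_in, R_in.sum())
```

Applied layer by layer from an attention-based hidden state down to the input embeddings, this kind of rule yields a per-word contribution score that can be visualized as in the paper.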
This paper has 34 citations.
