Exploiting Attention to Reveal Shortcomings in Memory Models
@inproceedings{Burns2018ExploitingAT,
  title={Exploiting Attention to Reveal Shortcomings in Memory Models},
  author={Kaylee Burns and A. Nematzadeh and E. Grant and A. Gopnik and T. Griffiths},
  booktitle={BlackboxNLP@EMNLP},
  year={2018}
}
The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity. Practical use of machine learning models, especially for question-answering applications, demands a system that is interpretable. We analyze the attention of a memory network model to reconcile contradictory performance on a challenging question-answering dataset that is inspired by theory-of-mind experiments…
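To make the idea of inspecting a memory network's attention concrete, the sketch below (not the authors' code) shows a single toy memory hop that exposes its attention distribution over memory slots; all names, dimensions, and random values are hypothetical illustrations rather than details from the paper.

```python
# Minimal sketch of exposing attention in one memory-network hop for inspection.
# Everything here (slot count, embedding size, random embeddings) is a toy assumption.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(query_emb, memory_embs, output_embs):
    """One hop: match the query against memory, return readout and attention."""
    scores = memory_embs @ query_emb      # similarity of query to each memory slot
    attention = softmax(scores)           # probability distribution over slots
    response = attention @ output_embs    # attention-weighted readout vector
    return response, attention

rng = np.random.default_rng(0)
d, n_slots = 8, 5                         # hypothetical embedding size and memory length
query = rng.normal(size=d)
memories = rng.normal(size=(n_slots, d))
outputs = rng.normal(size=(n_slots, d))

_, attn = memory_hop(query, memories, outputs)
for i, a in enumerate(attn):
    print(f"memory slot {i}: attention {a:.3f}")   # where the model "looks" in memory
```

Printing or plotting these per-slot weights is one simple way to check whether the model attends to the sentences a question actually depends on.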