Exploiting Attention to Reveal Shortcomings in Memory Models

@inproceedings{Burns2018ExploitingAT,
  title={Exploiting Attention to Reveal Shortcomings in Memory Models},
  author={Kaylee Burns and A. Nematzadeh and E. Grant and A. Gopnik and T. Griffiths},
  booktitle={BlackboxNLP@EMNLP},
  year={2018}
}
  • Kaylee Burns, A. Nematzadeh, E. Grant, A. Gopnik, T. Griffiths
  • Published in BlackboxNLP@EMNLP 2018
  • Computer Science
  • The decision making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity. Practical use of machine learning models, especially for question-answering applications, demands a system that is interpretable. We analyze the attention of a memory network model to reconcile contradictory performance on a challenging question-answering dataset that is inspired by theory-of-mind experiments…
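The attention analysis described in the abstract centers on the distribution that an end-to-end memory network computes over story sentences at each memory hop: inspecting which sentences receive attention mass for a given question is what reveals where the model "looks" when it answers incorrectly. Below is a minimal NumPy sketch of that quantity for a single hop; the embedding dimensions, random inputs, and function names are illustrative assumptions, not the authors' implementation.

import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def memory_hop_attention(memory_vectors, query_vector):
    """Attention for one end-to-end memory network hop.

    memory_vectors: (num_sentences, d) array, one embedding m_i per story sentence
    query_vector:   (d,) embedding u of the question (or internal state)
    Returns p = softmax(u . m_i), the per-sentence attention distribution
    that an attention analysis would inspect.
    """
    scores = memory_vectors @ query_vector          # (num_sentences,)
    return softmax(scores)

# Toy usage with random embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
d, num_sentences = 32, 5
memories = rng.normal(size=(num_sentences, d))      # encoded story sentences
query = rng.normal(size=d)                          # encoded question

attention = memory_hop_attention(memories, query)
for i, p in enumerate(attention):
    print(f"sentence {i}: attention weight {p:.3f}")

In a multi-hop network the same computation repeats with an updated query state, so the per-hop distributions can be compared to see whether attention shifts toward the sentence that actually answers the question.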
    3 Citations
    • Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop (22 citations)
    • What Does BERT Look At? An Analysis of BERT's Attention (342 citations)
    • Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection (4 citations)

    References

    • Evaluating Theory of Mind in Question Answering (11 citations)
    • End-To-End Memory Networks (1,621 citations; highly influential)
    • Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks (779 citations)
    • A simple neural network module for relational reasoning (860 citations)
    • Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes (100 citations)
    • Tracking the World State with Recurrent Entity Networks (147 citations)
    • The Winograd Schema Challenge (445 citations)
    • Theory-of-Mind Development: Retrospect and Prospect (367 citations)
    • Does the autistic child have a "theory of mind"? (6,207 citations)