Obtaining Faithful Interpretations from Compositional Neural Networks

@inproceedings{Subramanian2020ObtainingFI,
  title={Obtaining Faithful Interpretations from Compositional Neural Networks},
  author={Sanjay Subramanian and Ben Bogin and Nitish Gupta and Tomer Wolfson and Sameer Singh and Jonathan Berant and Matt Gardner},
  booktitle={ACL},
  year={2020}
}
Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture. However, prior work implicitly assumed that the structure of the network modules, describing the abstract reasoning process, provides a faithful explanation of the model’s reasoning; that is, that all modules perform their intended behaviour. In this…
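To make the compositional setup concrete, here is a minimal sketch of how an NMN executes a question as a program of composed modules. The module names (`find`, `filter_color`, `count`) and the toy symbolic scene are illustrative assumptions, not the paper's implementation; in a real NMN each module is a small neural network operating over image or text representations.

```python
# Toy sketch of neural-module-network style execution.
# Each "module" here is a plain function over a symbolic scene,
# standing in for the learned neural modules described in the paper.

scene = [
    {"shape": "cube", "color": "red"},
    {"shape": "sphere", "color": "red"},
    {"shape": "cube", "color": "blue"},
]

def find(objects, shape):
    # Module: select objects of a given shape.
    return [o for o in objects if o["shape"] == shape]

def filter_color(objects, color):
    # Module: keep only objects of a given color.
    return [o for o in objects if o["color"] == color]

def count(objects):
    # Module: return how many objects remain.
    return len(objects)

# "How many red cubes are there?" parsed into the program:
#   count(filter_color(find(scene, cube), red))
answer = count(filter_color(find(scene, "cube"), "red"))
print(answer)  # -> 1
```

The faithfulness question the paper raises is whether each learned module actually performs the role its name suggests (e.g., whether the learned `find` module really selects cubes), rather than the network achieving the right answer through unintended shortcuts.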