Explain Yourself! Leveraging Language Models for Commonsense Reasoning

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher
Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning, in the form of natural language sequences and highlighted annotations, in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training…


Key Quantitative Results

  • The two-phase CAGE framework improves the state of the art on the challenging CommonsenseQA (CQA) task by 10%, while also producing explanations that justify its predictions (Section 5).
  • The paper introduces the Common Sense Explanations (CoS-E) dataset to study neural commonsense reasoning and provides a new method, CAGE, for automatically generating explanations; CAGE achieves a state-of-the-art accuracy of approximately 65% on CQA v1.0.
  • A related variant of this approach outperforms the current best model by 6% and also produces good-quality explanations, as discussed in Section 5.

