Corpus ID: 233296329

SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning

@inproceedings{Chan2021SalKGLF,
  title={SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning},
  author={Aaron Chan and Soumya Sanyal and Bo Long and Jiashu Xu and Tanishq Gupta and Xiang Ren},
  booktitle={NeurIPS},
  year={2021}
}
Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. Although some works have attempted to explain the behavior of such KG-augmented models by indicating which KG inputs are salient (i.e., important for the model’s prediction), it is not always clear how these explanations should be used to make the model better. In this paper, we explore whether KG explanations can be used as supervision for teaching these KG-augmented… 
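The core idea, at a high level, is to treat explanation-derived salience over KG inputs as an auxiliary training target alongside the task loss. The sketch below illustrates one plausible form of such an objective; the tensor names, the KL-divergence criterion, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: KG salience explanations as an auxiliary supervision signal.
# Hypothetical names/shapes; illustrates the general idea, not SalKG's exact losses.
import torch
import torch.nn.functional as F

def salience_supervised_loss(task_logits, labels, node_attn, salience_targets, alpha=0.5):
    """task_logits:      (B, num_answers) answer scores from the KG-augmented model.
    node_attn:        (B, num_nodes) model's attention over KG nodes (sums to 1).
    salience_targets: (B, num_nodes) explanation-derived salience distribution."""
    task_loss = F.cross_entropy(task_logits, labels)
    # Push the model's attention over KG nodes toward the salience explanation.
    expl_loss = F.kl_div(node_attn.clamp_min(1e-9).log(), salience_targets,
                         reduction="batchmean")
    return task_loss + alpha * expl_loss

# Toy usage with random tensors.
B, A, N = 4, 5, 16
logits = torch.randn(B, A)
labels = torch.randint(0, A, (B,))
attn = torch.softmax(torch.randn(B, N), dim=-1)
sal = torch.softmax(torch.randn(B, N), dim=-1)
print(salience_supervised_loss(logits, labels, attn, sal))
```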
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
TLDR
UNIREX, a flexible learning framework which generalizes rationale extractor optimization, introduces the Normalized Relative Gain (NRG) metric and finds that UNIREX-trained rationale extractors’ faithfulness can even generalize to unseen datasets and tasks.
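The NRG metric aggregates multiple desiderata by min-max normalizing each metric across the compared methods and averaging. The sketch below is one plausible reading of that description; the exact normalization details in UNIREX may differ.

```python
# Sketch of a Normalized Relative Gain (NRG)-style aggregation: each metric is
# min-max normalized across methods, then averaged per method.
from typing import Dict, List

def normalized_relative_gain(scores: Dict[str, List[float]]) -> List[float]:
    """scores maps metric name -> per-method values (higher = better)."""
    n_methods = len(next(iter(scores.values())))
    nrg = [0.0] * n_methods
    for values in scores.values():
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid division by zero when all methods tie
        for i, v in enumerate(values):
            nrg[i] += (v - lo) / span
    return [x / len(scores) for x in nrg]

# Three methods evaluated on faithfulness, plausibility, and task accuracy.
print(normalized_relative_gain({
    "faithfulness": [0.2, 0.5, 0.9],
    "plausibility": [0.7, 0.6, 0.8],
    "accuracy":     [0.75, 0.80, 0.78],
}))
```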
ER-TEST: Evaluating Explanation Regularization Methods for NLP Models
TLDR
Through ER-TEST, it is shown that ER has little impact on ID performance but can yield large gains on OOD performance w.r.t. (1)-(3), that the best ER criterion is task-dependent, and that ER can improve OOD performance even with limited human rationales.
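Explanation regularization (ER), the method family ER-TEST evaluates, augments the task loss with a penalty that aligns the model's token-importance scores with human rationale annotations. The sketch below shows one such criterion (MSE); ER-TEST compares several criteria, and all names here are illustrative.

```python
# Sketch of an explanation-regularization (ER) objective: task loss plus a
# penalty aligning machine token-importance scores with human rationale masks.
import torch
import torch.nn.functional as F

def er_loss(task_logits, labels, token_scores, human_rationale, beta=1.0):
    """token_scores:    (B, T) model importance per input token (e.g., attention).
    human_rationale: (B, T) binary mask of human-annotated rationale tokens."""
    task_loss = F.cross_entropy(task_logits, labels)
    align_loss = F.mse_loss(token_scores, human_rationale.float())
    return task_loss + beta * align_loss

# Toy usage with random tensors.
B, A, T = 4, 3, 10
print(er_loss(torch.randn(B, A), torch.randint(0, A, (B,)),
              torch.rand(B, T), torch.randint(0, 2, (B, T))))
```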
Knowledge-Augmented Methods for Natural Language Processing
TLDR
This tutorial introduces the key steps in integrating knowledge into NLP, including knowledge grounding from text and knowledge representation and fusion, and surveys recent state-of-the-art applications of fusing knowledge into language understanding, language generation, and commonsense reasoning.
Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
TLDR
A novel gradient-based method that analyzes self-attention units and identifies the input elements that best explain the model's prediction, obtaining significant improvements over state-of-the-art alternatives.
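The core operation, as described, combines self-attention maps with their gradients to score input tokens. Below is a minimal sketch of that kind of scoring, with assumed tensor shapes; the exact weighting and normalization in Grad-SAM may differ.

```python
# Sketch of a Grad-SAM-style token score: attention maps weighted by their own
# gradients, rectified, then averaged over layers, heads, and query positions.
import torch

def grad_sam_scores(attn_maps, attn_grads):
    """attn_maps, attn_grads: (layers, heads, T, T) attention and its gradient
    w.r.t. the model's predicted score. Returns a (T,) relevance per token."""
    weighted = torch.relu(attn_maps * attn_grads)  # keep positively contributing cells
    return weighted.mean(dim=(0, 1, 2))            # average out layers, heads, queries

L, H, T = 12, 12, 8
print(grad_sam_scores(torch.rand(L, H, T, T), torch.randn(L, H, T, T)))
```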

References

Showing 1-10 of 90 references
Learning Contextualized Knowledge Structures for Commonsense Reasoning
TLDR
A novel neural-symbolic model is presented, named Hybrid Graph Network (HGN), which jointly generates feature representations for new triples, determines the relevance of the triples to the reasoning context, and learns graph module parameters for encoding the relational information.
KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning
TLDR
A novel knowledge graph-augmented pre-trained language generation model, KG-BART, is proposed, which encompasses the complex relations of concepts through the knowledge graph, produces more logical and natural sentences as output, and leverages graph attention to aggregate rich concept semantics, enhancing the model's generalization on unseen concept sets.
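As a rough illustration of the graph-attention aggregation mentioned above, the sketch below enriches a concept vector with an attention-weighted sum over its KG neighbors; the shapes and the residual form are assumptions for illustration, not KG-BART's actual architecture.

```python
# Sketch of graph-attention aggregation over a concept's KG neighbors.
import torch
import torch.nn.functional as F

def graph_attention(node, neighbors, W):
    """node: (d,), neighbors: (k, d), W: (d, d). Returns an enriched (d,) vector."""
    q = node @ W
    scores = neighbors @ q            # (k,) unnormalized attention over neighbors
    alpha = F.softmax(scores, dim=0)
    return node + alpha @ neighbors   # residual + attention-weighted neighbor sum

d, k = 8, 4
print(graph_attention(torch.randn(d), torch.randn(k, d), torch.randn(d, d)))
```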
Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering
TLDR
This paper augments a general commonsense QA framework with a knowledgeable path generator by extrapolating over existing paths in a KG with a state-of-the-art language model, which learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path.
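The mechanism, roughly: a causal LM is fine-tuned on KG paths serialized as text, then prompted with an entity pair to generate a connecting relational path. The sketch below shows this prompting pattern with an off-the-shelf GPT-2; the serialization format and separator are assumptions, and a real run would use the fine-tuned path generator.

```python
# Sketch of the path-generator prompting pattern. Base GPT-2 is used here only
# so the code runs; the paper fine-tunes the LM on KG random-walk paths first.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "bird <SEP> nest"  # head entity, tail entity (assumed serialization)
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=False,
                     pad_token_id=tok.eos_token_id)
# After fine-tuning, the continuation would be a multi-hop relational path
# connecting the two entities, e.g. "bird capableof build <SEP> nest".
print(tok.decode(out[0]))
```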
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
TLDR
This work collects human explanations for commonsense reasoning, in the form of natural language sequences and highlighted annotations, in a new dataset called Common Sense Explanations, which is used to train language models to automatically generate explanations for use during training and inference in a novel Commonsense Auto-Generated Explanation framework.
KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning
TLDR
This paper proposes a textual inference framework for answering commonsense questions, which effectively utilizes external, structured commonsense knowledge graphs to perform explainable inferences.
Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
TLDR
It is demonstrated that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs which maintain the downstream performance of the original KG while significantly deviating from the original semantics and structure.
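A minimal sketch of the "simple heuristics" end of this finding: randomly swapping relations in retrieved triples, which corrupts KG semantics while leaving graph size and connectivity intact. This is an illustrative heuristic, not the paper's RL policy.

```python
# Sketch of a heuristic KG perturbation: relation swapping. Semantics change,
# but the graph's size and connectivity are preserved.
import random

def swap_relations(triples, frac=0.5, seed=0):
    """Randomly replace a fraction of relations with other relations from the KG."""
    rng = random.Random(seed)
    relations = [r for _, r, _ in triples]
    out = []
    for h, r, t in triples:
        if rng.random() < frac:
            r = rng.choice(relations)
        out.append((h, r, t))
    return out

kg = [("bird", "capableof", "fly"), ("bird", "atlocation", "nest"),
      ("nest", "madeof", "twigs")]
print(swap_relations(kg))
```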
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
TLDR
This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
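The transfer works by serializing KG triples as text so that a pre-trained causal LM learns to generate the tail given the head and relation. The sketch below shows this data format; the "[GEN]" separator is an assumed token, not necessarily COMET's exact convention.

```python
# Sketch of COMET-style training data: triples serialized so a causal LM
# learns head + relation -> tail.
triples = [
    ("PersonX goes to the store", "xIntent", "to buy food"),
    ("PersonX adopts a cat",      "xEffect", "PersonX gains a companion"),
]
for head, relation, tail in triples:
    print(f"{head} {relation} [GEN] {tail}")
# After fine-tuning on such sequences, prompting with
# "PersonX goes to the store xIntent [GEN]" should yield a plausible tail.
```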
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
TLDR
This work proposes a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) through two key innovations: relevance scoring and joint reasoning.
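The relevance-scoring innovation, roughly: each retrieved KG node is scored by how well a language model rates its name in the context of the question, so that low-scoring nodes can be pruned or down-weighted before joint reasoning. In the sketch below, `lm_score` is a hypothetical stand-in for an actual LM likelihood.

```python
# Sketch of relevance scoring for retrieved KG nodes against a QA context.
def lm_score(text):
    """Hypothetical stand-in: a real implementation would return an LM's
    log-likelihood of `text`."""
    return -float(len(text))  # toy proxy so the sketch runs end to end

def score_nodes(qa_context, node_names):
    # Higher score = node text fits the QA context better under the LM.
    return {n: lm_score(qa_context + " " + n) for n in node_names}

print(score_nodes("Where would you put coins? Answer: purse",
                  ["coin", "purse", "bank", "volcano"]))
```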
Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering
TLDR
This paper performs a survey of recent commonsense QA methods and provides a systematic analysis of popular knowledge resources and knowledge-integration methods, across benchmarks from multiple commonsense datasets, and shows that attention-based injection seems to be a preferable choice for knowledge integration.
Commonsense Knowledge Mining from Pretrained Models
TLDR
This work develops a method for generating commonsense knowledge using a large, pre-trained bidirectional language model that can be used to rank a triple’s validity by the estimated pointwise mutual information between the two entities.
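The scoring idea can be written as PMI(t; h, r) = log p(t | h, r) − log p(t | r), with both terms estimated by a masked LM. The sketch below shows this scoring shape; `mlm_logprob` is a hypothetical placeholder for real masked-LM scoring, and the sentence templates are assumptions.

```python
# Sketch of PMI-based triple ranking: score (h, r, t) by how much the head
# raises the tail's probability under a masked LM.
def mlm_logprob(sentence, tail):
    """Hypothetical placeholder: really the summed masked-LM log-probability
    of `tail`'s tokens when they are masked out of `sentence`."""
    return -0.1 * len(sentence)  # toy proxy so the sketch runs

def pmi_score(head, relation, tail):
    cond = mlm_logprob(f"{head} {relation} {tail}.", tail)  # log p(t | h, r)
    marg = mlm_logprob(f"[MASK] {relation} {tail}.", tail)  # log p(t | r)
    return cond - marg  # PMI estimate

print(pmi_score("musician", "is capable of", "playing an instrument"))
```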
...