ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning

@inproceedings{Saha2021ExplaGraphsAE,
  title={ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning},
  author={Swarnadeep Saha and Prateek Yadav and Lisa Bauer and Mohit Bansal},
  booktitle={EMNLP},
  year={2021}
}
Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context. Discriminative tasks are limiting because they fail to adequately evaluate the model’s ability to reason and explain predictions with underlying commonsense knowledge. They also allow such models to use reasoning shortcuts and not be “right for the right reasons”. In this work, we present ExplaGraphs, a new generative and structured commonsense… 
IRAC: A Domain-Specific Annotated Corpus of Implicit Reasoning in Arguments
TLDR
This work creates the first domain-specific resource of implicit reasonings annotated for a wide range of arguments, which can be leveraged to empower machines with better implicit-reasoning generation ability, and shows the feasibility of creating such a corpus at a reasonable cost and with high quality.
Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning
TLDR
This work studies pre-trained language models that generate explanation graphs in an end-to-end manner and analyzes their ability to learn the structural constraints and semantics of such graphs and proposes simple yet effective ways of graph perturbations via node and edge edit operations that lead to structurally and semantically positive and negative graphs.
Annotating Implicit Reasoning in Arguments with Causal Links
TLDR
This work proposes a semi-structured template to represent argumentation knowledge that explicates the implicit reasoning in arguments via causality and creates a novel two-phase annotation process with simplified guidelines and shows how to collect and filter high quality implicit reasonings via crowdsourcing.
COPA-SSE: Semi-structured Explanations for Commonsense Reasoning
TLDR
This work presents Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for Choice of Plausible Alternatives questions, which are geared towards commonsense reasoners operating on knowledge graphs and serve as a starting point for ongoing work on improving such systems.
WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models
TLDR
This work introduces WinoGAViL, an online game to collect vision-and-language associations, used as a dynamic benchmark to evaluate state-of-the-art models, and finds that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
LPAttack: A Feasible Annotation Scheme for Capturing Logic Pattern of Attacks in Arguments
In argumentative discourse, persuasion is often achieved by refuting or attacking others' arguments. Attacking is not always straightforward and often comprises complex rhetorical moves, such that…
Generative Retrieval for Long Sequences
TLDR
This paper uses an encoder-decoder model to memorize the target corpus in a generative manner and then applies it to query-to-passage generation, and conjectures that generative retrieval is complementary to traditional retrieval, as an ensemble of both outperforms homogeneous ensembles.
TYPIC: A Corpus of Template-Based Diagnostic Comments on Argumentation
TLDR
This paper defines three criteria that a template set should satisfy: expressiveness, informativeness, and uniqueness; verifies, as a first trial, the feasibility of creating a template set that satisfies these criteria; and formulates the task of providing specific diagnostic comments as template selection and slot filling.
Towards an Interpretable Approach to Classify and Summarize Crisis Events from Microblogs
TLDR
An interpretable classification-summarization framework that first classifies tweets into different disaster-related categories and then summarizes those tweets, achieving a 5–25% improvement in ROUGE-1 F-score over most state-of-the-art approaches.
Generative Multi-hop Retrieval
TLDR
An encoder-decoder model is proposed that performs multi-hop retrieval by simply generating the entire text sequences of the retrieval targets, which means the query and the documents interact in the language model’s parametric space rather than L2 or inner product space as in the bi-encoder approach.
...

References

Showing 1–10 of 77 references
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
TLDR
This work collects human explanations for commonsense reasoning, in the form of natural-language sequences and highlighted annotations, in a new dataset called Common Sense Explanations, and uses them to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation framework.
WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference
TLDR
A corpus of explanations for standardized science exams, a recent challenge task for question answering, is presented, along with an explanation-centered tablestore: a collection of semi-structured tables that contain the knowledge needed to construct these elementary science explanations.
TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration
TLDR
The Shared Task on Multi-Hop Inference for Explanation Regeneration tasks participants with regenerating detailed gold explanations for standardized elementary science exam questions by selecting facts from a knowledge base of semi-structured tables.
multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
TLDR
This work addresses a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases by proposing two variants of a proof-set generation model, multiPRover and Iterative-multiPRover.
Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
TLDR
This paper investigates multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and trains generative models capable of composing explanatory rationales for unseen instances.
Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering
TLDR
A delexicalized chain representation in which repeated noun phrases are replaced by variables, thus turning them into generalized reasoning chains is explored, finding that generalized chains maintain performance while also being more robust to certain perturbations.
Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations
TLDR
A novel logical reasoner called Braid is devised, that supports probabilistic rules, and uses the notion of custom unification functions and dynamic rule generation to overcome the brittle matching and knowledge-gap problem prevalent in traditional reasoners.
Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
TLDR
This study presents Rationale^VT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs, and finds that integration of richer semantic and pragmatic visual features improves the visual fidelity of rationales.
PRover: Proof Generation for Interpretable Reasoning over Rules
TLDR
This work proposes PROVER, an interpretable transformer-based model that jointly answers binary questions over rule-bases and generates the corresponding proofs, and learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.
Abductive Commonsense Reasoning
TLDR
This study introduces a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations, and conceptualizes two new tasks -- Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and Abduction NLG: a conditional generation task for explaining given observations in natural language.
...