Corpus ID: 201058651

Abductive Commonsense Reasoning

@article{Bhagavatula2020AbductiveCR,
  title={Abductive Commonsense Reasoning},
  author={Chandra Bhagavatula and Ronan Le Bras and Chaitanya Malaviya and Keisuke Sakaguchi and Ari Holtzman and Hannah Rashkin and Doug Downey and Scott Yih and Yejin Choi},
  journal={ArXiv},
  year={2020},
  volume={abs/1908.05739}
}
Abductive reasoning is inference to the most plausible explanation. [...] We conceptualize a new task of Abductive NLI and introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations, formulated as multiple-choice questions for easy automatic evaluation. We establish comprehensive baseline performance on this task based on state-of-the-art NLI and language models, which leads to 68.9% accuracy, well below human performance (91.4%). Our analysis…
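The multiple-choice formulation described in the abstract (two observations, candidate hypotheses, pick the most plausible) can be sketched as a simple accuracy evaluation. The instance fields, the toy word-overlap scorer, and the example data below are illustrative assumptions, not the actual ART schema or the paper's baselines.

```python
# Illustrative sketch of multiple-choice evaluation for an aNLI-style task.
# Instance structure (obs1, obs2, hypotheses, gold label) and the toy
# scorer are assumptions for illustration, not the ART dataset format.

def accuracy(instances, score_fn):
    """Fraction of instances where the highest-scored hypothesis is the gold one."""
    correct = 0
    for inst in instances:
        scores = [score_fn(inst["obs1"], h, inst["obs2"]) for h in inst["hypotheses"]]
        predicted = scores.index(max(scores))
        correct += predicted == inst["label"]
    return correct / len(instances)

def overlap_score(obs1, hyp, obs2):
    """Toy plausibility score: words the hypothesis shares with each observation."""
    h = set(hyp.lower().split())
    return len(h & set(obs1.lower().split())) + len(h & set(obs2.lower().split()))

instances = [
    {
        "obs1": "Dotty was very upset.",
        "obs2": "Dotty felt much better afterwards.",
        "hypotheses": ["Dotty cried all night.", "Dotty talked to a friend and felt better."],
        "label": 1,
    },
]
print(accuracy(instances, overlap_score))  # → 1.0
```

A real baseline would replace `overlap_score` with an NLI or language-model scorer; the evaluation loop itself stays the same, which is what makes the multiple-choice formulation easy to score automatically.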
Generating Hypothetical Events for Abductive Inference
TLDR
This work proposes a multi-task model, MTL, for the Abductive NLI task; it predicts a plausible explanation by considering different possible events emerging from candidate hypotheses – events generated by LMI – and selecting the one most similar to the observed outcome.
Language-Based Abductive Reasoning
The abductive natural language inference task (αNLI) is proposed to infer the most plausible explanation connecting a cause and an event. In the αNLI task, two observations are given, and the most
Thinking Like a Skeptic: Defeasible Inference in Natural Language
TLDR
Both a classification and a generation task for defeasible inference are developed from Defeasible NLI, and it is demonstrated that the generation task is much more challenging.
Social Commonsense Reasoning with Multi-Head Knowledge Attention
TLDR
This work proposes a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell, and is the first to demonstrate that a model which learns to perform counterfactual reasoning helps predict the best explanation in an abductive reasoning task.
Interactive Model with Structural Loss for Language-based Abductive Reasoning
TLDR
It is argued that it is unnecessary to distinguish the reasoning abilities among correct hypotheses; similarly, all wrong hypotheses contribute equally when explaining the reasons for the observations.
ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
TLDR
This work presents EXPLAGRAPHS, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction, and proposes a multi-level evaluation framework that checks for the structural and semantic correctness of the generated graphs and their degree of match with ground-truth graphs.
Natural Language Inference in Context - Investigating Contextual Reasoning over Long Texts
TLDR
ConTRoL is a new passage-level NLI dataset for ConTextual Reasoning over Long texts, with a focus on complex contextual reasoning types such as logical reasoning; it is derived from competitive selection and recruitment tests for police recruitment, with expert-level quality.
L2R²: Leveraging Ranking for Abductive Reasoning
TLDR
A novel L2R² approach is proposed under the learning-to-rank framework to evaluate the abductive reasoning ability of a learning system by switching to a ranking perspective that sorts the hypotheses in order of their plausibility.
Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
TLDR
This paper investigates multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and trains generative models capable of composing explanatory rationales for unseen instances.
Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards
TLDR
A systematic annotation methodology, named Explanation Entailment Verification (EEV), is proposed, to quantify the logical validity of human-annotated explanations, and confirms that the inferential properties of explanations are still poorly formalised and understood.

References

Showing 1–10 of 61 references
Interpretation as Abduction
TLDR
An approach to abductive inference, called “weighted abduction”, which has resulted in a significant simplification of how the problem of interpreting texts is conceptualized, can be combined with the older view of “parsing as deduction” to produce an elegant and thorough integration of syntax, semantics, and pragmatics.
Robust Textual Inference Via Learning and Abductive Reasoning
TLDR
This approach can be viewed as combining statistical machine learning and classical logical reasoning, in the hope of marrying the robustness and scalability of learning with the preciseness and elegance of logical theorem proving.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
TLDR
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
One Hundred Challenge Problems for Logical Formalizations of Commonsense Psychology
TLDR
This work presents a new set of challenge problems for the logical formalization of commonsense knowledge, called TriangleCOPA, which is specifically designed to support the development of logic-based commonsense theories, via two means.
SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
TLDR
This paper introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning, and proposes Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data.
e-SNLI: Natural Language Inference with Natural Language Explanations
TLDR
The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full sentence justifications of a model’s decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.
Annotation Artifacts in Natural Language Inference Data
TLDR
It is shown that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.
The Extraordinary Ordinary Powers of Abductive Reasoning
The psychology of cognition has been influenced by semiotic models of representation, but little work has been done relating semiotics and the process of cognition proper. In this paper, I argue that
Natural Logic for Textual Inference
TLDR
This paper presents the first use of a computational model of natural logic---a system of logical inference which operates over natural language---for textual inference, and provides the first reported results for any system on the FraCaS test suite.
Ordinal Common-sense Inference
TLDR
This work describes a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task; subsets of previously established datasets are annotated via the ordinal annotation protocol in order to analyze the distinctions between those datasets and the one constructed here.