Corpus ID: 201058651

Abductive Commonsense Reasoning

@article{Bhagavatula2019AbductiveCR,
  title={Abductive Commonsense Reasoning},
  author={Chandra Bhagavatula and Ronan Le Bras and Chaitanya Malaviya and Keisuke Sakaguchi and Ari Holtzman and Hannah Rashkin and Doug Downey and Scott Yih and Yejin Choi},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.05739}
}
Abductive reasoning is inference to the most plausible explanation. […] We conceptualize a new task of Abductive NLI and introduce a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations, formulated as multiple-choice questions for easy automatic evaluation. We establish comprehensive baseline performance on this task based on state-of-the-art NLI and language models, which leads to 68.9% accuracy, well below human performance (91.4%). Our analysis…
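
For concreteness, here is a minimal sketch of how a likelihood-based language-model baseline could score one Abductive NLI (αNLI) instance: concatenate the first observation, a candidate hypothesis, and the second observation, score each resulting narrative, and pick the hypothesis that yields the more plausible story. The use of GPT-2 via Hugging Face transformers and the example instance below are illustrative assumptions, not the authors' exact baseline or data from ART.

# Minimal sketch of a likelihood-based baseline for Abductive NLI (aNLI).
# Assumptions: GPT-2 via Hugging Face transformers as the scorer and a
# made-up example instance; the paper's actual baselines and ART data differ.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text):
    # Total log-likelihood of the text under the language model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.shape[1] - 1)

def choose_hypothesis(obs1, obs2, hyp1, hyp2):
    # Return 1 or 2 for the hypothesis that makes the narrative more likely.
    scores = [log_likelihood(f"{obs1} {h} {obs2}") for h in (hyp1, hyp2)]
    return 1 if scores[0] >= scores[1] else 2

# Hypothetical instance, purely for illustration (not drawn from ART).
print(choose_hypothesis(
    obs1="The driveway was covered in snow in the morning.",
    obs2="By noon the driveway was completely clear.",
    hyp1="Someone shoveled the driveway.",
    hyp2="It snowed even harder all morning.",
))

Stronger baselines in this space typically fine-tune an NLI-style classifier on observation–hypothesis pairs rather than relying on zero-shot likelihoods; the sketch is only meant to illustrate the multiple-choice task format.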

Citations of this paper

Generating Hypothetical Events for Abductive Inference

This work proposes a multi-task model, MTL, for the Abductive NLI task: it predicts a plausible explanation by considering the different possible events that emerge from the candidate hypotheses (events generated by LMI) and selecting the hypothesis most similar to the observed outcome.

Visual Abductive Reasoning

This work introduces a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations, and devises a strong baseline model, REASONER (a causal-and-cascaded reasoning Transformer), which surpasses several well-known video-language models while still falling far behind human performance.

Case-Based Abductive Natural Language Inference

This paper presents Case-Based Abductive Natural Language Inference (CB-ANLI), a model that addresses unseen inference problems through analogical transfer of prior explanations from similar examples, and that can be effectively integrated with sparse and dense pre-trained encoders to improve multi-hop inference.

Thinking Like a Skeptic: Defeasible Inference in Natural Language

From Defeasible NLI, both a classification task and a generation task for defeasible inference are developed, and it is demonstrated that the generation task is much more challenging.

Can Language Models perform Abductive Commonsense Reasoning?

This report reviews some of the methodologies that have been attempted to solve the Abductive Commonsense Reasoning challenge, re-implements the baseline models, and analyzes some of the weaknesses of current approaches.

The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning

This work presents Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents, and collects 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset.

Interactive Model with Structural Loss for Language-based Abductive Reasoning

It is argued that it is unnecessary to distinguish the reasoning abilities among correct hypotheses; similarly, all wrong hypotheses contribute equally when explaining the reasons for the observations.

ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning

This work presents ExplaGraphs, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction, and proposes a multi-level evaluation framework that checks for the structural and semantic correctness of the generated graphs and their degree of match with ground-truth graphs.

RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge

RiddleSense, a new multiple-choice question answering task, is presented, which comes with the first large dataset (5.7k examples) for answering riddle-style commonsense questions, and it is pointed out that there is a large gap between the best supervised model and human performance.
...

References

Showing 1-10 of 58 references

Interpretation as Abduction

An approach to abductive inference, called “weighted abduction”, that has resulted in a significant simplification of how the problem of interpreting texts is conceptualized, can be combined with the older view of “parsing as deduction” to produce an elegant and thorough integration of syntax, semantics, and pragmatics.

Robust Textual Inference Via Learning and Abductive Reasoning

This approach can be viewed as combining statistical machine learning and classical logical reasoning, in the hope of marrying the robustness and scalability of learning with the preciseness and elegance of logical theorem proving.

ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.

One Hundred Challenge Problems for Logical Formalizations of Commonsense Psychology

This work presents a new set of challenge problems for the logical formalization of commonsense knowledge, called TriangleCOPA, which is specifically designed to support the development of logic-based commonsense theories.

SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

This paper introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning, and proposes Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data.

e-SNLI: Natural Language Inference with Natural Language Explanations

The Stanford Natural Language Inference dataset is extended with an additional layer of human-annotated natural language explanations of the entailment relations, which can be used for various goals, such as obtaining full sentence justifications of a model’s decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.

Annotation Artifacts in Natural Language Inference Data

It is shown that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.

The Extraordinary Ordinary Powers of Abductive Reasoning

The psychology of cognition has been influenced by semiotic models of representation, but little work has been done relating semiotics and the process of cognition proper. In this paper, I argue that…

Natural Logic for Textual Inference

This paper presents the first use of a computational model of natural logic (a system of logical inference which operates over natural language) for textual inference, and provides the first reported results for any system on the FraCaS test suite.

Ordinal Common-sense Inference

This work describes a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task, and annotates subsets of previously established datasets via the ordinal annotation protocol in order to analyze the distinctions between those datasets and the newly constructed one.
...