# Thinking Like a Skeptic: Defeasible Inference in Natural Language

```bibtex
@inproceedings{Rudinger2020ThinkingLA,
  title={Thinking Like a Skeptic: Defeasible Inference in Natural Language},
  author={Rachel Rudinger and Vered Shwartz and Jena D. Hwang and Chandra Bhagavatula and Maxwell Forbes and Ronan Le Bras and Noah A. Smith and Yejin Choi},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2020},
  year={2020}
}
```
Defeasible inference is a mode of reasoning in which an inference (X is a bird, therefore X flies) may be weakened or overturned in light of new evidence (X is a penguin). Though long recognized in classical AI and philosophy, defeasible inference has not been extensively studied in the context of contemporary data-driven research on natural language inference and commonsense reasoning. We introduce Defeasible NLI (abbreviated δ-NLI), a dataset for defeasible inference in natural language…
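The bird/penguin example above can be made concrete as a data sketch of the defeasible-inference setup: each instance pairs a premise–hypothesis inference with an update that either strengthens or weakens it. The field names below are illustrative, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DefeasibleInstance:
    premise: str     # background context
    hypothesis: str  # a plausible inference drawn from the premise
    update: str      # new evidence that bears on the inference
    label: str       # "strengthener" or "weakener"

examples = [
    # The update makes the hypothesis less likely: the inference is defeated.
    DefeasibleInstance(
        premise="X is a bird.",
        hypothesis="X flies.",
        update="X is a penguin.",
        label="weakener",
    ),
    # The update makes the hypothesis more likely: the inference is reinforced.
    DefeasibleInstance(
        premise="X is a bird.",
        hypothesis="X flies.",
        update="X is a hawk.",
        label="strengthener",
    ),
]

# The core classification task: given (premise, hypothesis, update),
# predict whether the update strengthens or weakens the inference.
for ex in examples:
    print(f"{ex.update} -> {ex.label}")
```

A model for this task must reason nonmonotonically: the same premise–hypothesis pair receives opposite labels depending on the update.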

#### Citations

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
• Computer Science
• AAAI
• 2021
This paper investigates multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and trains generative models capable of composing explanatory rationales for unseen instances.
Could you give me a hint? Generating inference graphs for defeasible reasoning
• Computer Science
• FINDINGS
• 2021
This paper automatically generates meaningful graphs for the defeasible inference task through transfer learning from a related NLP task that shares the kind of reasoning that inference graphs support.
Think about it! Improving defeasible reasoning by first modeling the question scenario
• Computer Science
• EMNLP
• 2021
This work achieves a new state-of-the-art on three different defeasible reasoning datasets and illustrates that performance can be improved by guiding a system to “think about” a question and explicitly model the scenario, rather than answering reflexively.
“I'm Not Mad”: Commonsense Implications of Negation and Contradiction
• Computer Science
• NAACL
• 2021
This paper introduces ANION, a new commonsense knowledge graph with 624K if-then rules focusing on negated and contradictory events, and presents joint generative and discriminative inference models for this new resource, providing novel empirical insights on how logical negations and commonsense contradictions reshape the commonsense implications of their original premises.
Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2
• Computer Science
• ArXiv
• 2021
Thinking aloud is an effective meta-cognitive strategy human reasoners apply to solve difficult problems. We suggest to improve the reasoning ability of pre-trained neural language models in a…
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?
• Computer Science
• FINDINGS
• 2021
This work proposes a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering (QA) model's unethical behavior by communicating context-specific principles of ethics and equity to it.
CoreQuisite: Circumstantial Preconditions of Common Sense Knowledge
• Computer Science
• ArXiv
• 2021
A dataset is presented, called CoreQuisite, which annotates commonsense facts with preconditions expressed in natural language, and it is shown that there is a 10-30% gap between machine and human performance on these tasks.
Exploring RoBERTa's theory of mind through textual entailment
• 2021
Within psychology, philosophy, and cognitive science, theory of mind refers to the cognitive ability to reason about the mental states of other people, thus recognizing them as having beliefs, …
Improving Neural Model Performance through Natural Language Feedback on Their Explanations
This work introduces MERCURIE, an interactive system that refines its explanations for a given reasoning task by getting human feedback in natural language, and generates graphs that have 40% fewer inconsistencies as compared with the off-the-shelf system.
Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences
• Computer Science
• EMNLP
• 2021
This work investigates whether contemporary NLG models can function as behavioral priors for systems deployed in social settings by generating action hypotheses that achieve predefined goals under moral constraints and introduces Moral Stories, a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning.

#### References

Showing 1-10 of 55 references
Natural language inference
• Computer Science
• 2009
This dissertation explores a range of approaches to NLI, beginning with methods which are robust but approximate, and proceeding to progressively more precise approaches, and greatly extends past work in natural logic to incorporate both semantic exclusion and implicativity.
Abductive Commonsense Reasoning
This study introduces a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations, and conceptualizes two new tasks -- Abductive NLI: a multiple-choice question answering task for choosing the more likely explanation, and Abductive NLG: a conditional generation task for explaining given observations in natural language.
A large annotated corpus for learning natural language inference
• Computer Science
• EMNLP
• 2015
The Stanford Natural Language Inference corpus is introduced, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning, which allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
Annotation Artifacts in Natural Language Inference Data
• Computer Science
• NAACL
• 2018
It is shown that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI and 53% of MultiNLI, and that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Ordinal Common-sense Inference
• Computer Science
• TACL
• 2017
This work describes a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task, and annotates subsets of previously established datasets via the ordinal annotation protocol in order to analyze the distinctions between these and what is constructed.
Defeasible Reasoning
A general theory of warrant, based on defeasible reasons, is developed and used as a guide in the construction of a theory of defeasible reasoning, and a computer program implementing that theory is developed.
Uncertain Natural Language Inference
• Computer Science
• ACL
• 2020
The feasibility of collecting annotations for UNLI is demonstrated by relabeling a portion of the SNLI dataset under a probabilistic scale, where items even with the same categorical label differ in how likely people judge them to be true given a premise.
Natural language inference.
The paper describes the way in which a Preference Semantics system for natural language analysis and generation tackles a difficult class of anaphoric inference problems (finding the correct referent…
Some Philosophical Problems from the Standpoint of Artificial Intelligence
The formalism of this paper represents an advance over McCarthy (1963) and Green (1969) in that it permits proof of the correctness of strategies that contain loops and strategies that involve the acquisition of knowledge; and it is also somewhat more concise.