NOPE: A Corpus of Naturally-Occurring Presuppositions in English

Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-hwan Lee, Zhuoye Zhao, Sam Bowman, Tal Linzen
Understanding language requires not only grasping overtly stated content, but also making inferences about what was left unsaid. These inferences include presuppositions, a phenomenon by which a listener acquires new information by reasoning about what a speaker takes as given. Presuppositions require a complex understanding of the lexical and syntactic properties that trigger them, as well as of the broader conversational context. In this work, we introduce the Naturally…


Polish Natural Language Inference and Factivity -- an Expert-based Dataset and Benchmarks
Despite recent breakthroughs in machine learning for natural language processing, Natural Language Inference (NLI) remains a challenge. To this end, the authors contribute a new…


How well do NLI models capture verb veridicality?
It is shown that, encouragingly, BERT’s inferences are sensitive not only to the presence of individual verb types, but also to the syntactic role of the verb, the form of the complement clause (to- vs. that-complements), and negation.
Lexical Alternatives as a Source of Pragmatic Presuppositions
This work introduces a project that uses lexical representations without a semantic presupposition and derives the pragmatically presupposed status of the factive implication through conversational reasoning.
On the Conversational Basis of Some Presuppositions
This paper, originally published in 2001, addresses the source of presuppositions, focusing on whether presuppositions are conventional properties of linguistic…
Experimental investigations of the typology of presupposition triggers
The behaviour of presupposition triggers in human language has been extensively studied and has given rise to many distinct theoretical proposals. One intuitively appealing way of characterising…
Triggering Presuppositions
While presuppositions are often thought to be lexically encoded, researchers have repeatedly argued for 'triggering algorithms' that productively classify certain entailments as presuppositions. We…
On the Syntactic Marking of Presupposed Open Propositions
This paper examines a subset of the non-truth-conditional inferences that correlate with the syntactic form of an uttered sentence.
What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models
This work introduces a suite of diagnostics drawn from human language experiments, which allow targeted questions about the information language models use to generate predictions in context, and applies them to the popular BERT model.
Predicting the presuppositions of soft triggers
The central idea behind this paper is that presuppositions of soft triggers arise from the way our attention structures the informational content of a sentence. Some aspects of the information…
A recursive definition of "satisfaction of presupposition" is proposed that makes it unnecessary to have any explicit method for assigning presuppositions to compound sentences.
Harnessing the linguistic signal to predict scalar inferences
This work shows that an LSTM-based sentence encoder trained on an English dataset of human inference strength ratings is able to predict ratings with high accuracy, and probes the model’s behavior using manually constructed minimal sentence pairs and corpus data.