CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

Abhilasha Ravichander, Matt Gardner, Ana Marasović
The full power of human language-based communication cannot be realized without negation. All human languages have some form of negation. Despite this, negation remains a challenging phenomenon for current natural language understanding systems. To facilitate the future development of models that can process negation effectively, we present CONDAQA, the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect…

A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs

A benchmark collection of NLI examples that are grammatical and correctly labeled, produced through manual inspection and reformulation, is presented to probe the negation-awareness of multilingual language models; it finds that models that correctly predict examples with negation cues often fail to correctly predict their counter-examples without negation cues.

Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

This work presents a new crowdsourced dataset containing more than 24K span-selection questions that require resolving coreference among entities in over 4.7K English paragraphs from Wikipedia, and shows that state-of-the-art reading comprehension models perform significantly worse than humans on this benchmark.

VQA-LOL: Visual Question Answering under the Lens of Logic

This paper proposes a model which uses question-attention and logic-attention to understand logical connectives in the question, and a novel Fréchet-Compatibility Loss, which ensures that the answers of the component questions and the composed question are consistent with the inferred logical operation.

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

The results show that pretraining on CCG—the authors' most syntactic objective—performs the best on average across their probing tasks, suggesting that syntactic knowledge helps function word comprehension.

Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

It is found that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure, suggesting that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level.

It’s not a Non-Issue: Negation as a Source of Error in Machine Translation

Through thorough analysis, it is found that indeed the presence of negation can significantly impact downstream quality, in some cases resulting in quality reductions of more than 60%.

Stress Test Evaluation for Natural Language Inference

This work proposes an evaluation methodology consisting of automatically constructed “stress tests” that allow us to examine whether systems have the ability to make real inferential decisions, and reveals strengths and weaknesses of these models with respect to challenging linguistic phenomena.

Improving negation detection with negation-focused pre-training

This work proposes a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking, to better incorporate negation information into language models.

Know What You Don’t Know: Unanswerable Questions for SQuAD

SQuADRUn is a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.

An Analysis of Negation in Natural Language Understanding Corpora

This paper analyzes negation in eight popular corpora spanning six natural language understanding tasks and concludes that new corpora accounting for negation are needed to solve natural language understanding tasks when negation is present.