“I’m Not Mad”: Commonsense Implications of Negation and Contradiction

Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, Yejin Choi
Natural language inference requires reasoning about contradictions, negations, and their commonsense implications. Given a simple premise (e.g., “I’m mad at you”), humans can reason about the varying shades of contradictory statements ranging from straightforward negations (“I’m not mad at you”) to commonsense contradictions (“I’m happy”). Moreover, these negated or contradictory statements shift the commonsense implications of the original premise in interesting and nontrivial ways. For… 
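As an illustration of the distinction the paper draws (a toy heuristic sketch only; the cue list and rule are hypothetical, and the paper models the commonsense implications of both kinds of statements with learned models, not keyword rules), surface negation can be separated from commonsense contradiction by the presence of an explicit negation cue:

```python
# Toy distinction between surface negation ("I'm not mad at you") and
# commonsense contradiction ("I'm happy"). Illustrative heuristic only.

NEGATION_CUES = {"not", "n't", "never", "no"}

def contradiction_type(statement: str) -> str:
    """Classify a contradictory statement by whether it carries an explicit
    negation cue; otherwise treat it as a commonsense contradiction."""
    tokens = statement.lower().replace("n't", " n't").split()
    return "negation" if NEGATION_CUES & set(tokens) else "commonsense contradiction"
```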

NegatER: Unsupervised Discovery of Negatives in Commonsense Knowledge Bases

Experiments demonstrate that, compared to multiple contrastive data augmentation approaches, NegatER yields negatives that are more grammatical, coherent, and informative—leading to statistically significant accuracy improvements in a challenging KB completion task and confirming that the positive knowledge in LMs can be “re-purposed” to generate negative knowledge.

A Question-Answer Driven Approach to Reveal Affirmative Interpretations from Verbal Negations

Experimental results show that state-of-the-art transformers trained with existing NLI corpora are insufficient to reveal affirmative interpretations, and the generation task also remains a challenge, as T5 substantially underperforms humans.

UnCommonSense: Informative Negative Knowledge about Everyday Concepts

This paper presents the UnCommonSense framework for materializing informative negative commonsense statements, and shows that this method significantly outperforms the state-of-the-art.

“It doesn’t look good for a date”: Transforming Critiques into Preferences for Conversational Recommendation Systems

This work presents a method for transforming a user critique into a positive preference in order to retrieve reviews pertaining to potentially better recommendations, and shows that utilizing critique-to-preference transformation improves recommendations.

Negative Statements Considered Useful

Relational World Knowledge Representation in Contextual Language Models: A Review

This work proposes to organize knowledge representation strategies in LMs by the level of KB supervision provided, from no KB supervision at all to entity- and relation-level supervision, and provides a high-level, extensible taxonomy for knowledge representation in LMs.

Some Reflections on Drawing Causal Inference using Textual Data: Parallels Between Human Subjects and Organized Texts

It is hoped this article would raise the awareness of the importance of articulating and clarifying fundamental concepts before delving into developing methodologies when drawing causal inference using textual data.

Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems

This work proposes an exploratory taxonomy that spans these two targets and the different threat model dimensions, and designs and proposes several potential attack methods, showing that it is possible to subtly modify claim-salient snippets in the evidence, in addition to generating diverse and claim-aligned evidence.

Synthetic Disinformation Attacks on Automated Fact Verification Systems

This work explores the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings: ADVERSARIAL ADDITION, where documents are fabricated and added to the evidence repository available to the fact-checking system, and ADVERSARIAL MODIFICATION, where existing evidence source documents in the repository are automatically altered.

PInKS: Preconditioned Commonsense Inference with Minimal Supervision

It is shown that PInKS improves results on benchmarks focused on reasoning with the preconditions of commonsense knowledge (by up to 40% Macro-F1), and its behavior is investigated through PAC-Bayesian informativeness analysis, precision measures, and an ablation study.

Abductive Commonsense Reasoning

This study introduces a challenge dataset, ART, consisting of over 20k commonsense narrative contexts and 200k explanations, and conceptualizes two new tasks -- Abductive NLI, a multiple-choice question-answering task for choosing the more likely explanation, and Abductive NLG, a conditional generation task for explaining given observations in natural language.
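To make the multiple-choice task concrete (a minimal sketch; the field names and the word-overlap scorer are illustrative only, not the ART release schema or the paper's models), an Abductive NLI instance pairs two observations with candidate explanations, and the task is to pick the more plausible one:

```python
# Minimal sketch of an Abductive NLI (multiple-choice) instance.
# Field names and the word-overlap scorer are illustrative only.

def overlap_score(hypothesis: str, context: str) -> int:
    """Toy plausibility proxy: count word types shared with the context."""
    return len(set(hypothesis.lower().split()) & set(context.lower().split()))

def choose_explanation(obs1: str, obs2: str, hypotheses: list[str]) -> str:
    """Pick the hypothesis that best overlaps the two observations."""
    context = obs1 + " " + obs2
    return max(hypotheses, key=lambda h: overlap_score(h, context))

example = {
    "obs1": "Dotty was in a bad mood.",
    "obs2": "Dotty felt much better afterwards.",
    "hypotheses": [
        "Dotty ate a slice of her favorite cake.",
        "Dotty lost her favorite cake.",
    ],
}
best = choose_explanation(example["obs1"], example["obs2"], example["hypotheses"])
```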

ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
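For context (a sketch under a simplified schema; ATOMIC's actual release is distributed as tab-separated files with nine if-then relation types, and the relation names below follow the paper while the storage format here is illustrative), ATOMIC knowledge can be viewed as if-then triples indexed for lookup:

```python
# Toy representation of ATOMIC-style if-then commonsense triples.
# Relation names (xIntent, xReact, oReact) follow the paper; the
# in-memory storage format is illustrative, not the released schema.
from collections import defaultdict

triples = [
    ("PersonX pays PersonY a compliment", "xIntent", "to be nice"),
    ("PersonX pays PersonY a compliment", "oReact", "flattered"),
    ("PersonX pays PersonY a compliment", "xReact", "good about themselves"),
]

# Index by (event, relation) so inferences can be looked up on demand.
kb = defaultdict(list)
for event, relation, inference in triples:
    kb[(event, relation)].append(inference)
```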

Thinking Like a Skeptic: Defeasible Inference in Natural Language

From Defeasible NLI, both a classification task and a generation task for defeasible inference are developed, and it is demonstrated that the generation task is much more challenging.
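To make the task concrete (a minimal sketch; the field names are illustrative, not the Defeasible NLI release format, though the strengthener/weakener labels follow the paper), a defeasible-inference instance attaches update sentences that strengthen or weaken a default inference:

```python
# Sketch of a defeasible-inference instance: each update sentence either
# strengthens or weakens how likely the hypothesis is given the premise.
# Field names are illustrative; labels follow the paper's terminology.

instance = {
    "premise": "Two men and a dog are standing among rolling green hills.",
    "hypothesis": "The men are farmers.",
    "updates": [
        ("The men are wearing overalls and holding pitchforks.", "strengthener"),
        ("The men are holding surfboards.", "weakener"),
    ],
}

def split_updates(inst: dict) -> tuple[list[str], list[str]]:
    """Separate updates by whether they strengthen or weaken the hypothesis."""
    strengtheners = [u for u, label in inst["updates"] if label == "strengthener"]
    weakeners = [u for u, label in inst["updates"] if label == "weakener"]
    return strengtheners, weakeners
```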

Social IQA: Commonsense Reasoning about Social Interactions

It is established that Social IQA, the first large-scale benchmark for commonsense reasoning about social situations, is challenging for existing question-answering models based on pretrained language models, which trail human performance by more than 20%.

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering

This paper presents a novel approach that generates contextually-relevant symbolic knowledge structures on demand using generative neural commonsense knowledge models and achieves significant performance boosts over pretrained language models and vanilla knowledge models, all while providing interpretable reasoning paths for its predictions.

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.

An Atlas of Cultural Commonsense for Machine Reasoning

This work introduces an approach that extends prior work on crowdsourcing commonsense knowledge by incorporating differences in knowledge that are attributable to cultural or national groups, and moves a step closer towards building a machine that doesn't assume a rigid framework of universal commonsense knowledge, but rather has the ability to reason in a contextually and culturally sensitive way.

PIQA: Reasoning about Physical Commonsense in Natural Language

The task of physical commonsense reasoning and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA are introduced and analysis about the dimensions of knowledge that existing models lack are provided, which offers significant opportunities for future research.

Translating Negation: A Manual Error Analysis

It is shown that an informative empirical error analysis can be formulated in terms of the set of semantic elements involved in the meaning of negation, together with a small set of string-based operations that characterise errors in the translation of those elements.

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

It is proposed that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents, and a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them is proposed.