Commonsense Reasoning for Natural Language Processing

@inproceedings{Sap2020CommonsenseRF,
  title={Commonsense Reasoning for Natural Language Processing},
  author={Maarten Sap and Vered Shwartz and Antoine Bosselut and Yejin Choi and Dan Roth},
  booktitle={ACL},
  year={2020}
}
Commonsense knowledge, such as knowing that “bumping into people annoys them” or “rain makes the road slippery”, helps humans navigate everyday situations seamlessly. Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades. In recent years, commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community, yielding exploratory studies in… 
An Atlas of Cultural Commonsense for Machine Reasoning
TLDR
This work introduces an approach that extends prior work on crowdsourcing commonsense knowledge by incorporating differences in knowledge that are attributable to cultural or national groups, and moves a step closer to building a machine that does not assume a rigid framework of universal commonsense knowledge, but rather has the ability to reason in a contextually and culturally sensitive way.
Commonsense Reasoning for Question Answering with Explanations
TLDR
A latent-variable model is proposed that identifies what type of knowledge from an external knowledge base may be relevant to answering the question, computes the commonsense inferences, and predicts the answer, and can learn to provide posterior rationales for why a certain answer was chosen.
Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey
TLDR
A survey is presented of commonsense knowledge acquisition and reasoning tasks, and of the strengths and weaknesses of state-of-the-art pre-trained models for commonsense reasoning and generation as revealed by these tasks, together with reflections on future research directions.
Do Fine-tuned Commonsense Language Models Really Generalize?
TLDR
Clear evidence is found that fine-tuned commonsense language models still do not generalize well, even with moderate changes to the experimental setup, and may, in fact, be susceptible to dataset bias.
SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
TLDR
SalKG, a simple framework for learning from KG explanations of both coarse and fine granularity, is proposed; it trains KG-augmented models to solve the task by focusing on the KG information that the explanations highlight as salient.
Commonsense-Focused Dialogues for Response Generation: An Empirical Study
TLDR
This paper auto-extracts commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph, and proposes an approach for automatic evaluation of commonsense that relies on features derived from ConceptNet and pre-trained language and dialogue models, showing reasonable correlation with human evaluation of responses’ commonsense quality.
Grounding ‘Grounding’ in NLP
TLDR
This work investigates the gap between definitions of “grounding” in NLP and Cognitive Science, and presents ways to both create new tasks and repurpose existing ones to make advances towards a more complete sense of grounding.
Information to Wisdom: Commonsense Knowledge Extraction and Compilation
TLDR
This tutorial presents state-of-the-art methodologies towards the compilation and consolidation of commonsense knowledge (CSK), covering text-extraction-based, multi-modal and Transformer-based techniques, with special focus on the issues of web search and ranking that are of relevance to the WSDM community.
Scientia Potentia Est - On the Role of Knowledge in Computational Argumentation
TLDR
A pyramid of types of knowledge required in CA tasks is proposed, the state of the art is analysed with respect to the reliance on and exploitation of these types of knowledge for each of the four main research areas in CA, and directions for future research efforts in CA are outlined.
...

References

Showing 1–10 of 78 references
Commonsense Reasoning for Natural Language Understanding: A Survey of Benchmarks, Resources, and Approaches
TLDR
This paper aims to provide an overview of existing tasks and benchmarks, knowledge resources, and learning and inference approaches toward commonsense reasoning for natural language understanding to support a better understanding of the state of the art, its limitations, and future challenges.
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
TLDR
This work collects human explanations for commonsense reasoning, in the form of natural language sequences and highlighted annotations, in a new dataset called Common Sense Explanations, which is used to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation framework.
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
TLDR
This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
Commonsense Knowledge Aware Conversation Generation with Graph Attention
TLDR
This is the first attempt to use large-scale commonsense knowledge in conversation generation; unlike existing models that use knowledge triples (entities) separately and independently, this model treats each knowledge graph as a whole, encoding more structured, connected semantic information from the graphs.
Reasoning with Heterogeneous Knowledge for Commonsense Machine Comprehension
TLDR
A multi-knowledge reasoning model is proposed, which selects inference rules for a specific reasoning context using an attention mechanism and reasons by summarizing all valid inference rules.
KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning
TLDR
This paper proposes a textual inference framework for answering commonsense questions, which effectively utilizes external, structured commonsense knowledge graphs to perform explainable inferences.
Unsupervised Commonsense Question Answering with Self-Talk
TLDR
An unsupervised framework based on self-talk is proposed as a novel alternative for multiple-choice commonsense tasks; inspired by inquiry-based discovery learning, it improves performance on several benchmarks and competes with models that obtain knowledge from external KBs.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
TLDR
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Improving Natural Language Inference Using External Knowledge in the Science Questions Domain
TLDR
A combination of techniques is presented that harnesses knowledge graphs to improve performance on the NLI problem in the science questions domain, achieving new state-of-the-art performance on the SciTail science questions dataset.
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning
TLDR
This paper introduces Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions, and proposes a new architecture that improves over the competitive baselines.
...