ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

@article{Sap2019ATOMICAA,
  title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
  author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
  journal={ArXiv},
  year={2019},
  volume={abs/1811.00146}
}
We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. [...] Key result: experimental results demonstrate that multitask models incorporating the hierarchical structure of if-then relation types yield more accurate inference than models trained in isolation, as measured by both automatic and human evaluation.
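The if-then knowledge ATOMIC describes can be sketched as (event, relation, inference) triples. A minimal illustration follows; the relation names (xIntent, xReact, oReact) are among ATOMIC's if-then relation types, but the helper function and the example tail strings are hypothetical, not actual dataset entries:

```python
# Sketch of ATOMIC-style if-then knowledge triples (illustrative, not dataset entries).
# Relations: xIntent = PersonX's intent, xReact = PersonX's reaction,
# oReact = others' reaction.
atomic_triples = [
    ("PersonX pays PersonY a compliment", "xIntent", "to be nice"),
    ("PersonX pays PersonY a compliment", "xReact", "satisfied"),
    ("PersonX pays PersonY a compliment", "oReact", "flattered"),
]

def tails(event, relation, triples):
    """Return all inferred tails for a given (event, relation) pair."""
    return [t for (e, r, t) in triples if e == event and r == relation]

print(tails("PersonX pays PersonY a compliment", "xReact", atomic_triples))
```

Looking up a fixed (event, relation) pair like this is the simplest form of the inference the paper's models learn to generate for unseen events.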
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
TLDR
It is argued that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents, and a new evaluation framework is proposed for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them.
Neural-Symbolic Commonsense Reasoner with Relation Predictors
TLDR
A neural-symbolic reasoner, capable of reasoning over large-scale dynamic CKGs, and the logic rules for reasoning over CKGs are learned during training by the model, which helps to generalise prediction to newly introduced events.
Social Commonsense Reasoning with Multi-Head Knowledge Attention
TLDR
This work proposes a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell, and is the first to demonstrate that a model that learns to perform counterfactual reasoning helps predict the best explanation in an abductive reasoning task.
CIDER: Commonsense Inference for Dialogue Explanation and Reasoning
TLDR
This work introduces CIDER – a manually curated dataset that contains dyadic dialogue explanations in the form of implicit and explicit knowledge triplets inferred using contextual commonsense inference that can be conducive to improving several downstream applications.
On Symbolic and Neural Commonsense Knowledge Graphs
Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs
KGR^4: Retrieval, Retrospect, Refine and Rethink for Commonsense Generation
  • Xin Liu, Dayiheng Liu, +6 authors Jinsong Su
  • Computer Science
    ArXiv
  • 2021
TLDR
A novel Knowledge-enhanced Commonsense Generation framework, termed KGR^4, consisting of four stages (Retrieval, Retrospect, Refine, Rethink), which selects the output sentence from candidate sentences produced by generators with different hyper-parameters.
Commonsense Knowledge in Word Associations and ConceptNet
TLDR
An in-depth comparison of two large-scale resources of general knowledge: ConceptNet, an engineered relational database, and SWOW, a knowledge graph derived from crowd-sourced word associations shows empirically that both resources improve downstream task performance on commonsense reasoning benchmarks over text-only baselines.
ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
TLDR
This work presents EXPLAGRAPHS, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction, and proposes a multi-level evaluation framework that checks for the structural and semantic correctness of the generated graphs and their degree of match with ground-truth graphs.
Commonsense Reasoning for Natural Language Processing
TLDR
This tutorial is organized to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hope of casting a brighter light on this promising area of future research.
Abductive Commonsense Reasoning
TLDR
This study introduces a challenge dataset, ART, that consists of over 20k commonsense narrative contexts and 200k explanations, and conceptualizes two new tasks: Abductive NLI, a multiple-choice question answering task for choosing the more likely explanation, and Abductive NLG, a conditional generation task for explaining given observations in natural language.

References

Showing 1-10 of 29 references
EventNet: Inferring Temporal Relations Between Commonsense Events
TLDR
EventNet is a toolkit for inferring temporal relations between commonsense events; it comprises 10,000 nodes and 30,000 temporal links mined from the Open Mind Commonsense knowledge base, and finds semantically similar nodes to dynamically search the knowledge base.
Event2Mind: Commonsense Inference on Events, Intents, and Reactions
TLDR
It is demonstrated how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.
Did It Happen? The Pragmatic Complexity of Veridicality Assessment
TLDR
This work extends the FactBank corpus, which contains semantically driven veridicality annotations, with pragmatically informed ones, and shows that context and world knowledge play a significant role in shaping veridicality.
Unsupervised Learning of Narrative Event Chains
TLDR
A three-step process for learning narrative event chains, using unsupervised distributional methods to learn narrative relations between events sharing coreferring arguments, together with two new evaluations: the narrative cloze to evaluate event relatedness, and an order coherence task to evaluate narrative order.
A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories
TLDR
A new framework for evaluating story understanding and script learning, the 'Story Cloze Test', which requires a system to choose the correct ending to a four-sentence story, along with a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation.
SemEval-2012 Task 7: Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning
TLDR
The two systems that competed in this task as part of SemEval-2012 are described, and their results are compared to those achieved in previously published research.
Can we derive general world knowledge from texts?
TLDR
Preliminary results of the project's first phase are reported, indicating its feasibility as well as its likely limitations.
Building machines that learn and think like people
TLDR
It is argued that truly human-like learning and thinking machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems, and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations.
Embedding Entities and Relations for Learning and Inference in Knowledge Bases
TLDR
It is found that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication.
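The bilinear objective mentioned above can be illustrated with a diagonal bilinear score in the style of DistMult, where each relation is a diagonal matrix and composing two relations corresponds to multiplying their diagonals. This is a hedged sketch under that assumption, not the paper's code; the embeddings and the `score` helper are illustrative:

```python
import numpy as np

# Diagonal bilinear relation scoring: score(h, r, t) = h^T diag(r) t.
# With diagonal relation matrices, relation composition reduces to
# elementwise multiplication of the relation parameters.
rng = np.random.default_rng(0)
dim = 4
h = rng.normal(size=dim)   # head entity embedding (illustrative)
t = rng.normal(size=dim)   # tail entity embedding (illustrative)
r1 = rng.normal(size=dim)  # relation 1: diagonal of its matrix
r2 = rng.normal(size=dim)  # relation 2: diagonal of its matrix

def score(h, r, t):
    """Bilinear form h^T diag(r) t, computed elementwise."""
    return float(h @ (r * t))

# Composing r1 then r2 is modeled by the product of their diagonals,
# so the composed relation's score matches the directly computed one.
r_composed = r1 * r2
assert np.isclose(score(h, r_composed, t), float(h @ (r1 * r2 * t)))
```

Because diagonal matrices commute, this composition is order-independent, which is one reason the diagonal (DistMult-style) variant is a simplification rather than a full bilinear model.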
WebChild 2.0 : Fine-Grained Commonsense Knowledge Distillation
TLDR
This paper presents a system based on a series of algorithms to distill fine-grained disambiguated commonsense knowledge from massive amounts of text.