Corpus ID: 222310337

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

@inproceedings{Hwang2021COMETATOMIC2O,
  title={COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs},
  author={Jena D. Hwang and Chandra Bhagavatula and Ronan Le Bras and Jeff Da and Keisuke Sakaguchi and Antoine Bosselut and Yejin Choi},
  booktitle={AAAI},
  year={2021}
}
Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs (CSKG) has been central to these advances as their diverse facts can be used and referenced by machine learning models for tackling new and challenging tasks. At the same time, there remain questions about the quality and coverage of these resources due to the massive scale required to comprehensively… 
Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset
Reasoning over commonsense knowledge bases (CSKB) whose elements are in the form of free-text is an important yet hard task in NLP. While CSKB completion only fills the missing links within the…
Analyzing Commonsense Emergence in Few-shot Knowledge Models
TLDR
The results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to knowledge already encoded during pretraining.
Understanding Few-Shot Commonsense Knowledge Models
TLDR
This work investigates training commonsense knowledge models in a few-shot setting with limited tuples per commonsense relation in the graph, and finds that human quality ratings for knowledge produced from a few-shot trained system can achieve performance within 6% of knowledge produced from fully supervised systems.
BertNet: Harvesting Knowledge Graphs from Pretrained Language Models
TLDR
This work aims at harvesting symbolic KGs from LMs, proposing a new framework for automatic KG construction empowered by the neural LMs’ flexibility and scalability, and derives from diverse LMs a family of new KGs that contain a richer set of commonsense relations, including complex ones, than human-annotated KGs.
Commonsense Knowledge in Word Associations and ConceptNet
TLDR
An in-depth comparison of two large-scale resources of general knowledge, ConceptNet (an engineered relational database) and SWOW (a knowledge graph derived from crowd-sourced word associations), shows empirically that both resources improve downstream task performance on commonsense reasoning benchmarks over text-only baselines.
DISCOS: Bridging the Gap between Discourse Knowledge and Commonsense Knowledge
TLDR
Experiments demonstrate that the proposed commonsense knowledge acquisition framework DISCOS can successfully convert discourse knowledge about eventualities from ASER, a large-scale discourse knowledge graph, into if-then commonsense knowledge defined in ATOMIC without any additional annotation effort.
Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization
TLDR
This work thoroughly studies the possible role of conceptualization in commonsense reasoning, formulates a framework to replicate human conceptual induction by acquiring abstract knowledge about abstract concepts, and develops tools for contextualization on ATOMIC, a large-scale human-annotated CKG.
Commonsense Reasoning: How do Neuro-Symbolic and Neuro-only Approaches Compare?
TLDR
This paper sets out to compare a Neuro-Symbolic model with mainstream Neuro-only models when they are tasked with solving commonsense reasoning problems, and indicates that there is no clear advantage to either approach.
Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference
TLDR
This work shows the effectiveness of using a common framework, Natural Language Inference (NLI), to solve diverse commonsense reasoning tasks, by leveraging transfer learning from large NLI datasets, and injecting crucial knowledge from commonsense sources such as ATOMIC 2020 and ConceptNet.
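To make that recipe concrete, here is a minimal sketch of scoring answer candidates with an off-the-shelf NLI model after a commonsense fact is appended to the premise. The checkpoint name, the hand-written injected fact, and the premise/hypothesis templates are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: rank commonsense answer choices by NLI entailment probability,
# with a commonsense fact injected into the premise. Checkpoint and fact are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name)
# Look up the entailment label index from the model config (case-insensitive).
entail_id = {k.upper(): v for k, v in nli.config.label2id.items()}["ENTAILMENT"]

def entailment_score(premise: str, hypothesis: str) -> float:
    batch = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(nli(**batch).logits, dim=-1)
    return probs[0, entail_id].item()

# Context (question) plus an injected commonsense fact form the premise.
premise = "PersonX drops their phone. As a result, the screen cracks."
for answer in ["PersonX is upset.", "PersonX is delighted."]:
    print(answer, round(entailment_score(premise, answer), 3))
```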
...

References

Showing 1-10 of 46 references
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
TLDR
This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
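The core recipe is to linearize (head, relation, tail) tuples into text and have a generative LM complete the tail given head and relation. The sketch below illustrates that interface only; the base checkpoint, prompt format, and "[GEN]" separator are illustrative assumptions rather than the authors' released setup, and in practice the model is first fine-tuned on KG tuples in this format.

```python
# Hedged sketch of the COMET-style interface: linearize tuples as text and
# let a causal LM generate the tail. Model name and separator are assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM

def linearize(head: str, relation: str, tail: str = "") -> str:
    # e.g. "PersonX buys a coffee xEffect [GEN] PersonX feels awake"
    return f"{head} {relation} [GEN] {tail}".strip()

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in base LM
model = AutoModelForCausalLM.from_pretrained("gpt2")   # fine-tuned on KG tuples in practice

prompt = linearize("PersonX buys a coffee", "xEffect")
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=10, do_sample=False,
                     pad_token_id=tok.eos_token_id)
# Decode only the newly generated tail tokens.
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```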
KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning
TLDR
This paper proposes a textual inference framework for answering commonsense questions, which effectively utilizes external, structured commonsense knowledge graphs to perform explainable inferences.
TransOMCS: From Linguistic Graphs to Commonsense Knowledge
TLDR
Experimental results demonstrate the transferability of linguistic knowledge to commonsense knowledge and the effectiveness of the proposed approach in terms of quantity, novelty, and quality.
Commonsense Knowledge Mining from Pretrained Models
TLDR
This work develops a method for generating commonsense knowledge using a large, pre-trained bidirectional language model that can be used to rank a triple’s validity by the estimated pointwise mutual information between the two entities.
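The scoring idea can be illustrated by comparing a masked LM's probability of the tail with and without the head in the sentence. The sketch below is a simplification under stated assumptions (a single-token tail and hand-written templates), not the paper's exact procedure.

```python
# Hedged sketch of PMI-style triple scoring with a masked LM, simplified to a
# single-token tail. Templates and checkpoint are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def log_p_of_word(sentence_with_mask: str, word: str) -> float:
    ids = tok(sentence_with_mask, return_tensors="pt").input_ids
    pos = (ids == tok.mask_token_id).nonzero()[0, 1]   # position of [MASK]
    with torch.no_grad():
        logits = mlm(ids).logits[0, pos]
    word_id = tok.convert_tokens_to_ids(word)
    return torch.log_softmax(logits, dim=-1)[word_id].item()

# PMI(head, tail | relation) ≈ log P(tail | head, rel) - log P(tail | rel)
joint = log_p_of_word(f"a guitar is used for making {tok.mask_token}.", "music")
marg  = log_p_of_word(f"something is used for making {tok.mask_token}.", "music")
print("pmi-style score:", joint - marg)
```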
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
TLDR
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Commonsense Knowledge Base Completion
TLDR
This work develops neural network models for scoring tuples on arbitrary phrases, evaluates them by their ability to distinguish true held-out tuples from false ones, and finds strong performance from a bilinear model using a simple additive architecture to model phrases.
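A minimal sketch of that kind of scorer is below: each phrase is encoded additively (here, a mean of word embeddings) and a per-relation matrix scores head-tail pairs. The dimensions, vocabulary, and training loss mentioned in the comment are toy assumptions, not the paper's exact configuration.

```python
# Hedged sketch of bilinear tuple scoring with additive phrase encoding.
import torch
import torch.nn as nn

class BilinearTupleScorer(nn.Module):
    def __init__(self, vocab_size: int, n_relations: int, dim: int = 64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.rel_mats = nn.Parameter(torch.randn(n_relations, dim, dim) * 0.01)

    def encode_phrase(self, word_ids):            # (batch, words) -> (batch, dim)
        return self.word_emb(word_ids).mean(dim=1)

    def forward(self, head_ids, rel_ids, tail_ids):
        h = self.encode_phrase(head_ids)           # (batch, dim)
        t = self.encode_phrase(tail_ids)           # (batch, dim)
        W = self.rel_mats[rel_ids]                 # (batch, dim, dim)
        # score = h^T W_r t; trained with a true-vs-corrupted tuple objective
        return torch.einsum("bd,bde,be->b", h, W, t)

scorer = BilinearTupleScorer(vocab_size=1000, n_relations=10)
scores = scorer(torch.randint(0, 1000, (2, 3)),   # toy head phrases
                torch.tensor([1, 4]),              # relation ids
                torch.randint(0, 1000, (2, 3)))    # toy tail phrases
print(scores.shape)  # torch.Size([2])
```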
Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning
TLDR
This paper introduces a new scoring method that casts a plausibility ranking task in a full-text format and leverages the masked language modeling head tuned during the pre-training phase, and requires less annotated data than the standard classifier approach to reach equivalent performance.
PIQA: Reasoning about Physical Commonsense in Natural Language
TLDR
The task of physical commonsense reasoning and a corresponding benchmark dataset, Physical Interaction: Question Answering (PIQA), are introduced, and an analysis of the dimensions of knowledge that existing models lack is provided, which offers significant opportunities for future research.
Unsupervised Commonsense Question Answering with Self-Talk
TLDR
An unsupervised framework based on self-talk, inspired by inquiry-based discovery learning, is proposed as a novel alternative to multiple-choice commonsense tasks; it improves performance on several benchmarks and competes with models that obtain knowledge from external KBs.
...