COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

@inproceedings{Bosselut2019COMETCT,
  title={COMET: Commonsense Transformers for Automatic Knowledge Graph Construction},
  author={Antoine Bosselut and Hannah Rashkin and Maarten Sap and Chaitanya Malaviya and A. Çelikyilmaz and Yejin Choi},
  booktitle={ACL},
  year={2019}
}
We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). [...] Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources.
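As a rough illustration of the approach, the following minimal sketch fine-tunes an autoregressive LM on knowledge tuples serialized as text, so it learns to generate the tail of a tuple given its head and relation. It assumes the Hugging Face transformers API; the model choice, hyperparameters, and example tuple are illustrative, and unlike the paper, which masks the loss to the tail tokens, this sketch trains on all tokens for brevity.

# Sketch of COMET-style training: serialize (head, relation, tail) tuples
# as text and fine-tune an autoregressive LM to generate the tail.
# Assumes the Hugging Face `transformers` API; all settings are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One ATOMIC-style tuple: (event, relation, inference).
tuples = [("PersonX goes to the store", "xIntent", "to buy food")]

model.train()
for head, relation, tail in tuples:
    text = f"{head} {relation} {tail}{tokenizer.eos_token}"
    batch = tokenizer(text, return_tensors="pt")
    # Standard LM loss; COMET masks the head/relation tokens, omitted here.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

At generation time, sampling a continuation of "PersonX goes to the store xIntent" yields a candidate tail, which is how novel tuples are produced.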
Dynamic Knowledge Graph Construction for Zero-shot Commonsense Question Answering
Understanding narratives requires dynamically reasoning about the implicit causes, effects, and states of the situations described in text, which in turn requires understanding rich background knowledge. [...]
TLDR
Empirical results on the SocialIQa and StoryCommonsense datasets in a zero-shot setting demonstrate that using commonsense knowledge models to dynamically construct and reason over knowledge graphs achieves performance boosts over pre-trained language models and over using knowledge models to directly evaluate answers.
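A minimal sketch of the idea behind the entry above, not the paper's exact procedure: generate a commonsense inference from the context with a knowledge model, then prefer the answer the LM finds most likely given context plus inference. The model names, example, and scoring rule are illustrative assumptions.

# Sketch of zero-shot answer scoring with generated commonsense inferences.
# Assumes Hugging Face `transformers`; all names and data are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_nll(text: str) -> float:
    """Average negative log-likelihood of `text` under the LM."""
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return model(**batch, labels=batch["input_ids"]).loss.item()

context = "Alex forgot his umbrella and walked home in the rain."
# An inference a knowledge model such as COMET might generate for the context.
inference = "Alex feels cold and wet."
answers = ["Alex is soaked.", "Alex is sunburned."]

# Lower loss = more plausible continuation given context and inference.
print(min(answers, key=lambda a: avg_nll(f"{context} {inference} {a}")))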
Exploiting Structural and Semantic Context for Commonsense Knowledge Base Completion
TLDR
This paper investigates two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge.
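A minimal sketch of how these two ideas might be combined in one scorer: a graph convolution summarizes local structure, LM features of each node's phrase supply semantic context, and a bilinear form scores a triple. Dimensions, the fusion rule, and the toy usage are illustrative assumptions, not the paper's architecture.

# Sketch: fuse structural (graph-convolution) and semantic (LM) node views,
# then score (head, relation, tail) with a bilinear form. All illustrative.
import torch
import torch.nn as nn

class TripleScorer(nn.Module):
    def __init__(self, n_nodes, n_rels, lm_dim=768, dim=128):
        super().__init__()
        self.node_emb = nn.Embedding(n_nodes, dim)
        self.gcn = nn.Linear(dim, dim)                 # one graph-conv layer
        self.fuse = nn.Linear(dim + lm_dim, dim)
        self.rel = nn.Parameter(torch.randn(n_rels, dim, dim) * 0.01)

    def forward(self, adj, lm_feats, head, rel, tail):
        # Graph convolution: aggregate neighbors (row-normalized adjacency).
        h = torch.relu(self.gcn(adj @ self.node_emb.weight))
        # Fuse structural and semantic views of every node.
        z = self.fuse(torch.cat([h, lm_feats], dim=-1))
        # Bilinear triple score: z_head^T W_rel z_tail.
        return torch.einsum("d,de,e->", z[head], self.rel[rel], z[tail])

# Toy usage: 3 nodes, uniform adjacency, random LM features.
scorer = TripleScorer(n_nodes=3, n_rels=5)
print(scorer(torch.ones(3, 3) / 3, torch.randn(3, 768), head=0, rel=2, tail=1))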
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
Recent years have brought about a renewed interest in commonsense representation and reasoning in the field of natural language understanding. The development of new commonsense knowledge graphs [...]
TLDR
It is proposed that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents, and a new evaluation framework is proposed that tests the utility of KGs by how effectively implicit knowledge representations can be learned from them.
Reasoning Paths Generation for Commonsense Question Answering
  • Peifeng Wang
  • 2019
Commonsense question answering (QA) requires a model to acquire some necessary background knowledge about how the world operates and how people interact with each other. A large number of works have [...]
Analyzing Commonsense Emergence in Few-shot Knowledge Models
Recently, commonsense knowledge models — pretrained language models (LMs) finetuned on knowledge graph (KG) tuples — showed that considerable amounts of commonsense knowledge can be encoded in the [...]
On the Role of Conceptualization in Commonsense Knowledge Graph Construction
TLDR
This work introduces conceptualization to CKG construction methods, i.e., viewing entities mentioned in text as instances of specific concepts (or vice versa), and builds synthetic triples by conceptualization.
Language Generation with Multi-hop Reasoning on Commonsense Knowledge Graph
TLDR
This paper proposes Generation with Multi-Hop Reasoning Flow (GRF), which enables pre-trained models to perform dynamic multi-hop reasoning over multi-relational paths extracted from an external commonsense knowledge graph, and empirically shows that the model outperforms existing baselines on three text generation tasks that require reasoning over commonsense knowledge.
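A minimal sketch of the path-extraction step such multi-hop approaches rely on: collect relational paths (here up to two hops) between concepts mentioned in the input, which the generator can then reason over. The toy graph, relation names, and two-hop cutoff are illustrative.

# Sketch: extract multi-relational paths between mentioned concepts.
# Uses networkx; the toy graph and cutoff are illustrative.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("rain", "wet", relation="Causes")
kg.add_edge("wet", "umbrella", relation="MotivatedByGoal")

def relational_paths(graph, src, dst, max_hops=2):
    # Yield each path as a list of (head, relation, tail) triples.
    for path in nx.all_simple_paths(graph, src, dst, cutoff=max_hops):
        yield [(u, graph[u][v]["relation"], v) for u, v in zip(path, path[1:])]

for p in relational_paths(kg, "rain", "umbrella"):
    print(p)  # [('rain', 'Causes', 'wet'), ('wet', 'MotivatedByGoal', 'umbrella')]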

References

Showing 1-10 of 37 references
Commonsense Knowledge Base Completion
TLDR
This work develops neural network models for scoring tuples on arbitrary phrases and evaluates them by their ability to distinguish true held-out tuples from false ones and finds strong performance from a bilinear model using a simple additive architecture to model phrases.
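A minimal sketch of that scorer, under illustrative vocabulary and dimension choices: each phrase is the average of its word vectors (the "additive" architecture), and a tuple (head, relation, tail) is scored as v_head^T W_rel v_tail.

# Sketch of a bilinear tuple scorer over additively composed phrases.
# Vocabulary, dimensions, and token ids below are illustrative.
import torch
import torch.nn as nn

class BilinearTupleScorer(nn.Module):
    def __init__(self, vocab_size, n_relations, dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.rel_mats = nn.Parameter(torch.randn(n_relations, dim, dim) * 0.01)

    def phrase(self, token_ids):
        # Additive phrase model: average the word embeddings.
        return self.word_emb(token_ids).mean(dim=0)

    def forward(self, head_ids, rel_id, tail_ids):
        h, t = self.phrase(head_ids), self.phrase(tail_ids)
        score = h @ self.rel_mats[rel_id] @ t   # bilinear score
        return torch.sigmoid(score)             # probability the tuple holds

scorer = BilinearTupleScorer(vocab_size=1000, n_relations=5)
head = torch.tensor([11, 42])        # e.g. token ids for "go jogging"
tail = torch.tensor([7, 301, 58])    # e.g. token ids for "get in shape"
print(scorer(head, rel_id=0, tail_ids=tail))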
Commonsense Knowledge Base Completion and Generation
TLDR
Experimental results show that the joint learning method improves completion accuracy and that the generation model creates reasonable knowledge, which can also be used to augment data and further improve completion accuracy.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
TLDR
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Knowledge vault: a web-scale approach to probabilistic knowledge fusion
TLDR
The Knowledge Vault is a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories, and computes calibrated probabilities of fact correctness.
ConceptNet 5.5: An Open Multilingual Graph of General Knowledge
TLDR
A new version of the linked open data resource ConceptNet is presented that is particularly well suited to be used with modern NLP techniques such as word embeddings, with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TLDR
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
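The "one additional output layer" recipe, as a minimal sketch with the Hugging Face transformers API (the label and example sentence are illustrative):

# Sketch: BERT fine-tuning for classification with one added output head.
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# Pretrained bidirectional encoder plus a new, randomly initialized head.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

batch = tokenizer(["dogs can swim"], return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1])).loss
loss.backward()   # fine-tunes the head and all encoder layers jointly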
Yago: a core of semantic knowledge
TLDR
YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts, which include the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize).
Scalable knowledge harvesting with high precision and high recall
TLDR
A new notion of ngram-itemsets for richer patterns is proposed, and MaxSat-based constraint reasoning is applied to both the quality of patterns and the validity of fact candidates, yielding a scalable system for high-quality knowledge harvesting.
Zero-Shot Relation Extraction via Reading Comprehension
TLDR
It is shown that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels.
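A minimal sketch of the reduction: phrase each relation slot as a natural-language question and let an extractive reading-comprehension model fill the slot; an unseen relation then only needs a new question template. The model name and templates are illustrative assumptions.

# Sketch: relation extraction as reading comprehension.
# Model and question templates are illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

templates = {
    "educated_at": "Where did {} study?",
    "occupation": "What is {}'s job?",
}

context = "Marie Curie studied at the University of Paris and was a physicist."
for relation, template in templates.items():
    ans = qa(question=template.format("Marie Curie"), context=context)
    print(relation, "->", ans["answer"], f"(score={ans['score']:.2f})")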
Web-scale knowledge-base construction via statistical inference and learning
TLDR
This dissertation performs a systematic study on distant supervision to evaluate the impact of input sizes on the quality of KBC, and proposes two novel approaches that scale up Markov logic by orders of magnitude.