Commonsense Knowledge Base Completion with Structural and Semantic Context

@inproceedings{Malaviya2020CommonsenseKB,
  title={Commonsense Knowledge Base Completion with Structural and Semantic Context},
  author={Chaitanya Malaviya and Chandra Bhagavatula and Antoine Bosselut and Yejin Choi},
  booktitle={AAAI},
  year={2020}
}
Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes compared to conventional KBs (18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures, a major challenge for existing KB…
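The models described here combine each node's graph-structural context with the semantic context of its free-form text. As a rough illustration of that fusion idea only (not the paper's exact architecture, which pairs a graph convolutional encoder over the KG with BERT embeddings of node text and a convolutional decoder), a DistMult-style scorer over fused node features might look like this; all class and argument names are hypothetical:

    import torch
    import torch.nn as nn

    class FusedKBCompletionScorer(nn.Module):
        """Sketch: score (head, relation, tail) triples by fusing
        structural and semantic node features. Not the authors'
        implementation; decoder simplified to DistMult."""

        def __init__(self, num_relations, struct_dim, text_dim, dim=200):
            super().__init__()
            self.fuse = nn.Linear(struct_dim + text_dim, dim)  # fuse the two views
            self.rel = nn.Embedding(num_relations, dim)        # relation embeddings

        def node_repr(self, struct_feats, text_feats):
            # Concatenate graph-structure features (e.g., from a GCN over the KG)
            # with semantic features (e.g., BERT embeddings of the node's text).
            return torch.tanh(self.fuse(torch.cat([struct_feats, text_feats], dim=-1)))

        def forward(self, head_struct, head_text, rel_ids, tail_struct, tail_text):
            h = self.node_repr(head_struct, head_text)
            t = self.node_repr(tail_struct, tail_text)
            r = self.rel(rel_ids)
            return (h * r * t).sum(dim=-1)  # bilinear (DistMult-style) score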
Citations

Text-Graph Enhanced Knowledge Graph Representation Learning
  • Linmei Hu, Mengmei Zhang, +4 authors Zhiyuan Liu
  • Computer Science, Medicine
  • Frontiers in Artificial Intelligence
  • 2021
Knowledge Graphs (KGs) such as Freebase and YAGO have been widely adopted in a variety of NLP tasks. Representation learning of KGs aims to map entities and relationships into a…
On the Role of Conceptualization in Commonsense Knowledge Graph Construction
TLDR
This work introduces conceptualization to CKG construction methods, i.e., viewing entities mentioned in text as instances of specific concepts or vice versa, and builds synthetic triples by conceptualization.
Learning Contextualized Knowledge Structures for Commonsense Reasoning
Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained…
LM4KG: Improving Common Sense Knowledge Graphs with Language Models
TLDR
This paper proposes to transform KG relations to natural language sentences, allowing us to utilize the information contained in large LMs to rate these sentences through a new perplexity-based measure, Refined Edge WEIGHTing (REWEIGHT).
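To make the perplexity signal concrete, here is a minimal sketch that verbalizes a ConceptNet-style edge and scores it with GPT-2 via Hugging Face transformers. The templates and the absence of REWEIGHT's normalization are simplifications, not LM4KG's actual procedure:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def triple_perplexity(head, relation, tail):
        """Verbalize a KG edge and score it by LM perplexity
        (lower = more plausible). Templates are illustrative guesses."""
        templates = {"UsedFor": "{} is used for {}.", "IsA": "{} is a {}."}
        sentence = templates[relation].format(head, tail)
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token negative log-likelihood
        return torch.exp(loss).item()

    print(triple_perplexity("fork", "UsedFor", "eating"))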
Analyzing Commonsense Emergence in Few-shot Knowledge Models
Recently, commonsense knowledge models, i.e., pretrained language models (LMs) finetuned on knowledge graph (KG) tuples, showed that considerable amounts of commonsense knowledge can be encoded in the…
Relational world knowledge representation in contextual language models: A review
TLDR
It is concluded that LMs and KBs are complementary representation tools, as KBs provide a high standard of factual precision which can in turn be flexibly and expressively modeled by LMs; suggestions for future research in this direction are provided.
DualTKB: A Dual Learning Bridge between Text and Knowledge Base
TLDR
This work investigates the impact of weak supervision by creating a weakly supervised dataset and shows that even a slight amount of supervision can significantly improve the model performance and enable better-quality transfers.
Understanding Few-Shot Commonsense Knowledge Models
TLDR
This work investigates training commonsense knowledge models in a few-shot setting with limited tuples per commonsense relation in the graph and finds that human quality ratings for knowledge produced from a few-shot trained system can achieve performance within 6% of knowledge produced from fully supervised systems.
Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge
TLDR
It is shown that deep commonsense knowledge occupies a significant part of commonsense knowledge while conventional methods fail to capture it effectively, and a novel method is proposed to mine the deep commonsense knowledge distributed in sentences, alleviating the reliance of conventional methods on the triple representation form of knowledge.
Zero-shot Visual Question Answering using Knowledge Graph
TLDR
A zero-shot VQA algorithm using knowledge graphs and a mask-based learning mechanism for better incorporating external knowledge is proposed, achieving state-of-the-art performance.

References

Showing 1-10 of 32 references
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
TLDR
This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
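The underlying recipe is to serialize each KG tuple as text and finetune an autoregressive LM to generate the tail given the head and relation. A schematic training step under that framing; the "[GEN]" separator and the use of GPT-2 are assumptions, and COMET additionally masks the loss on the head/relation prefix, which is omitted here for brevity:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    optim = torch.optim.AdamW(model.parameters(), lr=1e-5)

    def train_step(head, relation, tail):
        # Serialize the KG tuple as a single training sequence.
        text = f"{head} {relation} [GEN] {tail}{tokenizer.eos_token}"
        ids = tokenizer(text, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss  # causal LM loss over the sequence
        loss.backward()
        optim.step()
        optim.zero_grad()
        return loss.item()

    train_step("PersonX goes to the store", "xIntent", "to buy groceries")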
Modeling Relational Data with Graph Convolutional Networks
TLDR
It is shown that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
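An R-GCN layer applies a separate transformation per relation type when aggregating neighbor messages, plus a self-loop transform. A minimal dense sketch of one layer; real implementations use sparse message passing and basis-decomposed weights:

    import torch
    import torch.nn as nn

    class RGCNLayer(nn.Module):
        """Minimal relational GCN layer: one weight matrix per relation,
        neighbor messages aggregated per relation plus a self-loop."""

        def __init__(self, num_relations, in_dim, out_dim):
            super().__init__()
            self.w_rel = nn.Parameter(torch.randn(num_relations, in_dim, out_dim) * 0.01)
            self.w_self = nn.Linear(in_dim, out_dim)

        def forward(self, h, adj):
            # h: (num_nodes, in_dim)
            # adj: (num_relations, num_nodes, num_nodes), row-normalized per relation.
            msgs = torch.einsum("rij,jd,rdo->io", adj, h, self.w_rel)
            return torch.relu(msgs + self.w_self(h))

Triples would then be scored over the layer's output states with a factorization decoder such as DistMult, as in the bilinear example further down this list.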
Convolutional 2D Knowledge Graph Embeddings
TLDR
ConvE, a multi-layer convolutional network model for link prediction, is introduced and it is found that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.
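ConvE reshapes the head-entity and relation embeddings into a 2D grid, convolves over it, and scores the result against all entity embeddings. A condensed sketch with the batch norm, dropout, padding, and per-entity biases of the full model omitted:

    import torch
    import torch.nn as nn

    class ConvE(nn.Module):
        def __init__(self, num_entities, num_relations, dim=200, h=10, w=20):
            super().__init__()
            assert h * w == dim
            self.h, self.w = h, w
            self.ent = nn.Embedding(num_entities, dim)
            self.rel = nn.Embedding(num_relations, dim)
            self.conv = nn.Conv2d(1, 32, kernel_size=3)
            # Stacked input is (2*h, w); 3x3 conv without padding gives (2*h-2, w-2).
            self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)

        def forward(self, head_ids, rel_ids):
            e = self.ent(head_ids).view(-1, 1, self.h, self.w)
            r = self.rel(rel_ids).view(-1, 1, self.h, self.w)
            x = torch.cat([e, r], dim=2)            # stack into a 2D "image"
            x = torch.relu(self.conv(x)).flatten(1)
            x = torch.relu(self.fc(x))
            return x @ self.ent.weight.t()          # one score per candidate tail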
Embedding Entities and Relations for Learning and Inference in Knowledge Bases
TLDR
It is found that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication.
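The bilinear objective here is DistMult, which scores a triple by a three-way elementwise product; since each relation vector acts as a diagonal matrix, composing two relations reduces to elementwise (diagonal matrix) multiplication. A small illustration:

    import torch

    def distmult_score(h, r, t):
        """DistMult: score(h, r, t) = sum_i h_i * r_i * t_i.
        Each relation is a diagonal matrix diag(r), so composing
        r1 then r2 is diag(r1) @ diag(r2) = diag(r1 * r2)."""
        return (h * r * t).sum(dim=-1)

    h, r1, r2, t = (torch.randn(200) for _ in range(4))
    print(distmult_score(h, r1 * r2, t))  # score of the composed two-step relation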
Traversing Knowledge Graphs in Vector Space
TLDR
It is demonstrated that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
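The probe poses relational facts as cloze statements and asks the masked LM to fill in the object. A quick reproduction of that setup with the Hugging Face fill-mask pipeline; the query is an illustrative example in the spirit of the LAMA probe:

    from transformers import pipeline

    # Query BERT for a relational fact via a cloze statement.
    fill = pipeline("fill-mask", model="bert-base-uncased")
    for pred in fill("Dante was born in [MASK].", top_k=3):
        print(pred["token_str"], round(pred["score"], 3))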
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
TLDR
A method for knowledge base completion that encodes a knowledge base comprising entities and relations between the entities into embeddings for the entities and embeddings for the relations, based on a Graph Convolutional Network.
Commonsense Knowledge Base Completion
TLDR
This work develops neural network models for scoring tuples on arbitrary phrases and evaluates them by their ability to distinguish true held-out tuples from false ones and finds strong performance from a bilinear model using a simple additive architecture to model phrases.
Commonsense for Generative Multi-Hop Question Answering Tasks
TLDR
This work focuses on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer.
Commonsense Knowledge Mining from Pretrained Models
TLDR
This work develops a method for generating commonsense knowledge using a large, pre-trained bidirectional language model that can be used to rank a triple’s validity by the estimated pointwise mutual information between the two entities.
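The ranking signal is an estimate of pointwise mutual information: how much more probable the tail becomes once the head is visible in a verbalized triple. A rough masked-LM approximation, assuming a single-token tail and a hand-written template; the actual method handles multi-token spans:

    import torch
    from transformers import BertForMaskedLM, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

    def masked_logprob(sentence, target):
        """Log-probability of single-token `target` at the LAST [MASK] slot."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        pos = (ids == tokenizer.mask_token_id).nonzero()[-1, 1]
        with torch.no_grad():
            logits = model(ids).logits[0, pos]
        target_id = tokenizer.convert_tokens_to_ids(target)
        return torch.log_softmax(logits, dim=-1)[target_id].item()

    def pmi(head, tail, template="{} is used for {}."):
        # log p(tail | head, relation): mask only the tail slot.
        with_head = masked_logprob(template.format(head, "[MASK]"), tail)
        # log p(tail | relation): mask the head slot as well.
        without_head = masked_logprob(template.format("[MASK]", "[MASK]"), tail)
        return with_head - without_head

    print(round(pmi("fork", "eating"), 3))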