Corpus ID: 218487841

A Joint Framework for Inductive Representation Learning and Explainable Reasoning in Knowledge Graphs

@article{Bhowmik2020AJF,
  title={A Joint Framework for Inductive Representation Learning and Explainable Reasoning in Knowledge Graphs},
  author={Rajarshi Bhowmik and Gerard de Melo},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.00637}
}
Despite their large-scale coverage, existing cross-domain knowledge graphs invariably suffer from inherent incompleteness and sparsity, necessitating link prediction, i.e., inferring a target entity given a source entity and a query relation. Recent approaches can broadly be classified into two categories: embedding-based approaches and path-based approaches. In contrast to embedding-based approaches, which operate in an uninterpretable latent semantic vector space of entities and…
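The embedding-based family mentioned above can be illustrated with a minimal DistMult-style trilinear scorer. This is only a sketch: the entity names, dimensions, and random embeddings are illustrative stand-ins for learned parameters, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of"]

# Random vectors stand in for embeddings learned from training triples.
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def distmult_score(head, rel, tail):
    """DistMult: score(h, r, t) = sum_i e_h[i] * w_r[i] * e_t[i]."""
    return float(np.sum(E[head] * R[rel] * E[tail]))

def predict_tail(head, rel):
    """Link prediction: rank all other entities as candidate targets."""
    scores = {t: distmult_score(head, rel, t) for t in entities if t != head}
    return max(scores, key=scores.get)

print(predict_tail("paris", "capital_of"))
```

Note that DistMult's bilinear-diagonal form makes the score symmetric in head and tail, one reason encoder models such as R-GCN (see the references below) are layered on top of it.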

References

Showing 1–10 of 32 references
Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text
This paper proposes the first exact dynamic programming algorithm that enables efficient incorporation of all relation paths of bounded length, while modeling both relation types and intermediate nodes in the compositional path representations.

Modeling Relational Data with Graph Convolutional Networks
It is shown that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.

Compositional Vector Space Models for Knowledge Base Completion
This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recurrent neural network (RNN) that takes as inputs vector embeddings of the binary relations in the path.

Embedding Entities and Relations for Learning and Inference in Knowledge Bases
It is found that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication.

Learning Entity and Relation Embeddings for Knowledge Graph Completion
TransR is proposed to build entity and relation embeddings in separate entity and relation spaces, performing translations between projected entities; the models are evaluated on three tasks: link prediction, triple classification, and relational fact extraction.

Traversing Knowledge Graphs in Vector Space
It is demonstrated that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.

Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach
This paper addresses the out-of-knowledge-base (OOKB) entity problem in KBC by using graph neural networks (Graph-NNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time.

Improving Learning and Inference in a Large Knowledge-Base using Latent Syntactic Cues
For the first time, it is demonstrated that adding edges labeled with latent features mined from a large dependency-parsed corpus of 500 million Web documents can significantly outperform previous PRA-based approaches on the KB inference task.

An overview of embedding models of entities and relationships for knowledge base completion
This paper serves as a comprehensive overview of embedding models of entities and relationships for knowledge base completion, summarizing up-to-date experimental results on standard benchmark datasets.

Inductive Representation Learning on Large Graphs
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
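Several of the path-based references above compose relation embeddings along a multi-hop path. A toy sketch of the idea behind "Traversing Knowledge Graphs in Vector Space" follows: a path query is answered by translating the source embedding by each relation vector in turn, then taking the nearest entity. The embeddings here are hand-set so the composition works out; in the cited work they are learned.

```python
import numpy as np

# Hand-set TransE-style embeddings: relation vectors act as translations.
E = {
    "alice": np.array([0.0, 0.0, 0.0]),
    "bob":   np.array([1.0, 0.0, 0.0]),
    "carol": np.array([1.0, 1.0, 0.0]),
}
R = {
    "parent_of":  np.array([1.0, 0.0, 0.0]),
    "sibling_of": np.array([0.0, 1.0, 0.0]),
}

def traverse(source, path):
    """Compose a relation path: e_source + w_r1 + ... + w_rk."""
    v = E[source].copy()
    for rel in path:
        v = v + R[rel]
    return v

def nearest_entity(v):
    """Answer the path query by nearest neighbor in entity space."""
    return min(E, key=lambda e: np.linalg.norm(E[e] - v))

# "Who is a sibling of Alice's parent?"
print(nearest_entity(traverse("alice", ["parent_of", "sibling_of"])))  # carol
```

Compositional training regularizes the embeddings so that such multi-step traversals remain accurate, which is the structural-regularization effect the reference reports.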