Corpus ID: 240070688

SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive Knowledge Graphs

@article{Ren2021SMOREKG,
  title={SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive Knowledge Graphs},
  author={Hongyu Ren and Hanjun Dai and Bo Dai and Xinyun Chen and Denny Zhou and Jure Leskovec and Dale Schuurmans},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14890}
}
Knowledge graphs (KGs) capture knowledge in the form of head–relation–tail triples and are a crucial component in many AI systems. There are two important reasoning tasks on KGs: (1) single-hop knowledge graph completion, which involves predicting individual links in the KG; and (2) multi-hop reasoning, where the goal is to predict which KG entities satisfy a given logical query. Embedding-based methods solve both tasks by first computing an embedding for each entity and relation, then using…
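As a rough illustration of the embedding-based recipe the abstract describes, here is a minimal TransE-style sketch in Python; the dimensions, random embeddings, and function names are illustrative assumptions, not SMORE's actual implementation.

import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 32, 1000, 50
entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(n_relations, dim))

def score(h, r, t):
    # Plausibility of triple (h, r, t): how well h + r approximates t.
    return -float(np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t]))

def complete(h, r, k=5):
    # Single-hop completion: rank every entity as a candidate tail.
    dists = np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb, axis=1)
    return np.argsort(dists)[:k]

print(complete(0, 0))

Ranking all candidate tails this way is exactly the operation that becomes expensive on the massive KGs SMORE targets.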

References

SHOWING 1-10 OF 45 REFERENCES
Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings
TLDR
EmbedKGQA is particularly effective in performing multi-hop KGQA over sparse KGs, and relaxes the requirement of answer selection from a pre-specified neighborhood, a sub-optimal constraint enforced by previous multi-hop KGQA methods.
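A sketch of what relaxing the neighborhood constraint can look like in practice: EmbedKGQA scores a question embedding against every entity with a ComplEx-style trilinear product rather than restricting candidates to a pre-specified neighborhood. The array shapes and names below are illustrative assumptions.

import numpy as np

def complex_score(head, question, entities):
    # ComplEx-style scoring Re(<h, q, conj(t)>), evaluated against every
    # entity at once rather than a pre-specified KG neighborhood.
    return np.real(np.sum(head * question * np.conj(entities), axis=-1))

rng = np.random.default_rng(0)
entities = rng.normal(size=(100, 16)) + 1j * rng.normal(size=(100, 16))
head = entities[0]
question = rng.normal(size=16) + 1j * rng.normal(size=16)
print(np.argsort(-complex_score(head, question, entities))[:5])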
LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs
TLDR
This work presents LEGO, a Latent Execution-Guided reasOning framework built on two key ideas, execution-guided query synthesis and embedding-based query execution, which adaptively infers the next interpretable reasoning action and grows the query tree.
Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs
TLDR
BetaE is the first method that can handle a complete set of first-order logical operations: conjunction, disjunction, and negation. A key insight of BetaE is to use probabilistic distributions with bounded support, specifically the Beta distribution, and to embed queries and entities as distributions, which in turn allows it to faithfully model uncertainty.
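A minimal sketch of the BetaE-style operators the summary mentions, assuming each query or entity is embedded as per-dimension Beta parameters (alpha, beta); the real model computes the intersection weights with learned attention, while uniform weights are used here for simplicity.

import numpy as np

def beta_negation(alpha, beta):
    # Negation takes the reciprocal of the Beta parameters, swapping
    # high- and low-density regions of the distribution.
    return 1.0 / alpha, 1.0 / beta

def beta_intersection(alphas, betas, weights=None):
    # Conjunction: a weighted product of Beta densities; with weights
    # summing to 1 this reduces to a weighted average of the parameters.
    alphas, betas = np.asarray(alphas), np.asarray(betas)
    if weights is None:
        weights = np.full(len(alphas), 1.0 / len(alphas))
    w = np.asarray(weights)[:, None]
    return (w * alphas).sum(axis=0), (w * betas).sum(axis=0)

# Intersect two 4-dimensional Beta query embeddings.
alphas = np.array([[2.0, 1.0, 3.0, 0.5], [1.5, 2.5, 1.0, 1.0]])
betas = np.array([[1.0, 2.0, 0.5, 3.0], [2.0, 1.0, 1.5, 1.0]])
print(beta_intersection(alphas, betas))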
Probabilistic Entity Representation Model for Chain Reasoning over Knowledge Graphs
TLDR
This paper proposes a Probabilistic Entity Representation Model (PERM) that encodes each entity as a multivariate Gaussian density, with mean and covariance parameters capturing its semantic position and smooth decision boundary, respectively, and defines closed-form logical operations of projection, intersection, and union that can be trained with an end-to-end objective function.
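The Gaussian representation makes conjunction natural, because the product of Gaussian densities is again (up to normalization) Gaussian. Below is a minimal sketch of that standard identity, with names of my own choosing rather than PERM's actual API.

import numpy as np

def intersect_gaussians(mu1, cov1, mu2, cov2):
    # The product of two Gaussian densities is, up to normalization,
    # a Gaussian whose precision is the sum of the input precisions.
    p1, p2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    cov = np.linalg.inv(p1 + p2)
    mu = cov @ (p1 @ mu1 + p2 @ mu2)
    return mu, cov

mu, cov = intersect_gaussians(np.zeros(2), np.eye(2), np.ones(2), 2.0 * np.eye(2))
print(mu, cov)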
Neural-Answering Logical Queries on Knowledge Graphs
TLDR
This paper proposes an embedding-based method (NewLook) that supports four types of logical operations, can answer queries with multiple variable nodes, and goes beyond the linear-transformation assumption, consistently outperforming existing methods.
KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning
TLDR
This paper proposes a textual inference framework for answering commonsense questions, which effectively utilizes external, structured commonsense knowledge graphs to perform explainable inferences.
Fuzzy Logic based Logical Query Answering on Knowledge Graph
TLDR
FuzzQE follows fuzzy logic to define logical operators in a principled and learning-free manner for answering FOL queries over KGs, and achieves significantly better performance in answering FOL queries compared to state-of-the-art methods.
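To make "principled and learning-free" concrete, here is a sketch using one standard fuzzy-logic family, the product t-norm; FuzzQE's exact choice of connectives may differ, and the membership vectors below are made up for the example.

import numpy as np

# Fuzzy connectives over entity membership scores in [0, 1].
def fuzzy_and(a, b):
    return a * b                  # product t-norm: conjunction

def fuzzy_or(a, b):
    return a + b - a * b          # probabilistic sum: disjunction

def fuzzy_not(a):
    return 1.0 - a                # standard negation

# Membership of every entity in "q1 AND NOT q2":
q1 = np.array([0.9, 0.2, 0.7])
q2 = np.array([0.1, 0.8, 0.5])
print(fuzzy_and(q1, fuzzy_not(q2)))  # -> [0.81 0.04 0.35]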
Query2box: Reasoning over Knowledge Graphs in Vector Space using Box Embeddings
Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities…
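The box-embedding idea can be sketched with the distance function described in the Query2box paper: a query is a box given by a center and a nonnegative offset, and an entity is scored by its L1 distance outside the box plus a downweighted distance inside it. Variable names and the example values below are mine.

import numpy as np

def box_distance(entity, center, offset, alpha=0.2):
    # Query box spans [center - offset, center + offset] per dimension.
    q_min, q_max = center - offset, center + offset
    outside = np.maximum(entity - q_max, 0) + np.maximum(q_min - entity, 0)
    inside = center - np.clip(entity, q_min, q_max)
    # L1 distance outside the box, plus a downweighted inside term.
    return np.abs(outside).sum() + alpha * np.abs(inside).sum()

print(box_distance(np.array([0.5, 2.0]), np.zeros(2), np.ones(2)))  # 1.3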
Embedding Logical Queries on Knowledge Graphs
TLDR
This work introduces a framework to efficiently make predictions about conjunctive logical queries (a flexible but tractable subset of first-order logic) on incomplete knowledge graphs, and demonstrates its utility in two application studies on real-world datasets with millions of relations.
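A sketch of the two operators such a framework composes: relation projection as a per-relation transform, and conjunction as a permutation-invariant aggregation of query embeddings. The relation names and matrices are hypothetical, and elementwise min stands in for the trainable aggregator the paper actually learns.

import numpy as np

rng = np.random.default_rng(0)
d = 16
# Hypothetical relation matrices; one learned projection per relation.
R = {"directed": rng.normal(size=(d, d)), "starred_in": rng.normal(size=(d, d))}

def project(q, rel):
    # Relation projection: traverse one edge type in embedding space.
    return R[rel] @ q

def intersect(*queries):
    # Conjunction: permutation-invariant aggregation of query embeddings.
    return np.minimum.reduce(queries)

q = intersect(project(rng.normal(size=d), "directed"),
              project(rng.normal(size=d), "starred_in"))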
Traversing Knowledge Graphs in Vector Space
TLDR
It is demonstrated that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.
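Compositional training here means supervising path queries directly; with TransE as the base model, a path is answered by adding the relation vectors along it. A minimal sketch under that assumption, with illustrative names:

import numpy as np

def path_score(head_vec, relation_vecs, tail_vec):
    # Compose relations along the path by addition (TransE-style) and
    # score the candidate tail by negative distance to the prediction.
    prediction = head_vec + np.sum(relation_vecs, axis=0)
    return -float(np.linalg.norm(prediction - tail_vec))

rng = np.random.default_rng(0)
h, r1, r2, t = (rng.normal(size=8) for _ in range(4))
print(path_score(h, [r1, r2], t))  # answers the path query h -r1-> ? -r2-> t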