Corpus ID: 222208985

RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs

  Authors: Meng Qu, Junkun Chen, Louis-Pascal Xhonneux, Yoshua Bengio, and Jian Tang
This paper studies learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks, which makes them critical to learn. Existing methods either search in a very large rule space (e.g., neural logic programming) or suffer from ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, this paper proposes RNNLogic.
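The kind of logic rule discussed above can be applied to a triple store directly. The minimal sketch below (entities, relations, and the helper `apply_chain_rule` are illustrative assumptions, not from the paper) shows how a chain rule such as grandmother_of(x, z) ← mother_of(x, y) ∧ father_of(y, z) derives new facts from existing triples:

```python
# Minimal sketch of applying a chain logic rule to a knowledge graph.
# Entity and relation names are illustrative, not from the paper.
from collections import defaultdict

triples = [
    ("alice", "mother_of", "bob"),
    ("bob", "father_of", "carol"),
]

# Index: relation -> head entity -> set of tail entities
index = defaultdict(lambda: defaultdict(set))
for h, r, t in triples:
    index[r][h].add(t)

def apply_chain_rule(body, head_relation):
    """Derive head_relation(x, z) whenever r1(x, y) and r2(y, z) both hold."""
    r1, r2 = body
    derived = set()
    for x, ys in index[r1].items():
        for y in ys:
            for z in index[r2].get(y, ()):
                derived.add((x, head_relation, z))
    return derived

# grandmother_of(x, z) <- mother_of(x, y) AND father_of(y, z)
new_facts = apply_chain_rule(["mother_of", "father_of"], "grandmother_of")
print(new_facts)  # {('alice', 'grandmother_of', 'carol')}
```

This is the interpretability argument in miniature: each derived fact comes with the explicit rule and intermediate entity that produced it.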


Combining Rules and Embeddings via Neuro-Symbolic AI for Knowledge Base Completion
It is shown that not all rule-based KBC models are the same, and two distinct approaches are proposed: one learns a mixture of relations, the other a mixture of paths.
Knowledge Graph Reasoning with Relational Directed Graph
A novel relational structure, the relational directed graph (r-digraph), composed of overlapped relational paths, is introduced to capture the structural information of the knowledge graph (KG), and a variant of GNN is used to learn over the r-digraph.
Why Are You My Mother? : Using Family Trees to Probe Knowledge Graph Prediction Explanations
  • 2021
While there is a plethora of methods for link prediction in knowledge graphs, state-of-the-art approaches are often black boxes, obfuscating model reasoning and thereby limiting users' ability to understand the predictions.
Text-Graph Enhanced Knowledge Graph Representation Learning
  • Linmei Hu, Mengmei Zhang, +4 authors Zhiyuan Liu
  • Medicine, Computer Science
  • Frontiers in Artificial Intelligence
  • 2021
This paper proposes to model the whole auxiliary text corpus with a graph and presents an end-to-end text-graph enhanced KG embedding model, named Teger, which significantly outperforms several state-of-the-art methods.
Link Prediction based on Tensor Decomposition for the Knowledge Graph of COVID-19 Antiviral Drug
Due to the large-scale spread of COVID-19, which has had a serious impact on human health and the social economy, designing effective antiviral drugs for COVID-19 is key to saving lives.
LPRules: Rule Induction in Knowledge Graphs Using Linear Programming
  • Sanjeeb Dash, Joao Goncalves
  • Computer Science
  • 2021
Knowledge graph (KG) completion is a well-studied problem in AI. Rule-based methods and embedding-based methods form two of the solution techniques; rule-based methods learn first-order logic rules.
Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction
The Neural Bellman-Ford Network (NBFNet) is proposed: a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm and outperforms existing methods by a large margin in both transductive and inductive settings.
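The path formulation mentioned above can be grounded in its classical special case: ordinary Bellman-Ford, where the "operators" are min and +, iteratively relaxes edges until path scores converge. A minimal sketch on a toy graph (the graph and variable names are illustrative; NBFNet replaces min/+ with learned operators):

```python
# Toy Bellman-Ford: shortest-path distance as a special case of the
# path formulation, using the (min, +) semiring over edge weights.
import math

edges = [("s", "a", 1.0), ("a", "b", 2.0), ("s", "b", 4.0)]
nodes = {"s", "a", "b"}

dist = {v: math.inf for v in nodes}
dist["s"] = 0.0  # boundary condition: the source node is at distance 0

# Relax every edge |V|-1 times; after convergence, dist[v] aggregates
# over all paths from the source to v.
for _ in range(len(nodes) - 1):
    for u, v, w in edges:
        dist[v] = min(dist[v], dist[u] + w)

print(dist["b"])  # 3.0 (via s -> a -> b, beating the direct edge of 4.0)
```

Swapping the (min, +) pair for learned message and aggregation functions is, at this level of abstraction, the generalization the framework exploits.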


Probabilistic Logic Neural Networks for Reasoning
The probabilistic Logic Neural Network (pLogicNet) combines the advantages of both methods and defines the joint distribution of all possible triplets using a Markov logic network with first-order logic, which can be efficiently optimized with the variational EM algorithm.
Differentiable Learning of Logical Rules for Knowledge Base Reasoning
A framework, Neural Logic Programming, is proposed that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model and outperforms prior work on multiple knowledge base benchmark datasets, including Freebase and WikiMovies.
Structure Learning via Parameter Learning
This paper presents a novel structure-learning method for a new, scalable probabilistic logic called ProPPR; it builds on the recent success of meta-interpretive learning methods in Inductive Logic Programming and extends them to a framework that enables robust and efficient structure learning of logic programs on graphs.
Learning the structure of Markov logic networks
An algorithm for learning the structure of MLNs from relational databases is developed, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks.
Efficient Probabilistic Logic Reasoning with Graph Neural Networks
This paper proposes a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model, and leads to effective and efficient probabilistic logic reasoning in MLN.
Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning
A new algorithm, MINERVA, is proposed, which addresses the much more difficult and practical task of answering questions where the relation is known but only one entity is given, and significantly outperforms prior methods.
Learning Markov Logic Networks via Functional Gradient Boosting
This work proposes to take a different approach, namely to learn both the weights and the structure of the MLN simultaneously, based on functional gradient boosting, where the problem of learning MLNs is turned into a series of relational functional approximation problems.
Multi-Hop Knowledge Graph Reasoning with Reward Shaping
This work reduces the impact of false negative supervision by adopting a pretrained one-hop embedding model to estimate the reward of unobserved facts and counter the sensitivity to spurious paths of on-policy RL by forcing the agent to explore a diverse set of paths using randomly generated edge masks. Expand
DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning
A novel reinforcement learning framework for learning multi-hop relational paths is described, which uses a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector-space by sampling the most promising relation to extend its path. Expand
Learn to Explain Efficiently via Neural Logic Inductive Learning
This work proposes Neural Logic Inductive Learning (NLIL), an efficient differentiable ILP framework that learns first-order logic rules that can explain the patterns in the data.