DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning

@inproceedings{deeppath2017,
  title={DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning},
  author={Wenhan Xiong and Thi-Lan-Giao Hoang and William Yang Wang},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2017}
}
We study the problem of learning to reason in large-scale knowledge graphs (KGs). In contrast to prior work, our approach includes a reward function that takes accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method outperforms a path-ranking based algorithm and knowledge graph embedding methods on the Freebase and Never-Ending Language Learning (NELL) datasets.
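The three reward terms mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the weights, the bag-of-relations path encoding, and the function names are assumptions made for the sketch.

```python
import numpy as np

def path_vec(path, vocab):
    # hypothetical bag-of-relations encoding of a path, used only for the
    # diversity term below
    v = np.zeros(len(vocab))
    for rel in path:
        v[vocab.index(rel)] += 1.0
    return v

def path_reward(success, path, found_paths, vocab, w=(1.0, 0.8, 0.2)):
    """Combine accuracy, efficiency, and diversity terms (weights w are illustrative)."""
    r_global = 1.0 if success else -1.0   # accuracy: did the path reach the target entity?
    r_eff = 1.0 / len(path)               # efficiency: prefer shorter paths
    r_div = 0.0
    if found_paths:                       # diversity: penalize similarity to known paths
        v = path_vec(path, vocab)
        sims = [np.dot(v, path_vec(p, vocab)) /
                (np.linalg.norm(v) * np.linalg.norm(path_vec(p, vocab)))
                for p in found_paths]
        r_div = -float(np.mean(sims))
    return w[0] * r_global + w[1] * r_eff + w[2] * r_div
```

With these illustrative weights, a successful two-hop path with no previously found paths scores 1.0 + 0.8/2 = 1.4, while a successful one-hop path scores higher (1.8), and re-finding an identical path is discounted by the diversity term.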


Incorporating Graph Attention Mechanism into Knowledge Graph Reasoning Based on Deep Reinforcement Learning

This paper presents a deep reinforcement learning based model named AttnPath, which incorporates an LSTM and a graph attention mechanism as memory components, and defines two metrics, Mean Selection Rate (MSR) and Mean Replacement Rate (MRR), to quantitatively measure how difficult it is to learn the query relations.

Path-Based Knowledge Graph Completion Combining Reinforcement Learning with Soft Rules

A model is proposed that combines the reinforcement learning (RL) framework with soft rules to learn reasoning paths, adjusting the partially observed Markov decision process to extract soft rules with different confidence levels from the datasets.

Deep Reinforcement Learning for Heterogeneous Relational Reasoning in Knowledge Graphs

Heterogeneous Relational reasoning with Reinforcement Learning is developed: a type-enhanced RL agent that utilizes local heterogeneous neighborhood information for efficient reasoning over knowledge graphs and outperforms state-of-the-art RL methods.

A Multi-Hop Link Prediction Approach Based on Reinforcement Learning in Knowledge Graphs

This work proposes a novel RL framework for learning more accurate link prediction models in KGs; it frames the link prediction problem in a KG as an inference problem in a probabilistic graphical model (PGM) and uses maximum-entropy RL to maximize the expected return.

Multi-Hop Knowledge Graph Reasoning with Reward Shaping

This work reduces the impact of false negative supervision by adopting a pretrained one-hop embedding model to estimate the reward of unobserved facts and counter the sensitivity to spurious paths of on-policy RL by forcing the agent to explore a diverse set of paths using randomly generated edge masks.

Rule-Aware Reinforcement Learning for Knowledge Graph Reasoning

A simple but effective RL-based method called RARL (Rule-Aware RL) is proposed, which injects high-quality symbolic rules into the model's reasoning process and employs partially random beam search, which not only increases the probability of paths receiving rewards but also alleviates the impact of spurious paths.

Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning

This method leverages high-quality rules generated by symbolic methods to provide reward supervision for walk-based agents, and can separate the walk-based agent into two sub-agents, allowing for additional efficiency.



Traversing Knowledge Graphs in Vector Space

It is demonstrated that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.

Compositional Vector Space Models for Knowledge Base Completion

This paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically, composing the implications of a path using a recurrent neural network (RNN) that takes as inputs vector embeddings of the binary relation in the path.
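The path-composition idea described above can be sketched with a plain recurrent update over relation embeddings. The tanh recurrence, the dimensions, and the random initialization are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def compose_path(relation_vecs, W, b):
    """Fold a sequence of relation embeddings into one path vector with a simple RNN."""
    h = np.zeros(b.shape[0])                            # hidden state starts at zero
    for r in relation_vecs:
        h = np.tanh(W @ np.concatenate([h, r]) + b)     # consume one relation per step
    return h

rng = np.random.default_rng(0)
d_h, d_r = 4, 3                                         # hidden / relation embedding sizes
W = rng.normal(size=(d_h, d_h + d_r))
b = np.zeros(d_h)
path = [rng.normal(size=d_r) for _ in range(2)]         # a two-hop relation path
v = compose_path(path, W, b)                            # one vector for the whole path
```

The composed vector v can then be compared against the embedding of a single target relation to score whether the multi-hop path implies it.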

Random Walk Inference and Learning in A Large Scale Knowledge Base

It is shown that a soft inference procedure based on a combination of constrained, weighted random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base.
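The weighted random-walk scoring in this PRA-style approach can be sketched as follows; the toy graph, the adjacency encoding, and the fixed weights are illustrative assumptions (in practice the weights would be learned).

```python
def path_prob(adj, start, target, path):
    """Probability that a uniform random walk from `start`, constrained to follow
    the relation sequence `path`, ends at `target`."""
    dist = {start: 1.0}
    for rel in path:
        nxt = {}
        for node, p in dist.items():
            nbrs = adj.get((node, rel), [])
            for n in nbrs:
                nxt[n] = nxt.get(n, 0.0) + p / len(nbrs)  # split mass uniformly
        dist = nxt
    return dist.get(target, 0.0)

def pra_score(adj, start, target, paths, theta):
    # weighted combination of path-probability features; the weights `theta`
    # would be learned, e.g., by logistic regression
    return sum(w * path_prob(adj, start, target, p) for p, w in zip(paths, theta))

# toy KB: A --bornIn--> CityX --locatedIn--> {CountryY, CountryZ}
adj = {("A", "bornIn"): ["CityX"],
       ("CityX", "locatedIn"): ["CountryY", "CountryZ"]}
p = path_prob(adj, "A", "CountryY", ["bornIn", "locatedIn"])
```

Here the walk reaches CountryY with probability 0.5, since the final step splits its mass between two neighbors.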

Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks

This paper learns to jointly reason about relations, entities, and entity-types, and uses neural attention modeling to incorporate multiple paths in a single RNN that represents logical composition across all relations.

Improving Learning and Inference in a Large Knowledge-Base using Latent Syntactic Cues

For the first time, it is demonstrated that adding edges labeled with latent features mined from a large dependency-parsed corpus of 500 million Web documents can significantly outperform previous PRA-based approaches on the KB inference task.

Knowledge Graph Embedding via Dynamic Mapping Matrix

A more fine-grained model named TransD is proposed, an improvement of TransR/CTransR, which considers the diversity of both relations and entities, allowing it to be applied to large-scale graphs.

Translating Embeddings for Modeling Multi-relational Data

TransE is proposed, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of entities; it proves to be powerful, as extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
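TransE's scoring idea — a triple (h, r, t) is plausible when t ≈ h + r in embedding space — can be sketched as below; the L1 default and the toy 2-d vectors are illustrative.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """Dissimilarity of a triple under TransE; lower means more plausible."""
    return float(np.linalg.norm(h + r - t, ord=norm))

# toy embeddings: a triple where the translation lands exactly on the tail scores 0
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
good = transe_score(h, r, t)    # translation h + r hits t exactly
bad = transe_score(h, r, -t)    # translation misses this tail, so the score grows
```

Training then pushes observed triples toward low scores and corrupted (negative-sampled) triples toward high ones via a margin-based loss.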

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision

A Neural Symbolic Machine is introduced, which contains a neural “programmer” that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and a symbolic “computer”, i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space.

Toward an Architecture for Never-Ending Language Learning

This work proposes an approach and a set of design principles for an intelligent computer agent that runs forever and describes a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs.

Knowledge Graph Embedding by Translating on Hyperplanes

This paper proposes TransH, which models a relation as a hyperplane together with a translation operation on it, and can well preserve the mapping properties of relations with almost the same model complexity as TransE.
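The hyperplane idea can be sketched as: project head and tail onto the relation-specific hyperplane (unit normal w_r), then apply a TransE-style translation d_r within that hyperplane. The variable names follow common TransH notation; the toy numbers are illustrative.

```python
import numpy as np

def transh_score(h, t, w_r, d_r, norm=2):
    """TransH score: translate between hyperplane projections of head and tail."""
    w = w_r / np.linalg.norm(w_r)    # normalize the hyperplane normal
    h_p = h - np.dot(w, h) * w       # projection of head onto the hyperplane
    t_p = t - np.dot(w, t) * w       # projection of tail onto the hyperplane
    return float(np.linalg.norm(h_p + d_r - t_p, ord=norm))

# components of h and t along w_r are discarded by the projection, so entities
# that differ only off-hyperplane can still satisfy the same relation
h = np.array([1.0, 2.0])
t = np.array([2.0, 5.0])
w_r = np.array([0.0, 1.0])   # hyperplane normal for this relation
d_r = np.array([1.0, 0.0])   # translation vector within the hyperplane
score = transh_score(h, t, w_r, d_r)   # projections differ exactly by d_r
```

Because only the in-hyperplane components matter, TransH can handle one-to-many and many-to-one relations that collapse under plain TransE.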