# Differentiable Learning of Logical Rules for Knowledge Base Completion

```bibtex
@article{Yang2017DifferentiableLO,
  title   = {Differentiable Learning of Logical Rules for Knowledge Base Completion},
  author  = {Fan Yang and Zhilin Yang and William W. Cohen},
  journal = {ArXiv},
  year    = {2017},
  volume  = {abs/1702.08367}
}
```

Learned models composed of probabilistic logical rules are useful for many tasks, such as knowledge base completion. Unfortunately, this learning problem is difficult, since determining the structure of the theory normally requires solving a discrete optimization problem. In this paper, we propose an alternative approach: a completely differentiable model for learning sets of first-order rules. The approach is inspired by a recently-developed differentiable logic, i.e., a subset of first-order…
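The core idea the abstract alludes to — applying a chain of relations as a product of sparse adjacency matrices, with soft rule weights that can be tuned by gradient descent — can be sketched as follows. This is a minimal illustration of the matrix view of rule application; the entities, relation names, and facts are an invented toy KB, not from the paper:

```python
import numpy as np

# Toy KB: 4 entities, 2 relations, each relation stored as an adjacency matrix.
n = 4
M = {
    "parent_of": np.zeros((n, n)),
    "sibling_of": np.zeros((n, n)),
}
M["parent_of"][0, 1] = 1.0   # e0 is a parent of e1
M["parent_of"][0, 2] = 1.0   # e0 is a parent of e2
M["sibling_of"][1, 2] = 1.0  # e1 is a sibling of e2

def rule_score(head, tail, rule, weights):
    """Soft score of one weighted rule: v_head^T (w1*M_R1) ... (wk*M_Rk) v_tail."""
    v = np.zeros(n)
    v[head] = 1.0
    for rel, w in zip(rule, weights):
        v = w * (v @ M[rel])          # one differentiable "inference step"
    return v[tail]

# Rule: parent_of(X, Z) ∧ sibling_of(Z, Y) ⇒ related(X, Y)
score = rule_score(0, 2, ["parent_of", "sibling_of"], [1.0, 1.0])  # → 1.0
```

Because every step is a (weighted) matrix-vector product, the rule weights can be learned end-to-end with any autodiff framework rather than by discrete search over rule structures.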

## 26 Citations

### End-to-end Differentiable Proving

- Computer Science
- NIPS
- 2017

It is demonstrated that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.

### Can Graph Neural Networks Help Logic Reasoning?

- Computer Science
- ArXiv
- 2019

It is revealed from the analysis that the representation power of GNN alone is not enough for such a task, and a more expressive variant is proposed, called ExpressGNN, which can perform effective probabilistic logic inference while being able to scale to a large number of entities.

### Combining Representation Learning with Logic for Language Processing

- Computer Science
- ArXiv
- 2017

This thesis investigates different combinations of representation learning methods with logic for reducing the need for annotated training data, and for improving generalization.

### Efficient Probabilistic Logic Reasoning with Graph Neural Networks

- Computer Science
- ICLR
- 2020

This paper proposes a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model, and leads to effective and efficient probabilistic logic reasoning in MLN.

### Closed-Form Solutions in Learning Probabilistic Logic Programs by Exact Score Maximization

- Computer Science
- SUM
- 2017

An algorithm is presented that learns acyclic propositional probabilistic logic programs from complete data by adapting techniques from Bayesian network learning; it focuses on score-based learning and on exact maximum-likelihood computations.

### Graph Neural Networks

- Computer Science
- Deep Learning on Graphs
- 2021


### Bootstrapping Knowledge Graphs From Images and Text

- Computer Science
- Front. Neurorobot.
- 2019

A hybrid KG builder is proposed that combines a neural relation extractor resolving primary relations from input and a differentiable inductive logic programming (ILP) model that iteratively completes the KG.

### An Empirical Analysis of Multiple-Turn Reasoning Strategies in Reading Comprehension Tasks

- Computer Science
- IJCNLP
- 2017

It is found that multiple-turn reasoning outperforms single-turn reasoning for all question and answer types; further, it is observed that enabling a flexible number of turns generally improves upon a fixed multiple-turn strategy.

### TensorLog: Deep Learning Meets Probabilistic Databases

- Computer Science
- 2017

An implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as TensorFlow or Theano, enabling high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic.

### Reasoning for Local Graph Over Knowledge Graph With a Multi-Policy Agent

- Computer Science
- IEEE Access
- 2021

Experiments revealed that local graph reasoning with a search window earned greater rewards than path reasoning, that the proposed DBL-LSTM policy network improved all HITS@N (N = 1, 3, 5, 10) compared to prior works, and that the multi-policy agent achieved higher hit rates than a single-policy agent.

## References

Showing 1–10 of 36 references.

### TensorLog: A Differentiable Deductive Database

- Computer Science
- ArXiv
- 2016

A probabilistic deductive database, called TensorLog, in which reasoning uses a differentiable process, and it is shown that these functions can be composed recursively to perform inference in non-trivial logical theories containing multiple interrelated clauses and predicates.

### Learning the structure of Markov logic networks

- Computer Science
- ICML
- 2005

An algorithm for learning the structure of MLNs from relational databases is developed, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks.

### Efficient inference and learning in a large knowledge base

- Computer Science
- Machine Learning
- 2015

This work presents a first-order probabilistic language called ProPPR, an extension of stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm, and a fast, easily parallelized weight-learning algorithm for ProPPR is developed.
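ProPPR's bias towards short derivations comes from scoring proof-graph nodes with a personalized-PageRank-style random walk with restart: mass repeatedly teleports back to the query node, so long derivations receive exponentially less weight. A minimal sketch, using an invented three-node proof graph (the matrix and values are illustrative, not from the paper):

```python
import numpy as np

def personalized_pagerank(P, seed, alpha=0.15, iters=50):
    """Power iteration for personalized PageRank with restart probability alpha.

    P: row-stochastic transition matrix over proof-graph nodes.
    seed: index of the query (restart) node.
    """
    n = P.shape[0]
    r = np.zeros(n)
    r[seed] = 1.0
    p = r.copy()
    for _ in range(iters):
        p = alpha * r + (1 - alpha) * (p @ P)  # walk one step, or restart at the seed
    return p

# Tiny proof graph: query node 0 -> derivation step 1 -> solution node 2 (absorbing).
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])
scores = personalized_pagerank(P, seed=0)
```

Longer proof chains would insert more intermediate nodes between the seed and the solution, shrinking the solution's score by a factor of (1 − alpha) per extra step — the short-derivation bias in one line.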

### Learning First-Order Logic Embeddings via Matrix Factorization

- Computer Science
- IJCAI
- 2016

This work aims at learning continuous low-dimensional embeddings for first-order logic from scratch, using a structural-gradient-based structure learning approach to generate plausible inference formulas from facts, and building grounded proof graphs from background facts, training examples, and these inference formulas.

### Structure Learning via Parameter Learning

- Computer Science
- CIKM
- 2014

This paper presents a novel structure-learning method for a new, scalable probabilistic logic called ProPPR; it builds on the recent success of meta-interpretive learning methods in Inductive Logic Programming and extends them to a framework that enables robust and efficient structure learning of logic programs on graphs.

### Learning a Natural Language Interface with Neural Programmer

- Computer Science
- ICLR
- 2017

This paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset; it enhances the objective function of Neural Programmer, a neural network with built-in discrete operations, and applies it to WikiTableQuestions, a natural-language question-answering dataset.

### Markov logic networks

- Computer Science
- Machine Learning
- 2006

Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach to combining first-order logic and probabilistic graphical models in a single representation.

### Neural Programmer: Inducing Latent Programs with Gradient Descent

- Computer Science
- ICLR
- 2016

This work proposes Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations and finds that training the model is difficult, but it can be greatly improved by adding random noise to the gradient.
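The gradient-noise trick mentioned in this summary is simple to state: perturb each gradient with a small Gaussian sample before the update step. A minimal sketch on a toy quadratic objective (the learning rate, noise scale, and objective are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sgd_step(w, grad, lr=0.1, noise_scale=0.01):
    """One SGD step with Gaussian noise added to the gradient."""
    return w - lr * (grad + rng.normal(0.0, noise_scale, size=np.shape(grad)))

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3).
w = 0.0
for _ in range(200):
    w = noisy_sgd_step(w, 2.0 * (w - 3.0))
```

On a hard non-smooth objective like Neural Programmer's, the reported benefit is that the noise helps the optimizer escape poor regions; on this convex toy problem it simply converges to the minimum with a small residual jitter.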

### Random Walk Inference and Learning in A Large Scale Knowledge Base

- Computer Science
- EMNLP
- 2011

It is shown that a soft inference procedure based on a combination of constrained, weighted random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base.

### Reasoning With Neural Tensor Networks for Knowledge Base Completion

- Computer Science
- NIPS
- 2013

An expressive neural tensor network suitable for reasoning over relationships between two entities, given a subset of the knowledge base, is introduced; performance can be improved when entities are represented as an average of their constituent word vectors.
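The entity-representation trick in this last summary is straightforward: a multi-word entity is embedded as the mean of its word vectors, so rare entities inherit information from common words. A minimal sketch with invented two-dimensional word vectors (real systems would use pretrained, high-dimensional embeddings):

```python
import numpy as np

# Toy word vectors standing in for pretrained embeddings.
word_vec = {
    "new":  np.array([1.0, 0.0]),
    "york": np.array([0.0, 1.0]),
}

def entity_vector(name):
    """Represent a multi-word entity as the average of its word vectors."""
    words = name.split("_")
    return np.mean([word_vec[w] for w in words], axis=0)

v = entity_vector("new_york")   # → array([0.5, 0.5])
```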