Learning Knowledge Base Inference with Neural Theorem Provers

@inproceedings{Rocktaschel2016LearningKB,
  title={Learning Knowledge Base Inference with Neural Theorem Provers},
  author={Tim Rockt{\"a}schel and Sebastian Riedel},
  booktitle={AKBC@NAACL-HLT},
  year={2016}
}
In this paper we present a proof-of-concept implementation of Neural Theorem Provers (NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable with respect to representations of symbols in a knowledge base…
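The core mechanism behind differentiability is soft unification: rather than requiring two symbols to match exactly, a backward-chaining prover compares their vector representations with a differentiable similarity, so proof success scores can be backpropagated into the symbol embeddings. The following is a minimal sketch of that idea, assuming an RBF-kernel similarity and min/max aggregation over proof steps; the toy facts, rule, and all names (emb, unify, prove) are illustrative and not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for predicate symbols. In the paper these are *learned*
# end-to-end so that related symbols (fatherOf, parentOf) end up close;
# here they are random, so the resulting scores are merely illustrative.
DIM = 8
emb = {s: rng.normal(size=DIM)
       for s in ["parentOf", "fatherOf", "grandparentOf", "grandpaOf"]}

def unify(a, b, mu=1.0):
    """Soft unification: RBF-kernel similarity of two symbol embeddings
    (1.0 for identical symbols, smoothly smaller for dissimilar ones)."""
    d = np.linalg.norm(emb[a] - emb[b])
    return float(np.exp(-d ** 2 / (2 * mu ** 2)))

# Knowledge-base facts: (predicate, subject, object).
facts = [("fatherOf", "abe", "homer"), ("parentOf", "homer", "bart")]

# One function-free rule: grandparentOf(X, Z) :- parentOf(X, Y), parentOf(Y, Z).
RULE_HEAD, RULE_BODY = "grandparentOf", ("parentOf", "parentOf")

def prove(goal):
    """Score a goal by soft-unifying its predicate with the rule head and
    each body atom with a fact: min over the atoms of a single proof,
    max over alternative proofs."""
    pred, s, o = goal
    head = unify(pred, RULE_HEAD)
    best = 0.0
    for p1, s1, o1 in facts:          # candidate fact for parentOf(X, Y)
        for p2, s2, o2 in facts:      # candidate fact for parentOf(Y, Z)
            if s1 != s or o2 != o or o1 != s2:
                continue              # hard variable bindings, for brevity
            best = max(best, min(head, unify(p1, RULE_BODY[0]),
                                 unify(p2, RULE_BODY[1])))
    return best

# grandpaOf(abe, bart) gets a nonzero proof score even though neither the
# predicate grandpaOf nor the fact itself appears in the knowledge base.
print(prove(("grandpaOf", "abe", "bart")))
```

Because every operation above (kernel, min, max) is differentiable almost everywhere, the proof score can serve as a training signal that pulls the embeddings of grandpaOf and grandparentOf, or fatherOf and parentOf, toward one another; a full prover would additionally handle variable substitutions softly rather than with the hard binding check used here.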
