# Semantic Reasoning with Differentiable Graph Transformations

```bibtex
@article{Cetoli2021SemanticRW,
  title   = {Semantic Reasoning with Differentiable Graph Transformations},
  author  = {A. Cetoli},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2107.09579}
}
```

This paper introduces a differentiable semantic reasoner in which rules are represented as a set of graph transformations. These rules can be written manually or inferred from a set of facts and goals presented as a training set. While the internal representation uses embeddings in a latent space, each rule can be expressed as a set of predicates conforming to a subset of Description Logic.

Keywords: Semantic Reasoning, Semantic Graphs, Graph Transformations, Differentiable Computing.
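To make the idea concrete, here is a minimal sketch, not the paper's actual implementation, of a rule acting as a soft graph transformation: a triple pattern is matched by cosine similarity between predicate embeddings, and a new edge is added when the match score is high. All predicate names and embedding values below are illustrative toys.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy predicate embeddings (illustrative, hand-picked values).
emb = {
    "parent_of":   np.array([1.00, 0.0, 0.0, 0.0]),
    "father_of":   np.array([0.95, 0.1, 0.0, 0.0]),  # near-synonym of parent_of
    "ancestor_of": np.array([0.00, 1.0, 0.0, 0.0]),
}

def apply_rule(facts, pattern_pred, new_pred, threshold=0.8):
    """Soft rule (x, pattern_pred, y) -> (x, new_pred, y).

    The match score is a smooth function of the embeddings, so in a
    full system it could be trained by backpropagation; here we simply
    threshold it to decide whether the transformation fires."""
    derived = []
    for (s, p, o) in facts:
        score = cosine(emb[p], emb[pattern_pred])
        if score > threshold:
            derived.append((s, new_pred, o, score))
    return derived

facts = [("alice", "father_of", "bob"), ("carol", "ancestor_of", "dave")]
print(apply_rule(facts, "parent_of", "ancestor_of"))
# The rule fires only for the father_of edge, whose embedding is
# close to parent_of; the ancestor_of edge does not match.
```

Because the match score is differentiable with respect to the embeddings, a training set of facts and goals can adjust which transformations fire, which is the sense in which the rules can be "inferred".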

#### References

Showing 1-10 of 13 references.

Differentiable Reasoning on Large Knowledge Bases and Natural Language

- Computer Science
- Knowledge Graphs for eXplainable Artificial Intelligence
- 2020

Proposes Greedy NTPs, an extension of NTPs that addresses their complexity and scalability limitations, making them applicable to real-world datasets, together with a novel approach for jointly reasoning over KBs and textual mentions by embedding logical facts and natural-language sentences in a shared embedding space.

End-to-end Differentiable Proving

- Computer Science
- NIPS
- 2017

Demonstrates that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable, function-free first-order logic rules.

NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language

- Computer Science
- ACL
- 2019

Proposes a model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language, using a Prolog prover with a similarity function over pretrained sentence encoders and fine-tuning the representations for the similarity function via backpropagation.

A Description Logic Primer

- Computer Science
- ArXiv
- 2012

Explains the main concepts and features with examples before the syntax and semantics of the DL SROIQ are defined in detail.

Towards Neural Theorem Proving at Scale

- Computer Science
- ArXiv
- 2018

Focuses on the Neural Theorem Prover model proposed by Rocktäschel and Riedel (2017), a continuous relaxation of the Prolog backward-chaining algorithm in which unification between terms is replaced by the similarity between their embedding representations.

Neural-Symbolic Cognitive Reasoning

- Computer Science
- Cognitive Technologies
- 2009

This book is the first to offer a self-contained presentation of neural network models for a number of computer science logics, including modal, temporal, and epistemic logics, and focuses on the benefits of integrating effective, robust learning with expressive reasoning capabilities.

STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving

- Computer Science, Mathematics
- IJCAI
- 1971

We describe a new problem solver called STRIPS that attempts to find a sequence of operators in a space of world models to transform a given initial world model into a model in which a given goal…

GloVe: Global Vectors for Word Representation

- Computer Science
- EMNLP
- 2014

Proposes a new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context-window methods, and produces a vector space with meaningful substructure.

Learning Explanatory Rules from Noisy Data

- Computer Science, Mathematics
- J. Artif. Intell. Res.
- 2018

This paper proposes a Differentiable Inductive Logic framework that can not only solve tasks for which traditional ILP systems are suited, but also shows a robustness to noise and error in the training data that ILP cannot cope with.

Attention is All you Need

- Computer Science
- NIPS
- 2017

Proposes the Transformer, a new, simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely; it generalizes well to other tasks, applying successfully to English constituency parsing with both large and limited training data.