Corpus ID: 44098071

DeepProbLog: Neural Probabilistic Logic Programming

@inproceedings{Manhaeve2018DeepProbLogNP,
  title={DeepProbLog: Neural Probabilistic Logic Programming},
  author={Robin Manhaeve and Sebastijan Dumancic and Angelika Kimmig and Thomas Demeester and Luc De Raedt},
  booktitle={BNAIC/BENELEARN},
  year={2018}
}
We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to…
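
As a concrete illustration of the neural-predicate idea, consider the paper's MNIST-addition task: a classifier provides a distribution over each image's digit, and logical inference marginalizes over all digit pairs that prove the query. The following is a minimal pure-Python sketch with made-up softmax outputs standing in for trained networks; it is not the DeepProbLog API.

```python
# Sketch of a neural predicate, assuming the paper's MNIST-addition setup:
# P(addition(img1, img2, Z)) sums P(digit1 = d1) * P(digit2 = d2) over
# every digit pair (d1, d2) that proves d1 + d2 = Z.

def addition_probability(p_digit1, p_digit2, total):
    """Marginalize over all proofs of addition(img1, img2, total)."""
    return sum(
        p_digit1[d1] * p_digit2[d2]
        for d1 in range(10)
        for d2 in range(10)
        if d1 + d2 == total
    )

# Hypothetical classifier outputs: mostly confident in a 3 and a 5.
p1 = [0.01] * 10; p1[3] = 0.91
p2 = [0.01] * 10; p2[5] = 0.91

print(addition_probability(p1, p2, 8))  # dominated by the (3, 5) proof
```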

DeepStochLog: Neural Stochastic Logic Programming

This work introduces neural grammar rules into stochastic definite clause grammars, creating a neural-symbolic framework that can be trained end-to-end, and shows that inference and learning in neural stochastic logic programming scale much better than in neural probabilistic logic programming.
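
The stochastic-grammar semantics behind this can be sketched in a few lines: each rule carries a probability (in DeepStochLog, supplied by a neural network), a derivation's probability is the product of the rules it uses, and a string's probability sums over its derivations. The two-rule grammar below is a hypothetical example, not taken from the paper.

```python
# Hypothetical two-rule stochastic grammar over the language a*:
#   0.3 :: N -> "a"       (stop)
#   0.7 :: N -> "a" N     (continue)
RULES = {"stop": 0.3, "more": 0.7}

def string_probability(s):
    """Product of rule probabilities along the derivation of s
    (derivations are unique in this grammar, so no sum is needed)."""
    if s == "a":
        return RULES["stop"]
    if s.startswith("a"):
        return RULES["more"] * string_probability(s[1:])
    return 0.0

print(string_probability("aaa"))  # 0.7 * 0.7 * 0.3 = 0.147
```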

Approximate Inference for Neural Probabilistic Logic Programming

A method for approximate inference using an A*-like search, called DPLA*, is proposed, together with an exploration strategy for proving in a neural-symbolic setting and a parametric heuristic to guide the proof search.
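
A minimal sketch of what an A*-like proof search looks like, assuming proof steps are scored by probabilities: partial proofs are expanded in order of accumulated negative log-probability plus a heuristic estimate. The toy proof tree and the zero heuristic below are illustrative stand-ins for the paper's parametric heuristic.

```python
import heapq, math

# Hypothetical proof tree: node -> [(probability of step, child), ...];
# leaves (no children) are completed proofs.
TREE = {
    "goal": [(0.9, "a"), (0.1, "b")],
    "a":    [(0.2, "a1"), (0.8, "a2")],
    "b":    [(1.0, "b1")],
    "a1": [], "a2": [], "b1": [],
}

def best_proof(root, heuristic=lambda node: 0.0):
    """Best-first search; with an admissible estimate of the remaining
    negative log-probability this behaves like A*."""
    frontier = [(heuristic(root), 0.0, root, [root])]  # (f = g + h, g, node, path)
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if not TREE[node]:                 # leaf: proof complete
            return math.exp(-g), path
        for p, child in TREE[node]:
            g2 = g - math.log(p)           # cost = accumulated -log P
            heapq.heappush(frontier, (g2 + heuristic(child), g2, child, path + [child]))
    return 0.0, []

print(best_proof("goal"))  # (0.72, ['goal', 'a', 'a2']), i.e. 0.9 * 0.8
```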

NeuPSL: Neural Probabilistic Soft Logic

A novel neuro-symbolic framework that unites state-of-the-art symbolic reasoning with the low-level perception of deep neural networks is presented, and it is shown how to seamlessly integrate neural and symbolic parameter learning and inference.
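
PSL's underlying relaxation is Lukasiewicz logic: atoms take soft truth values in [0, 1] (in NeuPSL, some of these are produced by neural networks), and learning minimizes each rule's distance to satisfaction. A small sketch with illustrative values:

```python
# Lukasiewicz relaxation: soft truth values in [0, 1] and hinge-shaped
# rule penalties. All numbers below are illustrative.

def l_and(a, b):
    return max(0.0, a + b - 1.0)        # Lukasiewicz conjunction

def distance_to_satisfaction(body, head):
    """For a rule body -> head: max(0, body - head), zero iff satisfied."""
    return max(0.0, body - head)

# friends(A,B) & smokes(A) -> smokes(B); friends(A,B) is scored by a
# hypothetical neural module, the smokes/1 values are free variables.
friends_ab, smokes_a, smokes_b = 0.9, 0.8, 0.4
print(distance_to_satisfaction(l_and(friends_ab, smokes_a), smokes_b))  # 0.3
```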

DeepProbLog: Integrating Logic and Learning through Algebraic Model Counting

With DeepProbLog, it is shown how neural networks can be integrated with logical and probabilistic methods by means of algebraic model counting, and it is argued that neurosymbolic integrated methods should have the pure neural, logical, and probabilistic methods as edge cases.
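
The key move in algebraic model counting is to compile the query into a circuit once and then evaluate it under different semirings. A minimal sketch, with a hypothetical two-fact query decomposed into mutually exclusive branches:

```python
# The circuit encodes "a OR b", decomposed disjointly as a OR (not-a AND b),
# so the semiring "plus" combines the branches soundly. Fact probabilities
# are illustrative.

PROB = {"plus": lambda x, y: x + y, "times": lambda x, y: x * y}
MPE  = {"plus": max,               "times": lambda x, y: x * y}

def evaluate(semiring, p_a, p_b):
    plus, times = semiring["plus"], semiring["times"]
    return plus(p_a, times(1.0 - p_a, p_b))

print(evaluate(PROB, 0.3, 0.5))  # query probability: 0.65
print(evaluate(MPE,  0.3, 0.5))  # probability of the best explanation: 0.35
```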

VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming

This work is the first to propose a general-purpose, end-to-end framework integrating probabilistic logic programming into a deep generative model, and it provides evidence for the benefits of this neuro-symbolic integration in terms of both task generalization and data efficiency.

Training Neural Networks to Do Logic, with Logic

This work overviews combinatorial generation algorithms, with a focus on lambda terms and related type-inference algorithms, all elegantly expressible in a logic programming language that supports backtracking and unification, and introduces methods for training neural networks as theorem provers; a sketch of the generation side follows.
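
To make the generation side concrete: here is a small Python sketch enumerating closed lambda terms in de Bruijn form, with generators playing the role of Prolog's backtracking. The size measure (internal nodes plus variables) is one arbitrary choice among several used in this literature.

```python
def terms(size, depth=0):
    """Yield closed lambda terms of the given size in de Bruijn form,
    where depth counts the enclosing lambda binders."""
    if size == 1:                                  # a variable: any bound index
        for i in range(depth):
            yield ("var", i)
    elif size > 1:
        for body in terms(size - 1, depth + 1):    # lambda abstraction
            yield ("lam", body)
        for k in range(1, size - 1):               # application: split the size
            for f in terms(k, depth):
                for arg in terms(size - 1 - k, depth):
                    yield ("app", f, arg)

print(list(terms(3)))
# [('lam', ('lam', ('var', 0))), ('lam', ('lam', ('var', 1)))]
```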

Semantic Probabilistic Layers for Neuro-Symbolic Learning

This work designs a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of symbolic constraints, and shows that SPLs outperform competitors in accuracy on challenging SOP tasks, including hierarchical multi-label classification, pathfinding, and preference learning, while retaining perfect constraint satisfaction.
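
The core mechanism can be sketched by brute force: keep only the label configurations that satisfy the constraint and renormalize, so predictions are consistent by construction. (The actual layer uses tractable circuits rather than enumeration; the scores and constraint below are illustrative.)

```python
from itertools import product
import math

def spl_distribution(scores, constraint):
    """Distribution over {0,1}^n restricted to configurations where
    constraint(y) holds; scores[i] is a per-label logit."""
    weight = {}
    for y in product([0, 1], repeat=len(scores)):
        if constraint(y):
            weight[y] = math.exp(sum(s for s, yi in zip(scores, y) if yi))
    z = sum(weight.values())
    return {y: w / z for y, w in weight.items()}

# Hierarchical multi-label constraint: the child label (index 1)
# implies the parent label (index 0), so (0, 1) gets zero mass.
dist = spl_distribution([0.2, 1.5], lambda y: not (y[1] and not y[0]))
for y, p in sorted(dist.items()):
    print(y, round(p, 3))
```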

Inference in relational neural machines

This paper proposes and compares different inference schemata for Relational Neural Machines, together with preliminary results showing the effectiveness of the proposed methodologies.
...

References

ProbLog2: Probabilistic Logic Programming

ProbLog2, the state-of-the-art implementation of the probabilistic programming language ProbLog, is presented; it offers both command-line access to inference and learning and a Python library for building statistical relational learning applications from the system's components.
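
A minimal usage sketch with the Python library, following the pattern in the ProbLog documentation (requires pip install problog; the exact API may vary across versions):

```python
from problog.program import PrologString
from problog import get_evaluatable

# Classic alarm example: alarm fires on a burglary or an earthquake.
model = PrologString("""
0.1::burglary.
0.2::earthquake.
alarm :- burglary.
alarm :- earthquake.
query(alarm).
""")

# Maps each query atom to its probability: alarm -> 1 - 0.9*0.8 = 0.28.
print(get_evaluatable().create_from(model).evaluate())
```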

Neural Logic Machines

The Neural Logic Machine, a neural-symbolic architecture for both inductive learning and logic reasoning, is proposed; it achieves perfect generalization on a number of tasks, from relational reasoning on family trees and general graphs to decision-making tasks including sorting arrays, finding shortest paths, and playing the blocks world.

NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language

A model that combines neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language, using a Prolog prover with a similarity function over pretrained sentence encoders, and fine-tuning the representations for the similarity function via backpropagation.
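
The weak-unification idea can be sketched as follows: two symbols unify with a score given by the similarity of their embeddings, and a proof succeeds only if the score clears a threshold. The vectors and the 0.5 threshold below are made-up stand-ins for pretrained sentence encodings.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

EMBED = {                        # hypothetical encoder outputs
    "is located in": [0.9, 0.1, 0.3],
    "lies in":       [0.8, 0.2, 0.3],
    "eats":          [0.1, 0.9, 0.0],
}

def weak_unify(pred1, pred2, threshold=0.5):
    """Unify two predicate symbols with a similarity score, or fail."""
    score = cosine(EMBED[pred1], EMBED[pred2])
    return score if score >= threshold else None

print(weak_unify("is located in", "lies in"))  # high score: unifies
print(weak_unify("is located in", "eats"))     # below threshold: None
```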

TensorLog: Deep Learning Meets Probabilistic Databases

An implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as TensorFlow or Theano, enabling high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic.
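
TensorLog's central trick, sketched with a toy knowledge base: entities become one-hot vectors, a binary relation becomes an adjacency matrix, and chaining rule bodies becomes matrix multiplication, which is differentiable end to end. The entities and the grandparent rule below are illustrative.

```python
import numpy as np

ENTITIES = ["alice", "bob", "carol"]
IDX = {e: i for i, e in enumerate(ENTITIES)}

# parent/2 as an adjacency matrix: PARENT[i, j] = 1 iff parent(e_i, e_j).
PARENT = np.zeros((3, 3))
PARENT[IDX["alice"], IDX["bob"]] = 1.0    # parent(alice, bob)
PARENT[IDX["bob"], IDX["carol"]] = 1.0    # parent(bob, carol)

def grandparent_of(entity):
    """grandparent(X, Z) :- parent(X, Y), parent(Y, Z):
    chaining the two body atoms is a double matrix product."""
    one_hot = np.zeros(3)
    one_hot[IDX[entity]] = 1.0
    return one_hot @ PARENT @ PARENT      # scores over candidate Z

print(grandparent_of("alice"))            # mass on carol: [0. 0. 1.]
```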

Learning Libraries of Subroutines for Neurally-Guided Bayesian Program Induction

The model is used to synthesize functions on lists, edit text, and solve symbolic regression problems, showing how the model learns a domain-specific library of program components for expressing solutions to problems in the domain.

Learning Relational Representations with Auto-encoding Logic Programs

A novel framework for relational representation learning that combines the best of both worlds: inspired by the auto-encoding principle, it uses first-order logic as a data representation language, and the mapping between the original and latent representations is done by means of logic programs instead of neural networks.

Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures

A lifted framework in which first-order rules are used to describe the structure of a given problem setting, allowing a declarative specification of latent relational structures that can then be efficiently discovered in a given data set using neural network learning.

Inference and learning in probabilistic logic programs using weighted Boolean formulas

The results show that the inference algorithms improve upon the state of the art in probabilistic logic programming and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
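
The reduction can be sketched by brute force: attach a weight to every literal, encode the query as a Boolean formula, and sum the weights of its models. (The actual system compiles the formula rather than enumerating assignments; the weights and formula below are a made-up two-variable example.)

```python
from itertools import product

# Literal weights derived from fact probabilities: w(a)=0.6, w(not a)=0.4, etc.
WEIGHT = {("a", True): 0.6, ("a", False): 0.4,
          ("b", True): 0.3, ("b", False): 0.7}

def wmc(formula, variables):
    """Weighted model count: sum of product-of-literal-weights over
    the satisfying assignments of the formula."""
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            w = 1.0
            for v, val in assignment.items():
                w *= WEIGHT[(v, val)]
            total += w
    return total

# Query encoded as the Boolean formula (a OR b).
print(wmc(lambda m: m["a"] or m["b"], ["a", "b"]))  # 1 - 0.4*0.7 = 0.72
```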

End-to-end Differentiable Proving

It is demonstrated that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.

Neural Programmer-Interpreters

The neural programmer-interpreter (NPI) is proposed, a recurrent and compositional neural network that learns to represent and execute programs and has the capability to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models.
...