Probabilistic Inductive Logic Programming

@inproceedings{Raedt2004ProbabilisticIL,
  title={Probabilistic Inductive Logic Programming},
  author={Luc De Raedt and Kristian Kersting},
  booktitle={Probabilistic Inductive Logic Programming},
  year={2004}
}
Probabilistic inductive logic programming, also known as statistical relational learning, addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and with first-order and relational logic representations. A rich variety of formalisms and learning techniques has been developed, but a unifying characterization of the underlying learning settings has so far been missing. In this chapter, we start from inductive logic…
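The distribution-over-possible-worlds semantics that many such formalisms build on can be illustrated with a brute-force sketch. Everything below — the burglary/earthquake facts, their probabilities, and the `alarm` clause — is an invented toy example, not the chapter's own notation:

```python
from itertools import product

# Hypothetical probabilistic facts: each holds independently with the
# given probability (ProbLog-style annotated facts).
prob_facts = {
    "burglary": 0.1,
    "earthquake": 0.2,
}

# Definite clauses:  alarm :- burglary.   alarm :- earthquake.
def alarm(world):
    return world["burglary"] or world["earthquake"]

def query_probability(query, prob_facts):
    """Sum the weight of every possible world in which the query holds."""
    names = list(prob_facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name in names:
            p = prob_facts[name]
            weight *= p if world[name] else 1.0 - p
        if query(world):
            total += weight
    return total

print(query_probability(alarm, prob_facts))  # ≈ 0.28  (= 1 - 0.9 * 0.8)
```

Real systems avoid this exponential enumeration (e.g. by compiling queries to propositional formulas and using weighted model counting), but the enumerated sum is the semantics they implement.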
Logic, Probability and Learning, or an Introduction to Statistical Relational Learning
TLDR
This tutorial starts from the classical settings for logic learning, namely learning from entailment, learning from interpretations, and learning from proofs, and shows how they can be extended with probabilistic methods.
Probabilistic Logic Learning - A Tutorial Abstract
TLDR
This tutorial starts from the classical settings for logic learning, namely learning from entailment, learning from interpretations, and learning from proofs, and shows how they can be extended with probabilistic methods.
A History of Probabilistic Inductive Logic Programming
TLDR
An overview of PILP is presented and the main results are discussed, showing how structure learning systems use parameter learning as a subroutine to improve the quality of their results.
Statistical Relational Artificial Intelligence: Logic, Probability, and Computation
TLDR
This book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models with weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
Learning the Parameters of Probabilistic Logic Programs from Interpretations
TLDR
A novel parameter estimation algorithm, LFI-ProbLog, for learning ProbLog programs from partial interpretations; it is essentially a soft-EM algorithm that constructs a propositional logic formula for each interpretation, which is used to estimate the marginals of the probabilistic parameters.
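The soft-EM idea behind such parameter learners can be shown on a deliberately tiny model. This is not LFI-ProbLog itself — the one-fact noisy-or model, the leak probability, and the observations below are all invented for illustration:

```python
# Toy soft-EM for a single probabilistic parameter, learned from partial
# observations. Model: latent fact f with unknown probability p; the
# observed atom a holds if f holds or an independent "leak" fires:
#   a :- f.   a :- leak.
leak = 0.2                                   # known leak probability
observations = [True, True, False, True, False, False, True, True]

p = 0.5                                      # initial guess for P(f)
for _ in range(100):
    expected_f = 0.0
    for a in observations:
        if a:
            p_a = 1.0 - (1.0 - p) * (1.0 - leak)  # P(a = True)
            expected_f += p / p_a                 # E-step: P(f | a = True)
        # if a is False, f must be False: contributes 0 expected counts
    p = expected_f / len(observations)            # M-step: new estimate
print(round(p, 5))  # converges to the maximum-likelihood value 0.53125
```

The fixed point matches the closed-form MLE here (observed frequency 5/8 = p + (1 - p) · 0.2 gives p = 0.53125); real learners face many interdependent parameters and partial truth values, which is why the formula-based E-step matters.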
Probabilistic Relational Learning and Inductive Logic Programming at a Global Scale
  • D. Poole, ILP, 2010
TLDR
This talk outlines how publishing ontologies, data, and probabilistic hypotheses/theories can let us base beliefs on evidence, and how the resulting world-wide mind can go beyond the aggregation of human knowledge.
Probabilistic Inductive Logic Programming Based on Answer Set Programming
We propose a new formal language for the expressive representation of probabilistic knowledge based on Answer Set Programming (ASP). It allows for the annotation of first-order formulas as well as…
On probabilistic inference in relational conditional logics
TLDR
This work proposes two different semantics and model theories for interpreting first-order probabilistic conditional logic, addresses the problems of ambiguity raised by the difference between subjective and statistical views, and develops a comprehensive list of desirable properties for inductive model-based probabilistic inference in relational frameworks.
Discriminative Learning with Markov Logic Networks
TLDR
This proposal presents two new discriminative learning algorithms for Markov logic networks, one of which outperforms existing learning methods for MLNs and traditional ILP systems in terms of predictive accuracy, with performance comparable to state-of-the-art results on some ILP benchmarks.
Inductive Logic Boosting
TLDR
The Inductive Logic Boosting framework is proposed to transform a relational dataset into a feature-based dataset, induce logic rules by boosting ProbLog Rule Trees, and relax the independence constraint of pseudo-likelihood.

References

Showing 1-10 of 89 references
Towards Combining Inductive Logic Programming with Bayesian Networks
TLDR
This paper positively answers Koller and Pfeffer's question of whether techniques from ILP could help to learn the logical component of first-order probabilistic models.
Advances in Inductive Logic Programming
TLDR
A state-of-the-art overview of Inductive Logic Programming is provided, based on the successful ESPRIT basic research project no. 6020, which can serve as a thorough introduction to the field.
Basic Principles of Learning Bayesian Logic Programs
TLDR
This chapter shows how the qualitative components of Bayesian logic programs can be learned by combining the inductive logic programming setting of learning from interpretations with score-based techniques for learning Bayesian networks.
Towards Learning Stochastic Logic Programs from Proof-Banks
TLDR
This work studies how to learn stochastic logic programs from proof-trees, employing a greedy search guided by the maximum-likelihood principle and failure-adjusted maximization, together with the least general generalization (lgg) operator.
Inductive logic programming - from machine learning to software engineering
TLDR
An extended, up-to-date survey of ILP is provided, emphasizing methods and systems suitable for software engineering applications, including inductive program development, testing, and maintenance.
Stochastic Logic Programs
TLDR
Stochastic logic programs are introduced as a means of providing a structured definition of such a probability distribution, and it is shown that the probabilities can be computed directly for fail-free logic programs and by normalisation for arbitrary logic programs.
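The normalisation step mentioned here can be made concrete with a hypothetical stochastic logic program in which one derivation branch fails; the predicates and clause probabilities below are invented for illustration:

```python
# Toy SLP sketch. Clause probabilities label the clauses of a predicate,
# and a derivation's probability is the product of its selected clauses.
# Query: s(X) :- t(X), r(X).
# t's clauses:  0.5: t(a).   0.3: t(b).   0.2: t(c).
# r's facts:    r(a).  r(b).   (r(c) is absent, so the t(c) branch fails)
t_clauses = {"a": 0.5, "b": 0.3, "c": 0.2}
r_facts = {"a", "b"}

# Only the t-clause choice is probabilistic; a derivation succeeds
# when its binding also satisfies r.
successful = {x: p for x, p in t_clauses.items() if x in r_facts}

z = sum(successful.values())                  # mass of successful derivations
answer_dist = {x: p / z for x, p in successful.items()}
print(answer_dist)  # ≈ {'a': 0.625, 'b': 0.375}
```

The failing t(c) branch carries probability mass 0.2, which is exactly why the successful derivations must be renormalised by z = 0.8; for a fail-free program z would be 1 and the products could be read off directly.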
Loglinear models for first-order probabilistic reasoning
TLDR
This work shows how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a loglinear model from data and compares the presented framework with other approaches to first-order probabilistic reasoning.
Inductive Logic Programming: Theory and Methods
Learning Stochastic Logic Programs
  • S. Muggleton, Electron. Trans. Artif. Intell., 2000
TLDR
This paper discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs and shows that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples.
Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
TLDR
A logical/mathematical framework for statistical parameter learning of parameterized logic programs (i.e., definite clause programs containing probabilistic facts with a parameterized distribution) is presented, together with a new EM algorithm that can significantly outperform the Inside-Outside algorithm.