Learning Structure and Parameters of Stochastic Logic Programs

@inproceedings{Muggleton2002LearningSA,
  title={Learning Structure and Parameters of Stochastic Logic Programs},
  author={Stephen Muggleton},
  booktitle={ILP},
  year={2002}
}
Previous papers have studied learning of Stochastic Logic Programs (SLPs) either as a purely parametric estimation problem or by separating structure learning and parameter estimation into distinct phases. In this paper we consider ways in which both the structure and the parameters of an SLP can be learned simultaneously. The paper assumes an ILP algorithm, such as Progol or FOIL, in which clauses are constructed independently. We derive analytical and numerical methods for efficient computation… 
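
Not from the paper, but as a quick illustration of the object being learned: an SLP attaches a probability label to each clause (labels of clauses sharing a head predicate sum to one in the normalised case), and the probability of a derivation is the product of the labels of the clauses it uses. The Python sketch below uses made-up predicate names and values.

  # Minimal sketch (illustrative only) of a normalised SLP: each clause
  # carries a probability label, and labels of clauses sharing a head
  # predicate sum to 1. A derivation's probability is the product of the
  # labels of the clauses it uses.
  slp = {
      "coin/1": [(0.6, "coin(head)."), (0.4, "coin(tail).")],
  }

  def derivation_probability(labels_used):
      """Probability of a derivation = product of the clause labels used."""
      p = 1.0
      for label in labels_used:
          p *= label
      return p

  # A derivation that used the 0.6-labelled clause twice:
  print(derivation_probability([0.6, 0.6]))  # 0.36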
Towards Learning Stochastic Logic Programs from Proof-Banks
TLDR
This work studies how to learn stochastic logic programs from proof-trees by a greedy search guided by the maximum likelihood principle, using failure-adjusted maximization for the parameters and the least general generalization (lgg) operator for the structure.
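
For readers unfamiliar with the lgg operator named above, the sketch below anti-unifies two atoms in the standard Plotkin sense: agreeing structure is kept, and each distinct pair of disagreeing subterms is replaced by one fresh variable. The term encoding is my own, not from the paper.

  # Hedged sketch of Plotkin's least general generalisation (lgg) of atoms.
  # Terms are tuples ("functor", arg1, ...); e.g. ("p", "a", ("f", "b"))
  # encodes p(a, f(b)). Returned strings "V1", "V2", ... are variables.
  def lgg(s, t, table=None, counter=None):
      if table is None:
          table, counter = {}, [0]
      if s == t:
          return s
      if (isinstance(s, tuple) and isinstance(t, tuple)
              and len(s) == len(t) and s[0] == t[0]):
          # Same functor and arity: generalise argument-wise.
          return tuple([s[0]] + [lgg(a, b, table, counter)
                                 for a, b in zip(s[1:], t[1:])])
      if (s, t) not in table:  # same mismatched pair -> same variable
          counter[0] += 1
          table[(s, t)] = "V%d" % counter[0]
      return table[(s, t)]

  # lgg(p(a, b), p(a, c)) = p(a, V1)
  print(lgg(("p", "a", "b"), ("p", "a", "c")))  # ('p', 'a', 'V1')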
MetaBayes: Bayesian Meta-Interpretative Learning Using Higher-Order Stochastic Refinement
TLDR
This paper shows how Meta-Interpretive Learning (MIL) can be extended to implement a Bayesian posterior distribution over the hypothesis space by treating the meta-interpreter as a Stochastic Logic Program.
ProbPoly-A Probabilistic Inductive Logic Programming Framework with Application in Learning Requirements
TLDR
The conclusions are that ProbPoly is a promising idea for PILP, with a well-founded theoretical background and numerous directions for improvement in both theoretical and technical aspects.
ProbPoly: a probabilistic inductive logic programming framework with application in model checking
TLDR
This work introduces a basic method for revising the probabilities of a simple discrete time Markov chain (DTMC) using an integration of ProbPoly and a probabilistic model checker, so that properties that were initially violated are satisfied in the new DTMC.
PFORTE: Revising Probabilistic FOL Theories
TLDR
The first revision system for SRL classification, PFORTE, is described; it addresses two problems: all examples must be classified, and they must be classified well.
Decision-Theoretic Logic Programs
TLDR
A new framework, Decision-Theoretic Logic Programs (DTLPs), is proposed that extends Probabilistic ILP models by integrating decision-making features developed in the Statistical Decision Theory area, and an implementation of DTLPs using Stochastic Logic Programs is introduced.
CLP(BN): Constraint Logic Programming for Probabilistic Knowledge
TLDR
The CLP(BN) language represents the joint probability distribution over missing values in a database or logic program by using constraints to represent Skolem functions.
Induction as a search procedure
TLDR
This chapter introduces Inductive Logic Programming from the perspective of search algorithms in Computer Science: it first briefly considers the Version Spaces approach to induction, and then focuses on ILP, from its formal definition to its main techniques and strategies.
Integrating Logic and Probability: Algorithmic Improvements in Markov Logic Networks
TLDR
This dissertation proposes a novel generative structure learning algorithm based on the iterated local search metaheuristic and extends IRoTS by proposing MC-IRoTS, an algorithm that combines MCMC methods and SAT solvers for the problem of conditional inference in MLNs.
...

References

Parameter Estimation in Stochastic Logic Programs
TLDR
A new algorithm called failure-adjusted maximisation (FAM) is presented: an instance of the EM algorithm that applies specifically to normalised SLPs and provides a closed form for computing parameter updates within an iterative maximisation approach.
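
The flavour of that closed-form update, in a hedged sketch with invented counts: once the E-step has produced expected clause counts, adjusted to account for failed derivations, the M-step renormalises them within each head predicate.

  # Hedged sketch of a FAM-style M-step: expected clause counts from the
  # E-step (failure-adjusted) are renormalised per head predicate. The
  # counts below are made up for illustration.
  expected_counts = {"s/1": {"c1": 7.5, "c2": 2.5}}

  def m_step(expected_counts):
      return {pred: {c: n / sum(counts.values()) for c, n in counts.items()}
              for pred, counts in expected_counts.items()}

  print(m_step(expected_counts))  # {'s/1': {'c1': 0.75, 'c2': 0.25}}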
Learning Stochastic Logic Programs
  • S. Muggleton
  • Computer Science
    Electron. Trans. Artif. Intell.
  • 2000
TLDR
This paper discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs and shows that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples.
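
A one-line gloss of the last point (my notation, not the paper's): a derivation d applying clauses c_1, …, c_n has probability

  P(d) = \prod_{i=1}^{n} p_{c_i}, \qquad 0 < p_{c_i} \le 1,

so P(d) can only shrink as n grows; the likelihood term of the posterior therefore rewards hypotheses under which the examples have short derivations.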
Loglinear models for first-order probabilistic reasoning
TLDR
This work shows how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a loglinear model from data and compares the presented framework with other approaches to first-order probabilistic reasoning.
Efficient Induction of Logic Programs
TLDR
The concept of h-easy rlgg clauses is introduced, and it is proved that the length of a certain class of "determinate" rlggs is bounded by a polynomial function of certain features of the background knowledge.
Learning Probabilities for Noisy First-Order Rules
TLDR
An approach that takes a knowledge base in an expressive rule-based first-order language and learns the probabilistic parameters associated with those rules from data cases; it can handle data cases where many of the relevant aspects of the situation are unobserved.
Inductive Logic Programming: Issues, Results and the LLL Challenge (abstract)
TLDR
This work has shown that ILP approaches to natural language problems extend with relative ease to various languages other than English, and the area of Learning Language in Logic (LLL) is producing a number of challenges to existing ILP theory and implementation.
Learning from Positive Data
  • S. Muggleton
  • Computer Science
    Inductive Logic Programming Workshop
  • 1996
TLDR
New results are presented which show that, within a Bayesian framework, not only grammars but also logic programs are learnable with arbitrarily low expected error from positive examples only. Moreover, the upper bound on the expected error of a learner which maximises the Bayes' posterior probability is within a small additive term of that of one which does the same from a mixture of positive and negative examples.
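
Here "maximises the Bayes' posterior probability" is the standard MAP learner; in generic notation (mine rather than the paper's):

  H_{\mathrm{MAP}} = \arg\max_{H} P(H)\, P(E \mid H),

with E consisting of positive examples only.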
Learning Probabilistic Relational Models
TLDR
This paper describes both parameter estimation and structure learning -- the automatic induction of the dependency structure in a model and shows how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets.
Maximum likelihood from incomplete data via the EM algorithm (with discussion)