Inductive logic programming at 30: a new introduction

@article{Cropper2022InductiveLP,
  title={Inductive logic programming at 30: a new introduction},
  author={Andrew Cropper and Sebastijan Dumancic},
  journal={ArXiv},
  year={2022},
  volume={abs/2008.07912}
}
Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and… 
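The abstract describes the core ILP setting: a hypothesis is a set of logical rules, and it should entail the positive training examples (and none of the negatives) given the background knowledge. As a minimal, self-contained sketch of what "entails" means here (not from the paper; the parent/grandparent predicates and names are illustrative), consider a toy Datalog-style hypothesis tested by naive forward chaining:

```python
# Background knowledge: ground facts as (predicate, args) tuples.
background = {
    ("parent", ("alice", "bob")),
    ("parent", ("bob", "carol")),
    ("parent", ("bob", "dave")),
}

# Candidate hypothesis: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# Rules are (head, body) pairs; strings starting uppercase are variables.
hypothesis = [
    (("grandparent", ("X", "Z")),
     [("parent", ("X", "Y")), ("parent", ("Y", "Z"))]),
]

def match(atom, fact, subst):
    """Extend substitution `subst` so that `atom` matches ground `fact`, or return None."""
    (pred, args), (fpred, fargs) = atom, fact
    if pred != fpred or len(args) != len(fargs):
        return None
    subst = dict(subst)
    for a, f in zip(args, fargs):
        if a[0].isupper():                 # variable
            if subst.get(a, f) != f:       # clashes with an earlier binding
                return None
            subst[a] = f
        elif a != f:                       # constant mismatch
            return None
    return subst

def forward_chain(facts, rules):
    """Naive bottom-up evaluation: apply rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Enumerate every grounding of the body over the derived facts.
            substs = [{}]
            for atom in body:
                substs = [s2 for s in substs for f in derived
                          if (s2 := match(atom, f, s)) is not None]
            for s in substs:
                pred, args = head
                ground = (pred, tuple(s.get(a, a) for a in args))
                if ground not in derived:
                    derived.add(ground)
                    changed = True
    return derived

def covers(hyp, background, example):
    """Does the hypothesis, together with the background, entail the example?"""
    return example in forward_chain(background, hyp)

print(covers(hypothesis, background, ("grandparent", ("alice", "carol"))))  # True
print(covers(hypothesis, background, ("grandparent", ("carol", "alice"))))  # False
```

An ILP system searches a space of such rule sets for one whose coverage matches the labelled examples; real systems replace this brute-force entailment check with far more efficient machinery.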



Inductive logic programming at 30
TLDR
As ILP turns 30, a review of the last decade of research focuses on new meta-level search methods, techniques for learning recursive programs, new approaches for predicate invention, and the use of different technologies.
Learning programs by learning from failures
TLDR
Popper, an ILP system that implements this approach by combining answer set programming and Prolog, is introduced; experiments show that constraints drastically improve learning performance and that Popper can outperform existing ILP systems in both predictive accuracy and learning time.
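The summary above describes a generate-test loop in which each failed hypothesis yields constraints that prune the remaining search space. A toy propositional sketch of that idea (a drastic simplification: Popper itself learns first-order programs via answer set programming, and all conditions and examples below are made up) can model a hypothesis as a set of body conditions, where removing conditions generalises and adding them specialises:

```python
from itertools import combinations

# Each example is the set of attributes it satisfies; a hypothesis
# covers an example iff all its conditions hold in it.
conditions = {"has_wings", "lays_eggs", "has_fur", "flies"}
positives = [{"has_wings", "lays_eggs", "flies"}]        # e.g. a bird
negatives = [{"has_fur", "lays_eggs"}, {"has_fur"}]      # e.g. platypus, dog

def covers(hyp, example):
    return hyp <= example

def learn():
    pruned = []  # learned constraints: predicates that reject candidates
    # Enumerate candidates from most general (fewest conditions) upward.
    for size in range(len(conditions) + 1):
        for body in combinations(sorted(conditions), size):
            hyp = set(body)
            if any(constraint(hyp) for constraint in pruned):
                continue  # rejected without testing: pruned by a failure
            if any(covers(hyp, n) for n in negatives):
                # Inconsistent: every generalisation (subset of the body)
                # also covers that negative, so prune all generalisations.
                pruned.append(lambda h, bad=hyp: h <= bad)
                continue
            if not all(covers(hyp, p) for p in positives):
                # Incomplete: every specialisation (superset of the body)
                # also misses that positive, so prune all specialisations.
                pruned.append(lambda h, bad=hyp: h >= bad)
                continue
            return hyp  # consistent and complete
    return None

print(learn())
```

The key point mirrored from the summary is that a single failure eliminates an entire region of the hypothesis space, not just the tested candidate, which is why such constraints can drastically cut learning time.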
Learning Symbolic Operators for Task and Motion Planning
TLDR
This work proposes a bottom-up relational learning method for operator learning and shows how the learned operators can be used for planning in a TAMP system, finding this approach to substantially outperform several baselines, including three graph neural network-based model-free approaches from the recent literature.
PRIMA: Planner-Reasoner Inside a Multi-task Reasoning Agent
TLDR
This work proposes a Planner-Reasoner framework that achieves state-of-the-art multi-task reasoning (MTR) capability with high efficiency; the entire model is trained end-to-end using deep reinforcement learning, and experimental studies over a variety of domains validate its effectiveness.
Preprocessing in Inductive Logic Programming
TLDR
It is shown experimentally that bottom preprocessing can reduce learning times of ILP systems on hard problems, and can be especially significant when the amount of background knowledge in the problem is large.
Efficient lifting of symmetry breaking constraints for complex combinatorial problems
TLDR
This work extends the learning framework and implementation of a model-based approach for Answer Set Programming to overcome limitations and address challenging problems, such as the Partner Units Problem.
An Interactive Explanatory AI System for Industrial Quality Control
TLDR
An approach for an interactive support system for classifications in an industrial quality control setting that combines the advantages of both knowledge-driven and data-driven machine learning methods, in particular inductive logic programming and convolutional neural networks, with human expertise and control is proposed.
FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for Multi-Category Classification of Mixed Data
TLDR
The FOLD-RM algorithm is competitive in performance with the widely used XGBoost algorithm; however, unlike XGBoost, FOLD-RM produces an explainable model and provides human-friendly explanations for its predictions.
FOLD-RM: A Scalable, Efficient, and Explainable Inductive Learning Algorithm for Multi-Category Classification of Mixed Data
TLDR
The FOLD-RM algorithm is competitive in performance with widely used, state-of-the-art algorithms such as XGBoost and multi-layer perceptrons (MLPs); however, unlike these algorithms, FOLD-RM produces an explainable model.
EvoLearner: Learning Description Logics with Evolutionary Algorithms
TLDR
EvoLearner is proposed, an evolutionary approach to learning concepts in ALCQ(D), the attributive language with complement paired with qualified cardinality restrictions and data properties; it also contributes a novel initialization method for the initial population.
...

References

SHOWING 1-10 OF 289 REFERENCES
An Introduction to Inductive Logic Programming and Learning Language in Logic
TLDR
This chapter introduces Inductive Logic Programming (ILP) and Learning Language in Logic (LLL); elementary topics are covered and more advanced topics are discussed.
Turning 30: New Ideas in Inductive Logic Programming
TLDR
This work focuses on new methods for learning recursive programs that generalise from few examples, a shift from using hand-crafted background knowledge to learning background knowledge, and the use of different technologies, notably answer set programming and neural networks.
The ILASP system for Inductive Learning of Answer Set Programs
TLDR
A comprehensive summary of the evolution of the ILASP system is presented, describing the strengths and weaknesses of each version, with a particular emphasis on scalability.
Inductive Learning of Answer Set Programs
TLDR
A new paradigm for ILP is proposed that integrates existing notions of brave and cautious semantics within a unifying learning framework whose inductive solutions are Answer Set Programs and examples are partial interpretations.
Structured machine learning: the next ten years
TLDR
The goal of the current paper is to consider these emerging trends and chart out the strategic directions and open problems for the broader area of structured machine learning for the next 10 years.
Learning large logic programs by going beyond entailment
TLDR
Brute, a new ILP system that uses best-first search guided by an example-dependent loss function to incrementally build programs, can substantially outperform existing ILP systems in both predictive accuracy and learning time.
ILP: A Short Look Back and a Longer Look Forward
TLDR
The hypothesis is that progress in each of these areas can greatly improve the contributions that can be made with ILP, and that, with assistance from researchers in other areas, significant progress is possible.
DeepProbLog: Neural Probabilistic Logic Programming
TLDR
This work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
...