A new algorithm to automate inductive learning of default theories

@article{Shakerin2017ANA,
  title={A new algorithm to automate inductive learning of default theories},
  author={Farhad Shakerin and Elmer Salazar and Gopal Gupta},
  journal={Theory and Practice of Logic Programming},
  year={2017},
  volume={17},
  pages={1010--1026}
}
Abstract: In inductive learning of a broad concept, an algorithm should be able to distinguish concept examples from exceptions and noisy data. An approach through recursively finding patterns in exceptions turns out to correspond to the problem of learning default theories. Default logic is what humans employ in common-sense reasoning. Therefore, learned default theories are better understood by humans. In this paper, we present new algorithms to learn default theories in the form of non…
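The default theories the abstract describes pair a general rule with exceptions, the classic pattern being "birds fly unless they are abnormal (e.g., penguins)". As a purely illustrative sketch (the example and all names are mine, not the paper's algorithm), such a default rule under negation as failure can be evaluated like this:

```python
# Illustrative sketch of a default theory with an exception, evaluated
# under negation as failure. Not the paper's learning algorithm; it only
# shows the kind of rule such algorithms induce.
birds = {"tweety", "sam", "polly"}
penguins = {"polly"}          # the exceptional birds

def abnormal(x):
    # ab(X) :- penguin(X).          -- the exception
    return x in penguins

def flies(x):
    # fly(X) :- bird(X), not ab(X). -- the default rule
    return x in birds and not abnormal(x)

print(sorted(b for b in birds if flies(b)))  # prints ['sam', 'tweety']
```

The key point is the `not ab(X)` literal: the default applies to every bird for which abnormality cannot be proved, which is exactly the non-monotonic structure these learning algorithms target.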
Cumulative Scoring-Based Induction of Default Theories
TLDR: Introduces FOLD 2.0, an enhanced version of the recently developed FOLD algorithm and the first heuristic-based, scalable, and noise-resilient ILP system to induce answer set programs.
Induction of Non-monotonic Logic Programs To Explain Statistical Learning Models
TLDR: Presents a fast and scalable algorithm to induce non-monotonic logic programs from statistical learning models, showing a significant improvement in classification evaluation metrics and training running time over ALEPH, a state-of-the-art Inductive Logic Programming (ILP) system.
Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME
TLDR: Presents a heuristic-based algorithm to induce non-monotonic logic programs that explain the behavior of XGBoost-trained classifiers; the proposed approach is agnostic to the choice of ILP algorithm.
Heuristic Based Induction of Answer Set Programs, From Default theories to Combinatorial problems
TLDR: Extends previous work on learning stratified answer set programs with a single stable model to learning arbitrary programs with multiple stable models; the resulting system can induce non-monotonic logic programs for combinatorial problems such as graph coloring and N-queens.
White-box Induction From SVM Models: Explainable AI with Logic Programming
TLDR: Focuses on inducing logic programs that explain models learned by the support vector machine (SVM) algorithm, developing an algorithm that captures the SVM model's underlying logic and outperforms other ILP algorithms in the number of induced clauses and in classification evaluation metrics.
Whitebox Induction of Default Rules Using High-Utility Itemset Mining
TLDR: Presents a fast and scalable algorithm to induce non-monotonic logic programs from statistical learning models, showing a significant improvement in classification evaluation metrics and training time over ALEPH, a state-of-the-art Inductive Logic Programming (ILP) system.
A Clustering and Demotion Based Algorithm for Inductive Learning of Default Theories
TLDR: Shows that a combination of K-means clustering and a demotion strategy produces a significant improvement on datasets with more than one cluster of positive examples; the induced program is also more concise, and therefore easier to understand, than those of the FOLD and ALEPH systems.
Induction of Non-Monotonic Rules From Statistical Learning Models Using High-Utility Itemset Mining
TLDR: Presents a fast and scalable algorithm to induce non-monotonic logic programs from statistical learning models, showing a significant improvement in classification evaluation metrics and training running time over ALEPH, a state-of-the-art Inductive Logic Programming (ILP) system.
FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data
TLDR: Experiments presented in this paper show that the improved FOLD-R++ algorithm is a significant improvement over the original design, and that the s(CASP) system can make predictions efficiently as well.
Formalizing Informal Logic and Natural Language Deductivism
TLDR: Shows how the answer set programming paradigm can be used to formalize all the concepts presented in Holloway and Wasson's primer, and argues that recent advances in formal logic facilitate the formalization of the human thought process.

References

Showing 1–10 of 43 references
Learning Non-Monotonic Logic Programs: Learning Exceptions
TLDR: Proves that the non-monotonic learning algorithm realizing these ideas converges asymptotically to the concept to be learned.
Representation of Incomplete Knowledge by Induction of Default Theories
TLDR: Proposes an operational method to inductively construct a default theory from a set of examples and background knowledge, relying on a generalization mechanism defined in the field of Inductive Logic Programming.
Experiments in Non-Monotonic Learning
TLDR: Presents a new inductively generated solution giving 100% predictive accuracy for the task of learning rules of illegality in the KRK chess endgame.
Induction from answer sets in nonmonotonic logic programs
TLDR: The proposed methods extend the present ILP techniques to a syntactically and semantically richer framework, and contribute to a theory of nonmonotonic ILP.
Distinguishing Exceptions From Noise in Non-Monotonic Learning
TLDR: Explores the use of an information-theoretic measure to decide whether to treat errors as noise or to include them as exceptions within a growing first-order theory, in the non-monotonic learning framework defined by Closed-World Specialisation.
Machine Invention of First Order Predicates by Inverting Resolution
TLDR: Presents a mechanism for automatically inventing and generalising first-order Horn clause predicates, implemented in a system called CIGOL, which uses incremental induction to augment incomplete clausal theories.
A Further Note on Inductive Generalization
TLDR: The algorithm for finding the least generalization of two clauses, given in Plotkin (1970), is developed into a theory of inductive generalization, guided by ideas from the philosophy of science.
Nonmonotonic abductive inductive learning (O. Ray, J. Appl. Log., 2009)
TLDR: Shows how ALP can be used to provide a semantics and proof procedure for nonmonotonic ILP that utilises practical methods of language and search bias to reduce the search space.
Stable ILP : Exploring the Added Expressivity of Negation in the Background Knowledge
We present stable ILP, a cross-disciplinary concept straddling machine learning and nonmonotonic reasoning. Stable models give meaning to logic programs containing negative assertions. In stable ILP, …
The Need for Biases in Learning Generalizations
TLDR: Defines the notion of bias in generalization problems, and shows that biases are necessary for the inductive leap.