Birgit Tausend

Since each of the four main approaches to declarative bias representation in Inductive Logic Programming (ILP), namely representation by parameterized languages, by clause sets, by grammars, and by schemes, fails to represent all language biases in ILP systems, we present MILES-CTL, a unifying representation language for these biases.
In inductive learning, shifting the hypothesis representation language from attribute-value languages to the Horn clause logic used by Inductive Logic Programming systems results in a very complex hypothesis space. To reduce this complexity, most of these systems use biases. In this paper, we study the influence of these biases on the size…
The authors describe a method for learning disjunctive concepts represented as Horn clauses in a general-to-specific manner. They have identified a restricted class of Horn clauses for which positive examples are sufficient to detect overgeneral clauses. The method, developed and implemented in a system called INDICO, extracts as much constraining…
Restrictions on the number and depth of existential variables, as defined in the language series of Clint [3], or the ij-determinacy constraint of Golem [2], are widely used in ILP and are expected to produce a considerable reduction in the size of the hypothesis space (see [1] for an empirical comparison). In this paper we show that this expectation does not…