The basic question addressed in this paper is: how can a learning algorithm cope with incorrect training examples? Specifically, how can algorithms that produce an “approximately correct” identification with “high probability” for reliable data be adapted to handle noisy data? We show that when the teacher may make independent random errors in classifying …
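The classification-noise model sketched above (a teacher that independently flips each label with some rate η < 1/2) is straightforward to simulate. The sketch below illustrates the minimize-disagreements strategy commonly used in this setting, not the paper's own procedure; the threshold concept class, the noise rate of 0.2, and all names are assumptions made for the example.

```python
import random

random.seed(0)
ETA = 0.2  # assumed label-flip rate, must be < 1/2

def target(x):
    """Hidden concept: a threshold at 0.6 on [0, 1)."""
    return x >= 0.6

def noisy_label(x):
    """Teacher's answer: the true label, flipped with probability ETA."""
    y = target(x)
    return (not y) if random.random() < ETA else y

# Draw a training sample through the noisy teacher.
sample = [(x, noisy_label(x)) for x in (random.random() for _ in range(2000))]

def disagreements(t):
    """How often threshold hypothesis t disagrees with the noisy labels."""
    return sum((x >= t) != y for x, y in sample)

# Pick the hypothesis that minimizes disagreements on the noisy sample.
best = min((i / 100 for i in range(101)), key=disagreements)
print(f"estimated threshold ~ {best:.2f} (true 0.60)")
```

Because the flips are independent, the true hypothesis still has the lowest expected disagreement rate, which is why minimizing disagreements recovers it given enough data.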
We show that the familiar explanation-based generalization (EBG) procedure is applicable to a large family of programming languages, including three families of importance to AI: logic programming (such as Prolog); lambda calculus (such as LISP); and combinator languages (such as FP). The main application of this result is to extend the algorithm to …
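EBG itself is not spelled out in the truncated abstract; as a point of reference, here is a minimal toy of the classic procedure: build a proof (explanation) that the training example satisfies the goal concept from a domain theory, then keep the proof's operational leaves with the example-specific constant replaced by a variable. The "cup" domain theory and all predicate names are hypothetical illustration.

```python
# Domain theory: Horn-style rules, conclusion -> list of premises.
RULES = {
    "cup": ["liftable", "stable", "open_vessel"],
    "liftable": ["light", "has_handle"],
}

# Observed facts about one training example, the object obj1.
FACTS = {("light", "obj1"), ("has_handle", "obj1"),
         ("stable", "obj1"), ("open_vessel", "obj1")}

def explain(goal, obj):
    """Build a proof tree showing goal holds for obj, or return None."""
    if goal in RULES:
        subproofs = [explain(p, obj) for p in RULES[goal]]
        return (goal, subproofs) if all(subproofs) else None
    return (goal, []) if (goal, obj) in FACTS else None

def leaves(proof):
    """Collect the operational (leaf) predicates of a proof tree."""
    goal, subs = proof
    return [goal] if not subs else [l for s in subs for l in leaves(s)]

proof = explain("cup", "obj1")
# Generalize the explanation: keep its leaf conditions, with the
# constant obj1 replaced by a variable, yielding a reusable rule.
print("cup(X) :-", ", ".join(f"{p}(X)" for p in leaves(proof)))
```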
Kohonen and others have devised network algorithms for computing so-called topological feature maps. We describe a new algorithm, called the CDF-Inversion (CDFI) Algorithm, that can be used to learn feature maps and, in the process, approximate an unknown probability distribution to within any specified accuracy. The primary advantages of the algorithm over …
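The abstract does not reach the algorithm's details, but the name points at the standard CDF-inversion technique: push uniform random draws through the inverse CDF to generate samples that approximate the unknown distribution. The sketch below shows only that underlying technique, not the CDFI Algorithm itself; the Gaussian stand-in and the empirical-quantile shortcut are assumptions.

```python
import random

random.seed(1)

# The unknown distribution is seen only through samples (a Gaussian
# stands in for it here).  Sorting the sample gives an empirical CDF.
data = sorted(random.gauss(0.0, 1.0) for _ in range(10_000))

def inverse_cdf(u):
    """Empirical quantile: roughly the smallest x with CDF(x) >= u."""
    return data[min(int(u * len(data)), len(data) - 1)]

# Pushing uniform draws through the inverse CDF yields new samples whose
# distribution approaches the data's as the sample size grows.
new_samples = [inverse_cdf(random.random()) for _ in range(5)]
print(new_samples)
```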
In [2], Hanani presents an algorithm to optimize the evaluation of Boolean expressions for each record of a large file. The principal idea is that the operands of the Boolean functions ∧ (AND) and ∨ (OR) can be evaluated in any order because of the commutativity and associativity of the operators; an optimal order, therefore, is one which minimizes the …
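The optimal order itself is cut off above, but the setting is the classic filter-ordering problem: under short-circuit evaluation, reorder commutative ∧/∨ operands so that cheap, highly selective tests run first. The sketch below uses a standard rule for a conjunction of independent predicates (evaluate in increasing order of cost divided by the probability of evaluating to false), assumed here for illustration rather than taken from [2]; all predicate names and numbers are made up.

```python
# Each predicate: (name, evaluation cost, probability it is True).
predicates = [
    ("status_ok", 1.0, 0.95),
    ("region_eu", 2.0, 0.30),
    ("flag_set",  0.5, 0.50),
]

def rank(pred):
    """Cost per unit probability of short-circuiting the conjunction."""
    _, cost, p_true = pred
    return cost / (1.0 - p_true)

def evaluate_and(record, preds):
    """AND of all predicates over one record, best filterers first."""
    for name, _, _ in sorted(preds, key=rank):
        if not record[name]:
            return False  # short-circuit: remaining operands are skipped
    return True

record = {"status_ok": True, "region_eu": False, "flag_set": True}
print(evaluate_and(record, predicates))
```

With these numbers the order becomes flag_set, region_eu, status_ok: the expensive-but-rarely-failing status_ok test is deferred, so most records are rejected before paying its cost.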