
- Dana Angluin, Philip D. Laird
- Machine Learning
- 1987

The basic question addressed in this paper is: how can a learning algorithm cope with incorrect training examples? Specifically, how can algorithms that produce an “approximately correct” identification with “high probability” for reliable data be adapted to handle noisy data? We show that when the teacher may make independent random errors in classifying…
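The noise model in this abstract (each label flipped independently with some fixed probability) can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy instance of the minimize-disagreements idea for a hypothetical class of threshold functions on [0,1], with all names invented here:

```python
import random

def noisy_sample(n, true_theta, eta, rng):
    """Draw n points uniform on [0,1], label by x >= true_theta,
    then flip each label independently with probability eta."""
    data = []
    for _ in range(n):
        x = rng.random()
        y = (x >= true_theta)
        if rng.random() < eta:   # the teacher's independent random error
            y = not y
        data.append((x, y))
    return data

def min_disagreement_threshold(data):
    """Pick the candidate threshold that disagrees with the fewest noisy labels."""
    candidates = [0.0] + [x for x, _ in data] + [1.0]
    return min(candidates,
               key=lambda t: sum((x >= t) != y for x, y in data))

rng = random.Random(0)
data = noisy_sample(1000, 0.6, 0.2, rng)
theta_hat = min_disagreement_threshold(data)  # close to 0.6 despite 20% label noise
```

Even with a fifth of the labels corrupted, minimizing disagreements recovers a threshold near the true one, which is the intuition behind adapting "approximately correct with high probability" guarantees to noisy data.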

- Philip D. Laird
- Machine Learning
- 1992

Learning from experience to predict sequences of discrete symbols is a fundamental problem in machine learning with many applications. We present a simple and practical algorithm (TDAG) for discrete sequence prediction. Based on a text-compression method, the TDAG algorithm limits the growth of storage by retaining the most likely prediction contexts and…
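The core idea of context-based sequence prediction can be sketched as follows. This is not the TDAG data structure itself (in particular, the storage-limiting pruning is omitted); it is a minimal back-off context model with names invented here:

```python
from collections import defaultdict

def train_contexts(seq, max_len=3):
    """Count next-symbol frequencies under every context of length 0..max_len."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq)):
        for k in range(max_len + 1):
            if i - k < 0:
                break
            counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def predict(counts, history, max_len=3):
    """Predict with the longest trained context, backing off to shorter ones."""
    for k in range(min(max_len, len(history)), -1, -1):
        ctx = tuple(history[len(history) - k:])
        if ctx in counts:
            return max(counts[ctx], key=counts[ctx].get)
    return None

model = train_contexts(list("abababab"))
next_sym = predict(model, list("ab"))  # 'a': every trained "ab" was followed by 'a'
```

TDAG's contribution, per the abstract, is keeping such a model practical by retaining only the most likely contexts rather than all of them, as this sketch does.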

- Philip D. Laird
- AAAI
- 1986

- Philip D. Laird, Evan Gamble
- AAAI
- 1990

We show that the familiar explanation-based generalization (EBG) procedure is applicable to a large family of programming languages, including three families of importance to AI: logic programming (such as Prolog); lambda calculus (such as LISP); and combinator languages (such as FP). The main application of this result is to extend the algorithm to…

- Philip D. Laird, Ronald Saul
- International Conference on Evolutionary…
- 1994

- Philip D. Laird
- COLT
- 1988

- Philip D. Laird, Ronald Saul, Peter Dunning
- COLT
- 1993

We study sequence extrapolation as an abstract learning problem. The task is to learn a stream, a semi-infinite sequence of values all of the same data type, from a finite initial segment (s1, s2, …, sn). We assume that all elements of the stream are of the same type (e.g., integers, strings, etc.). In order to represent the hypotheses, we define a language…

- Philip D. Laird, Evan Gamble
- Machine Learning
- 1991

Kohonen and others have devised network algorithms for computing so-called topological feature maps. We describe a new algorithm, called the CDF-Inversion (CDFI) Algorithm, that can be used to learn feature maps and, in the process, approximate an unknown probability distribution to within any specified accuracy. The primary advantages of the algorithm over…
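The CDF-inversion idea behind this abstract can be sketched briefly. This is not the CDFI Algorithm itself; it is just the underlying quantile principle, placing map units at empirical quantiles so each covers roughly equal probability mass, with names invented here:

```python
import random

def quantile_units(samples, k):
    """Place k map units at the empirical quantiles i/(k+1), i = 1..k.
    Inverting the empirical CDF this way gives units that each cover
    roughly equal probability mass of the unknown distribution."""
    xs = sorted(samples)
    n = len(xs)
    return [xs[int((i + 1) * n / (k + 1))] for i in range(k)]

rng = random.Random(1)
samples = [rng.random() for _ in range(10_000)]
units = quantile_units(samples, 4)  # near [0.2, 0.4, 0.6, 0.8] for uniform data
```

Growing the number of units refines the quantile grid, which is one way to see how a feature map can approximate the input distribution to any specified accuracy.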

- Philip D. Laird
- Commun. ACM
- 1979

In [2], Hanani presents an algorithm to optimize the evaluation of Boolean expressions for each record of a large file. The principal idea is that the operands of the Boolean functions ∧ (AND) and ∨ (OR) can be evaluated in any order because of the commutativity and associativity of the operators; an optimal order, therefore, is one which minimizes the…
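The operand-ordering problem in this abstract can be illustrated with a sketch. This is not Hanani's algorithm as presented in the paper; it is the classical rule for independent operands of a short-circuit AND (evaluate in increasing order of cost divided by probability of failing), with names invented here:

```python
def optimal_and_order(operands):
    """operands: list of (cost, p_true) with p_true < 1, assumed independent.
    For a short-circuit AND, sort by cost / (1 - p_true):
    cheap tests that are likely false should run first."""
    return sorted(operands, key=lambda cp: cp[0] / (1.0 - cp[1]))

def expected_and_cost(order):
    """Expected evaluation cost of a short-circuit AND in the given order."""
    cost, p_reached = 0.0, 1.0
    for c, p in order:
        cost += p_reached * c   # operand is evaluated only if all earlier ones were true
        p_reached *= p
    return cost

tests = [(1.0, 0.9), (2.0, 0.1), (0.5, 0.5), (3.0, 0.3)]
order = optimal_and_order(tests)
```

A simple exchange argument shows the ratio ordering minimizes the expected cost, which is the kind of optimality criterion the abstract refers to.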