Doug Fisher

The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI).
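The restructuring idea behind incremental induction can be sketched in miniature. This is not Utgoff's ITI algorithm itself: where ITI restructures in place via tree transpositions, this naive stand-in keeps counts over all stored examples, re-checks the installed root test after each new example, and "restructures" (re-splits) only when a different test becomes best. All names and the Gini criterion are illustrative assumptions.

```python
from collections import Counter

def best_test(examples, features):
    """Pick the boolean feature whose split most reduces Gini impurity."""
    def gini(rows):
        if not rows:
            return 0.0
        counts = Counter(label for _, label in rows)
        n = len(rows)
        return 1.0 - sum((c / n) ** 2 for c in counts.values())
    best, best_gain = None, -1.0
    base = gini(examples)
    for f in features:
        left = [e for e in examples if e[0][f]]
        right = [e for e in examples if not e[0][f]]
        n = len(examples)
        gain = base - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)
        if gain > best_gain:
            best, best_gain = f, gain
    return best

class IncrementalTree:
    """Root-level sketch: after each new example, keep the installed test
    only while it is still the best one; otherwise restructure."""
    def __init__(self, features):
        self.features = features
        self.examples = []
        self.test = None
        self.rebuilds = 0

    def add(self, x, label):
        self.examples.append((x, label))
        new_test = best_test(self.examples, self.features)
        if new_test != self.test:   # installed test is no longer best
            self.test = new_test    # "restructure" by re-splitting here
            self.rebuilds += 1
```

The point of the sketch is the cost model: most additions leave the installed test best and touch nothing, so restructuring work is paid only when the statistics actually shift.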
To be effective, a navigation planner must have knowledge not only of the effects an action will have, but also the effects that the environment will have on that action (e.g. the robot may travel more slowly over rough terrain). To address this issue, we have developed an approach called ERA which uses regression tree induction to learn action models that …
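The rough-terrain example above can be made concrete with a toy sketch of the kind of model regression tree induction could learn: travel time for a drive action as a function of terrain roughness. A single-split regression stump stands in for a full regression tree, and the feature, threshold search, and data are illustrative assumptions, not the ERA system's actual representation.

```python
def fit_stump(xs, ys):
    """Depth-1 regression tree: choose the threshold on x that minimizes
    the total squared error of the two resulting leaf means."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            lm = sum(left) / len(left) if left else 0.0
            rm = sum(right) / len(right) if right else 0.0
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

# Hypothetical travel-time observations: smooth terrain is fast,
# rough terrain (roughness above ~0.5) is slow.
roughness = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
seconds   = [4.0, 4.2, 3.8, 9.5, 10.1, 10.4]
model = fit_stump(roughness, seconds)
```

A planner holding such a model can cost the same action differently per terrain cell, which is exactly the environment-dependent effect the abstract describes.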
Learning methods vary in the optimism or pessimism with which they regard the informativeness of learned knowledge. Pessimism is implicit in hypothesis testing, where we wish to draw cautious conclusions from experimental evidence. However, this paper demonstrates that optimism in the utility of derived rules may be the preferred bias for learning systems …
We investigate the problem of keeping the plans of multiple agents synchronized during execution. We assume that agents only have a partial view of the overall plan. They know the tasks they must perform, and know the tasks of other agents with whom they have direct dependencies. Initially, agents are given a schedule of tasks to perform together with a …
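The "partial view" setup described above can be sketched as a projection of a global plan: from a task-to-agent assignment plus dependency edges, each agent sees only its own tasks and the tasks linked to them by a direct dependency. The plan encoding and all names here are illustrative assumptions, not the paper's formalism.

```python
def partial_view(agent, assignment, deps):
    """assignment: task -> agent; deps: list of (before, after) task pairs.
    Return the set of tasks visible to `agent`: its own tasks plus any
    task with a direct dependency into or out of them."""
    mine = {t for t, a in assignment.items() if a == agent}
    visible = set(mine)
    for before, after in deps:
        if after in mine:    # a task this agent must wait for
            visible.add(before)
        if before in mine:   # a task waiting on this agent
            visible.add(after)
    return visible

# Hypothetical three-agent plan: scan, then drill, then seal.
assignment = {'scan': 'A', 'drill': 'B', 'seal': 'C'}
deps = [('scan', 'drill'), ('drill', 'seal')]
```

Under this projection agent B sees all three tasks (it sits on both dependencies), while agent A never learns that `seal` exists, which is what makes keeping the schedules synchronized during execution nontrivial.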
We present a way of approximating the posterior probability of a rule-set model that is comprised of a set of class descriptions. Each class description, in turn, consists of a set of relational rules. The ability to compute this posterior and to learn many models from the same training set allows us to approximate the expectation that an example to be …
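The averaging step this abstract builds toward can be sketched as posterior-weighted voting: learn several models from the same training set, weight each by an approximate (here unnormalized) posterior, and estimate class probabilities for a new example from the weighted votes. The toy "rule-set" models, their posterior values, and the example are illustrative assumptions.

```python
def posterior_weighted_prediction(models, new_example):
    """models: list of (predict_fn, approx_posterior) pairs.
    Return a dict mapping each predicted class to its
    posterior-weighted probability mass."""
    total = sum(p for _, p in models)
    scores = {}
    for predict, p in models:
        label = predict(new_example)
        scores[label] = scores.get(label, 0.0) + p / total
    return scores

# Three toy models, each classifying by a single threshold rule,
# with made-up approximate posteriors.
models = [
    (lambda e: 'pos' if e['size'] > 3 else 'neg', 0.5),
    (lambda e: 'pos' if e['size'] > 5 else 'neg', 0.3),
    (lambda e: 'pos', 0.2),
]
probs = posterior_weighted_prediction(models, {'size': 4})
```

The resulting distribution is what lets one approximate quantities such as the expectation that a new example will be misclassified, rather than committing to any single learned model.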