
I present several computational complexity results for propositional STRIPS planning, i.e., STRIPS planning restricted to ground formulas. Different planning problems can be defined by restricting the type of formulas, by placing limits on the number of pre- and postconditions, by restricting negation in pre- and postconditions, and by requiring optimal plans. For… (More)

I describe several computational complexity results for planning, some of which identify tractable planning problems. The model of planning, called "propositional planning," is simple: conditions within operators are literals with no variables allowed. The different planning problems are defined by different restrictions on the preconditions and… (More)
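The propositional planning model described above can be made concrete with a small sketch. All names here (`Operator`, `applicable`, `run_plan`) and the example domain are illustrative assumptions, not taken from the papers; the sketch only shows the model's ingredients: operators whose pre- and postconditions are variable-free literals, applied in sequence to a set of true propositions.

```python
# Hypothetical sketch of the propositional planning model: operators have
# literal preconditions and postconditions, and a plan is a sequence of
# operators applied to a state (a set of propositions assumed true).
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    pre_pos: frozenset   # propositions that must be true beforehand
    pre_neg: frozenset   # propositions that must be false beforehand
    add: frozenset       # propositions made true
    delete: frozenset    # propositions made false

def applicable(state, op):
    return op.pre_pos <= state and not (op.pre_neg & state)

def apply_op(state, op):
    return (state - op.delete) | op.add

def run_plan(state, plan, goal):
    """Check whether a sequence of operators achieves the goal set."""
    for op in plan:
        if not applicable(state, op):
            return False
        state = apply_op(state, op)
    return goal <= state

# Example: one operator that turns `door_closed` into `door_open`.
open_door = Operator("open", frozenset({"door_closed"}), frozenset(),
                     frozenset({"door_open"}), frozenset({"door_closed"}))
print(run_plan(frozenset({"door_closed"}), [open_door],
               frozenset({"door_open"})))  # True
```

The complexity results in these papers come from varying exactly the knobs visible here: how many literals appear in `pre_pos`/`pre_neg` and `add`/`delete`, and whether negated preconditions are allowed at all.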

The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity results demonstrating that this type of abduction is… (More)
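The abduction task described above can be sketched as a brute-force search. The names, the toy diagnosis data, and the plausibility model (product of independent hypothesis plausibilities) are assumptions for illustration only; note that the search enumerates every subset of hypotheses, which is exponential, in line with the intractability theme of the paper.

```python
# Illustrative brute-force abduction: find the most plausible combination
# of hypotheses whose combined coverage explains all the data.
from itertools import combinations
from math import prod

def best_explanation(data, hypotheses):
    """hypotheses: {name: (plausibility, set_of_data_it_explains)}."""
    best, best_score = None, -1.0
    names = list(hypotheses)
    for r in range(1, len(names) + 1):           # exponential enumeration
        for combo in combinations(names, r):
            covered = set().union(*(hypotheses[h][1] for h in combo))
            if data <= covered:                  # explains all the data
                score = prod(hypotheses[h][0] for h in combo)
                if score > best_score:
                    best, best_score = set(combo), score
    return best, best_score

hyps = {"flu": (0.3, {"fever", "cough"}),
        "cold": (0.5, {"cough"}),
        "measles": (0.1, {"fever", "rash"})}
print(best_explanation({"fever", "cough"}, hyps))  # ({'flu'}, 0.3)
```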

The ability to map the state of an object into a category in a classification hierarchy has long been an important part of many fields, for example, biology and medicine. Recently, AI languages are transforming AI theories into symbolic structures. This pattern can be seen in knowledge representation (for example, semantic nets and KL-ONE [Brachman… (More)

I present a probabilistic analysis of propositional STRIPS planning. The analysis considers two assumptions. One is that each possible precondition (likewise postcondition) of an operator is selected independently of other pre- and postconditions. The other is that each operator has a fixed number of preconditions (likewise postconditions). Under both… (More)

- Tom Bylander
- COLT
- 1997

This paper develops and analyzes a new online algorithm for learning linear functions, called the Binary Exponentiated Gradient algorithm (BEG). BEG imposes a lower and an upper bound on all the weights. Using Kivinen and Warmuth's methodology, the BEG algorithm is developed from a binary entropy distance function and the square loss function, and… (More)
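For context, a generic exponentiated gradient (EG) step for square loss in the style of Kivinen and Warmuth can be sketched as below. This is not BEG itself: the interval-bound bookkeeping specific to BEG is omitted, weights here are simply renormalized to the probability simplex, and the learning rate and data are made up.

```python
# One exponentiated gradient step for online linear regression with
# square loss: multiplicative weight update, then renormalization.
import math

def eg_update(w, x, y, eta=0.1):
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    grad = 2.0 * (y_hat - y)          # d(square loss)/d(y_hat)
    w_new = [wi * math.exp(-eta * grad * xi) for wi, xi in zip(w, x)]
    z = sum(w_new)
    return [wi / z for wi in w_new]   # project back onto the simplex

w = [0.25, 0.25, 0.25, 0.25]
for x, y in [([1, 0, 0, 0], 1.0), ([0, 1, 0, 0], 0.0)] * 50:
    w = eg_update(w, x, y)
print([round(wi, 2) for wi in w])     # weight shifts toward feature 0
```

The multiplicative form of the update is what the entropy distance function buys: it keeps weights positive and tends to concentrate them on relevant features quickly.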

For two-class datasets, we provide a method for estimating the generalization error of a bag using out-of-bag estimates. In bagging, each predictor (single hypothesis) is learned from a bootstrap sample of the training examples; the output of a bag (a set of predictors) on an example is determined by voting. The out-of-bag estimate is based on recording the… (More)
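The out-of-bag bookkeeping described above can be sketched in a few lines. The "predictor" here is a deliberate placeholder (it just predicts the majority class of its bootstrap sample, ignoring features) so that only the OOB mechanics are shown; it is not the paper's method. Each example is scored only by predictors whose bootstrap sample omitted it, and those votes estimate the error of the bag.

```python
# Minimal out-of-bag (OOB) error estimate for a two-class dataset.
import random

def majority(labels):
    return max(set(labels), key=labels.count)

def oob_error(y, n_predictors=25, seed=0):
    rng = random.Random(seed)
    n = len(y)
    votes = [[] for _ in range(n)]                    # OOB votes per example
    for _ in range(n_predictors):
        sample = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        # Placeholder predictor: majority class of its own bootstrap sample.
        pred = majority([y[i] for i in sample])
        for i in set(range(n)) - set(sample):          # out-of-bag examples
            votes[i].append(pred)
    scored = [i for i in range(n) if votes[i]]
    wrong = sum(majority(votes[i]) != y[i] for i in scored)
    return wrong / len(scored)

y = [1] * 14 + [0] * 6            # toy 70/30 two-class labels
print(oob_error(y))
```

Since each example is left out of a bootstrap sample with probability about (1 - 1/n)^n ≈ 0.37, nearly every example collects several OOB votes, which is what makes the estimate usable without a held-out set.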