The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that …
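The setting above can be sketched with the standard multiple instance labeling rule: a bag (example object) is positive exactly when at least one of its instances is. The function name below is illustrative, not from the paper.

```python
# Standard MI assumption: a bag of instance-level predictions is positive
# iff at least one instance is predicted positive.
def bag_label(instance_predictions):
    return any(instance_predictions)

# Three feature vectors describe the same object; one of them triggers.
print(bag_label([False, True, False]))  # -> True (the bag is positive)
print(bag_label([False, False]))        # -> False
```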
Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass learning problems include direct application of multiclass …
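One approach this line of work is known for is error-correcting output coding: each class gets a binary codeword, each bit is learned as a separate two-class problem, and a new example is assigned to the class whose codeword is nearest in Hamming distance. The 7-bit code for k = 4 classes below is a made-up illustration.

```python
# Hypothetical 7-bit output code for k = 4 classes; each column defines
# one binary subproblem learned by a separate two-class classifier.
CODE = [
    (1, 1, 1, 1, 1, 1, 1),  # class 0
    (0, 0, 0, 0, 1, 1, 1),  # class 1
    (0, 0, 1, 1, 0, 0, 1),  # class 2
    (0, 1, 0, 1, 0, 1, 0),  # class 3
]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Assign the class whose codeword is nearest in Hamming distance."""
    return min(range(len(CODE)), key=lambda k: hamming(CODE[k], bits))

# One flipped bit (position 5) still decodes to class 2.
print(decode((0, 0, 1, 1, 0, 1, 1)))  # -> 2
```

Because the codewords are well separated, a few wrong bit predictions can be corrected at decoding time.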
This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a …
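The additive combination can be sketched as follows: in the MAXQ decomposition, the value of taking action a in subtask i at state s splits into the child task's own value plus a "completion" term for finishing the parent task afterwards. All task names and numbers below are made up for illustration.

```python
# Illustrative tables for the two additive parts of the MAXQ decomposition.
V = {("navigate", "s0"): 4.0}          # expected reward of the child task
C = {("root", "s0", "navigate"): 2.5}  # expected reward after it completes

def q_value(task, state, action):
    """Q(i, s, a) = V(a, s) + C(i, s, a) in the MAXQ decomposition."""
    return V[(action, state)] + C[(task, state, action)]

print(q_value("root", "s0", "navigate"))  # -> 6.5
```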
This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely used statistical tests are shown to have …
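One of the tests the article covers, McNemar's test, can be sketched from the counts of test examples that exactly one of the two algorithms misclassifies. The example counts below are invented.

```python
# McNemar's test on paired classifier errors: n01 and n10 count examples
# misclassified by only the first or only the second algorithm.
def mcnemar_statistic(n01, n10):
    """Chi-squared statistic with continuity correction; compare against
    3.84 for significance at the 0.05 level (1 degree of freedom)."""
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

# (|20 - 12| - 1)^2 / 32 = 49 / 32
print(mcnemar_statistic(20, 12))  # -> 1.53125, not significant at 0.05
```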
Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a "base" learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions …
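A minimal sketch of randomizing an internal decision, assuming a decision-tree learner that would otherwise take the single best split: pick uniformly among the top-scoring candidate splits instead, so repeated runs yield different trees. The gain values and `k` are illustrative.

```python
import random

def choose_split(candidates, score, k=3, rng=random):
    """Pick uniformly among the k best-scoring candidate splits rather
    than deterministically taking the single best one."""
    best = sorted(candidates, key=score, reverse=True)[:k]
    return rng.choice(best)

# With a gain function, any of the 3 highest-gain thresholds may be chosen.
gains = {0.2: 0.1, 0.4: 0.8, 0.6: 0.7, 0.8: 0.9}
print(choose_split(list(gains), gains.get))  # one of 0.8, 0.4, 0.6
```

Training many trees this way, then voting them, gives a diverse ensemble without touching the training data.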
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles …
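The weighted vote described above can be sketched directly; the stub classifiers and weights below are illustrative stand-ins for learned models.

```python
from collections import Counter

def weighted_vote(classifiers, weights, x):
    """Classify x by summing each classifier's weight onto its prediction
    and returning the label with the largest total."""
    totals = Counter()
    for clf, w in zip(classifiers, weights):
        totals[clf(x)] += w
    return totals.most_common(1)[0][0]

# Two agreeing light-weight voters (0.4 + 0.4) outweigh one heavy voter.
clfs = [lambda x: "a", lambda x: "a", lambda x: "b"]
print(weighted_vote(clfs, [0.4, 0.4, 0.7], 5))  # -> "a" (0.8 vs 0.7)
```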