
- Thomas G. Dietterich, Richard H. Lathrop, Tomás Lozano-Pérez
- Artif. Intell.
- 1997

The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that… (More)
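The multiple-instance setting the abstract describes can be made concrete with a minimal sketch: under the standard assumption, a bag of instances is labeled positive iff at least one instance is positive, so any instance-level rule induces a bag-level classifier. The threshold rule below is a hypothetical placeholder, not one of the paper's algorithms.

```python
# Multiple-instance prediction sketch: a "bag" (example) is positive
# iff any of its instance feature values is classified positive.
def bag_predict(instance_predict, bag):
    return int(any(instance_predict(v) for v in bag))

# Hypothetical instance-level rule, purely illustrative.
instance_predict = lambda v: v > 0.5

print(bag_predict(instance_predict, [0.1, 0.9, 0.2]))  # → 1 (one positive instance)
print(bag_predict(instance_predict, [0.1, 0.3]))       # → 0 (no positive instance)
```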

- Thomas G. Dietterich, Ghulum Bakiri
- J. Artif. Intell. Res.
- 1995

Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨xᵢ, f(xᵢ)⟩. Existing approaches to multiclass learning problems include direct application of multiclass… (More)
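The error-correcting output coding idea from this paper reduces a k-class problem to binary problems: each class gets a binary codeword, one binary classifier is trained per bit, and a new example is assigned the class whose codeword is nearest in Hamming distance to the predicted bit string. A minimal decoding sketch, with illustrative codewords and bit predictions (not taken from the paper):

```python
def hamming(a, b):
    """Number of positions where two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(bit_predictions, codewords):
    """Return the class whose codeword is nearest to the predicted bits."""
    return min(codewords, key=lambda c: hamming(bit_predictions, codewords[c]))

# Toy 5-bit codewords for three classes.
codewords = {
    "A": (0, 0, 1, 1, 0),
    "B": (1, 0, 0, 1, 1),
    "C": (0, 1, 1, 0, 1),
}

# One bit flipped from B's codeword: decoding still recovers B.
print(ecoc_decode((1, 0, 0, 1, 0), codewords))  # → B
```

The error-correcting property comes from codeword separation: if the minimum pairwise Hamming distance is d, up to ⌊(d−1)/2⌋ individual bit classifiers can err and the correct class is still recovered.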

- Thomas G. Dietterich
- J. Artif. Intell. Res.
- 2000

This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a… (More)
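The additive decomposition the abstract mentions can be sketched in one line: in MAXQ, the value of invoking subtask a in parent task p at state s splits into the value of completing a itself plus a "completion" term for finishing p afterwards. The tables below are toy placeholders, not a full MAXQ implementation:

```python
# MAXQ additive decomposition sketch: Q(p, s, a) = V(a, s) + C(p, s, a).
# V and C here are hypothetical lookup tables, purely illustrative.
V = {("navigate", "s0"): 4.0}           # value of executing the child subtask
C = {("root", "s0", "navigate"): 3.0}   # value of completing the parent afterwards

def q_value(parent, state, subtask):
    return V[(subtask, state)] + C[(parent, state, subtask)]

print(q_value("root", "s0", "navigate"))  # → 7.0
```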

- Thomas G. Dietterich
- Neural Computation
- 1998

This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely used statistical tests are shown to have… (More)
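One of the tests reviewed in this article, McNemar's test, compares two classifiers on a shared test set using only their disagreements. A hedged sketch, with illustrative counts: n01 is the number of examples misclassified by A but not B, n10 the reverse; the statistic (with continuity correction) is referred to the chi-squared distribution with one degree of freedom, whose 0.05 critical value is 3.841.

```python
def mcnemar_statistic(n01, n10):
    """McNemar chi-squared statistic with continuity correction."""
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

# Hypothetical disagreement counts from a test set.
stat = mcnemar_statistic(n01=12, n10=30)
print(round(stat, 3))       # (|12-30| - 1)^2 / 42
print(stat > 3.841)         # True → reject "both classifiers have the same error rate"
```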

- Thomas G. Dietterich
- Machine Learning
- 2000

Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a “base” learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions… (More)
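The bagging half of this comparison can be sketched compactly: draw bootstrap replicates of the training set, fit one base learner per replicate, and predict by majority vote. The "base learner" below is a trivial one-feature threshold rule, purely illustrative, not the tree learner studied in the paper.

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Sample len(data) examples with replacement."""
    return [rng.choice(data) for _ in data]

def fit_stump(data):
    """Toy base learner: threshold at the midpoint of the two class means."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    if not xs0 or not xs1:  # degenerate bootstrap sample: predict its majority class
        majority = Counter(y for _, y in data).most_common(1)[0][0]
        return lambda x: majority
    t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 1 if x > t else 0

def bagged_predict(stumps, x):
    """Majority vote over the ensemble's predictions."""
    return Counter(s(x) for s in stumps).most_common(1)[0][0]

rng = random.Random(0)
data = [(x, 0) for x in (1.0, 1.2, 0.8)] + [(x, 1) for x in (3.0, 3.1, 2.9)]
stumps = [fit_stump(bootstrap(data, rng)) for _ in range(25)]
print(bagged_predict(stumps, 2.8))  # → 1
print(bagged_predict(stumps, 1.0))  # → 0
```

Each stump sees a slightly different bootstrap sample, so its threshold varies; the vote averages out that instability, which is exactly the effect Breiman's observation attributes bagging's success to.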

- Thomas G. Dietterich
- Multiple Classifier Systems
- 2000

Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ens… (More)

- Thomas G. Dietterich
- AI Magazine
- 1997

Machine Learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and… (More)

- Hussein Almuallim, Thomas G. Dietterich
- AAAI
- 1991

In many domains, an appropriate inductive bias is the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This paper defines and studies this bias. First, it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((1/ε)[ln(1/δ) + 2^p + p ln n]) training examples to guarantee PAC-learning a… (More)

- Thomas G. Dietterich
- ICML
- 1998

This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics—as a subroutine hierarchy—and a declarative semantics—as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on… (More)