- Pedro F. Felzenszwalb, David A. McAllester, Deva Ramanan
- 2008 IEEE Conference on Computer Vision and…
- 2008

This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily…

Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value…
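The idea sketched in the abstract, representing the policy by its own function approximator and following the gradient of expected reward, can be illustrated with a minimal REINFORCE-style sketch on a toy two-armed bandit. The arm payoffs, learning rate, and seed below are invented for illustration; they are not values from the paper.

```python
import math
import random

random.seed(0)
theta = [0.0, 0.0]          # policy parameters, one per arm
true_means = [0.2, 0.8]     # arm 1 pays off more often (illustrative values)
alpha = 0.1                 # learning rate

def softmax(t):
    z = [math.exp(x) for x in t]
    s = sum(z)
    return [x / s for x in z]

for _ in range(2000):
    p = softmax(theta)
    a = 0 if random.random() < p[0] else 1          # sample an action
    r = 1.0 if random.random() < true_means[a] else 0.0  # sample a reward
    # REINFORCE update: grad of log softmax policy is 1[k == a] - p[k]
    for k in range(2):
        g = (1.0 if k == a else 0.0) - p[k]
        theta[k] += alpha * r * g

# After training, the policy should prefer the better arm.
```

The policy itself is the object being optimized; no value function is ever estimated, which is the contrast the abstract draws with the standard approach.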

- David A. McAllester, David Rosenblitt
- AAAI
- 1991

This paper presents a simple, sound, complete, and systematic algorithm for domain independent STRIPS planning. Simplicity is achieved by starting with a ground procedure and then applying a general, and independently verifiable, lifting transformation. Previous planners have been designed directly as lifted procedures. Our ground procedure is a ground…

- Pedro F. Felzenszwalb, Ross B. Girshick, David A. McAllester
- 2010 IEEE Computer Society Conference on Computer…
- 2010

We describe a general method for building cascade classifiers from part-based deformable models such as pictorial structures. We focus primarily on the case of star-structured models and show how a simple algorithm based on partial hypothesis pruning can speed up object detection by more than one order of magnitude without sacrificing detection accuracy. In…
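The partial hypothesis pruning idea can be shown with a toy sketch: accumulate part scores in a fixed order and abandon a hypothesis as soon as the partial sum falls below the current stage's threshold, so later (expensive) parts are never evaluated. The scores and thresholds below are invented for the example, not the paper's learned per-stage thresholds.

```python
def cascade_score(part_scores, thresholds):
    """Score a hypothesis part by part; return None if pruned early."""
    total = 0.0
    for score, threshold in zip(part_scores, thresholds):
        total += score
        if total < threshold:
            return None  # pruned: remaining parts are never evaluated
    return total

# A strong hypothesis survives all stages; a weak one is pruned at stage 1.
strong = cascade_score([0.9, 0.8, 0.7], [0.5, 1.0, 2.0])
weak = cascade_score([0.1, 0.8, 0.7], [0.5, 1.0, 2.0])
```

The speedup comes from the early exit: most hypotheses in an image are weak and are rejected after scoring only their first, cheapest parts.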

- David A. McAllester, Robert E. Schapire
- COLT
- 2000

Good-Turing adjustments of word frequencies are an important tool in natural language modeling. In particular, for any sample of words, there is a set of words not occurring in that sample. The total probability mass of the words not in the sample is the so-called missing mass. Good showed that the fraction of the sample consisting of words that occur only…
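Good's estimate referenced in the abstract, that the missing mass is estimated by the fraction of the sample consisting of words occurring exactly once, is simple to compute. A minimal sketch (the word list is an invented example):

```python
from collections import Counter

def missing_mass_estimate(sample):
    """Good-Turing estimate of the total probability of unseen words:
    the fraction of the sample made up of words occurring exactly once."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# "c" and "d" each occur once in a sample of 7 words, so the
# estimated missing mass is 2/7.
words = ["a", "b", "a", "c", "d", "a", "b"]
estimate = missing_mass_estimate(words)
```

The paper's contribution is a concentration analysis of this estimator, i.e. how tightly it tracks the true missing mass.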

- David A. McAllester
- Machine Learning
- 1998

This paper gives PAC guarantees for “Bayesian” algorithms—algorithms that optimize risk minimization expressions involving a prior probability and a likelihood for the training data. PAC-Bayesian algorithms are motivated by a desire to provide an informative prior encoding information about the expected experimental setting but still having PAC performance…
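A representative statement of the kind of guarantee involved, written here as a sketch (this is one commonly quoted McAllester-style form; the exact constants differ across versions): with probability at least $1-\delta$ over an i.i.d. sample of size $m$, simultaneously for all posteriors $Q$,

$$
\mathbb{E}_{h\sim Q}\big[\mathrm{err}(h)\big]
\;\le\;
\mathbb{E}_{h\sim Q}\big[\widehat{\mathrm{err}}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{m}{\delta}}{2(m-1)}}
$$

where $P$ is the prior fixed before seeing the data, $\widehat{\mathrm{err}}$ is empirical error, and $\mathrm{KL}$ is the Kullback-Leibler divergence. An informative prior shrinks the $\mathrm{KL}$ term, which is how prior knowledge tightens the bound while the guarantee itself remains distribution-free.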

- David A. McAllester, Bart Selman, Henry A. Kautz
- AAAI/IAAI
- 1997

It is well known that the performance of a stochastic local search procedure depends upon the setting of its noise parameter, and that the optimal setting varies with the problem distribution. It is therefore desirable to develop general principles for tuning the procedures. We present two statistical measures of the local search process that allow one to…

The rule-based bootstrapping introduced by Yarowsky, and its co-training variant by Blum and Mitchell, have met with considerable empirical success. Earlier work on the theory of co-training has been only loosely related to empirically useful co-training algorithms. Here we give a new PAC-style bound on generalization error which justifies both the use of…

- David A. McAllester
- Machine Learning
- 2003

PAC-Bayesian learning methods combine the informative priors of Bayesian methods with distribution-free PAC guarantees. Stochastic model selection predicts a class label by stochastically sampling a classifier according to a “posterior distribution” on classifiers. This paper gives a PAC-Bayesian performance guarantee for stochastic model selection that is…

- Alexander T. Ihler, David A. McAllester
- AISTATS
- 2009

The popularity of particle filtering for inference in Markov chain models defined over random variables with very large or continuous domains makes it natural to consider sample-based versions of belief propagation (BP) for more general (tree-structured or loopy) graphs. Already, several such algorithms have been proposed in the literature. However, many…