Publications
Object Detection with Discriminatively Trained Part Based Models
TLDR
We describe an object detection system based on mixtures of multiscale deformable part models that achieves state-of-the-art results in the PASCAL object detection challenges.
  • Citations: 8,587
  • Highly influential: 1,401
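The scoring rule behind part-based models can be illustrated in one dimension: a detection score combines a root filter response with the best part placement, trading part response against a quadratic deformation cost. This is a toy sketch, not the paper's implementation; all names and numbers are made up for illustration.

```python
# Tiny 1-D sketch of part-model scoring (illustrative only):
# score(root) = root filter response
#   + max over part placements of (part response - deformation cost).
def part_model_score(root_resp, part_resp, anchor, quad=0.1):
    """anchor: ideal part position relative to the root;
    quad: quadratic deformation penalty coefficient (both hypothetical)."""
    best = max(part_resp[p] - quad * (p - anchor) ** 2
               for p in range(len(part_resp)))
    return root_resp + best

part_resp = [0.0, 0.2, 1.0, 0.1]
# Part placed at p=2 wins: 1.0 - 0.1*(2-1)^2 = 0.9, total 0.5 + 0.9 = 1.4
print(part_model_score(0.5, part_resp, anchor=1))  # → 1.4
```

The max over placements is what the full system computes efficiently over a dense grid via generalized distance transforms.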
Policy Gradient Methods for Reinforcement Learning with Function Approximation
TLDR
Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable.
  • Citations: 3,354
  • Highly influential: 378
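The alternative the paper develops, following the policy gradient, can be sketched on a trivial problem: a softmax policy over two bandit arms updated by a REINFORCE-style estimator with a running baseline. This is an illustrative sketch under my own assumptions (bandit payoffs, learning rate), not the paper's algorithm.

```python
import numpy as np

# REINFORCE sketch on a 2-armed Bernoulli bandit (illustrative only).
# Gradient of expected reward is estimated as (r - baseline) * grad log pi(a),
# where grad_theta log softmax(a) = e_a - pi.
rng = np.random.default_rng(0)
theta = np.zeros(2)              # action preferences (policy parameters)
payoffs = np.array([0.2, 0.8])   # hypothetical arm success probabilities
lr, baseline = 0.1, 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    r = float(rng.random() < payoffs[a])   # sampled Bernoulli reward
    baseline += 0.01 * (r - baseline)      # running baseline reduces variance
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    theta += lr * (r - baseline) * grad_log_pi

print(softmax(theta))  # probability mass concentrates on the better arm
```

The policy improves directly in parameter space, with no intermediate value function required for action selection.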
A discriminatively trained, multiscale, deformable part model
TLDR
We propose a discriminatively trained, multiscale, deformable part model for object detection.
  • Citations: 2,371
  • Highly influential: 277
Cascade object detection with deformable part models
TLDR
We describe a general method for building cascade classifiers from part-based deformable models and show how a simple algorithm based on partial hypothesis pruning can speed up object detection by more than one order of magnitude.
  • Citations: 804
  • Highly influential: 99
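The pruning idea can be sketched generically: evaluate score terms from cheapest to most expensive and discard a hypothesis as soon as its accumulated partial score falls below a stage threshold. The stage functions and thresholds below are made up for illustration; they are not the paper's learned values.

```python
# Minimal sketch of cascade-style partial hypothesis pruning (illustrative).
def cascade_score(x, stages, thresholds):
    """stages: score functions ordered cheapest first;
    thresholds: cutoff on the accumulated score after each stage.
    Returns the full score, or None if the hypothesis is pruned early."""
    total = 0.0
    for stage, tau in zip(stages, thresholds):
        total += stage(x)
        if total < tau:      # prune before paying for later, costlier stages
            return None
    return total

stages = [lambda x: x - 1.0, lambda x: 0.5 * x]  # hypothetical stage scores
thresholds = [-0.5, 0.0]                          # hypothetical cutoffs
print(cascade_score(2.0, stages, thresholds))  # survives both stages → 2.0
print(cascade_score(0.0, stages, thresholds))  # pruned at stage 1 → None
```

The speedup comes from most hypotheses exiting at the cheap early stages, so the expensive part scores are computed for only a small fraction of candidates.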
Resolution Theorem Proving
TLDR
Lifting is one of the most important general techniques for accelerating difficult computations.
  • Citations: 442
  • Highly influential: 63
Systematic Nonlinear Planning
TLDR
This paper presents a simple, sound, complete, and systematic algorithm for domain-independent STRIPS planning by starting with a ground procedure and then applying a general, and independently verifiable, lifting transformation.
  • Citations: 712
  • Highly influential: 58
Some PAC-Bayesian Theorems
TLDR
This paper gives PAC guarantees for “Bayesian” algorithms—algorithms that optimize risk minimization expressions involving a prior probability and a likelihood for the training data.
  • Citations: 334
  • Highly influential: 53
Evidence for Invariants in Local Search
TLDR
This paper presents empirical evidence that such useful invariants (i.e., properties that hold across strategies and domains) do indeed exist.
  • Citations: 443
  • Highly influential: 52
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks
TLDR
We present and prove a margin-based generalization bound for feedforward neural networks that depends on the product of the spectral norms of the weights in each layer, as well as the Frobenius norms of the weights.
  • Citations: 264
  • Highly influential: 44
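The quantities the bound depends on are easy to compute for a given network: per-layer spectral norms (largest singular values), whose product bounds the network's Lipschitz constant, and Frobenius-to-spectral ratios. The combination below follows the common product-times-ratio-sum shape of such bounds, but the exact constants and exponents are a simplification, not the paper's statement.

```python
import numpy as np

# Illustrative sketch of the spectral quantities in the margin bound.
# Weights are random stand-ins for trained layers.
rng = np.random.default_rng(1)
weights = [rng.standard_normal((64, 64)) / 8.0 for _ in range(3)]

spectral = [np.linalg.norm(W, 2) for W in weights]      # largest singular value
frob = [np.linalg.norm(W, 'fro') for W in weights]      # Frobenius norm

lipschitz_bound = np.prod(spectral)      # product of spectral norms
ratio_term = sum((f / s) ** 2 for f, s in zip(frob, spectral))
capacity = lipschitz_bound ** 2 * ratio_term  # simplified capacity-style term

print(lipschitz_bound, capacity)
```

Normalizing by the margin achieved on the training data turns such a capacity term into a generalization guarantee.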