Sparse correlation kernel reconstruction

@article{Papageorgiou1999SparseCK,
  title={Sparse correlation kernel reconstruction},
  author={Constantine Papageorgiou and Federico Girosi and Tomaso A. Poggio},
  journal={1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258)},
  year={1999},
  volume={3},
  pages={1633-1636 vol.3}
}
  • C. Papageorgiou, F. Girosi, T. Poggio
  • Published 15 March 1999
  • Computer Science
  • 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258)
This paper presents a new paradigm for signal reconstruction and superresolution, correlation kernel analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use support vector machine (SVM) regression and compare this to traditional principal component… 
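As a rough illustration of the pipeline described in the abstract, the sketch below estimates a class correlation kernel from example signals and then uses epsilon-SVR with that precomputed kernel to pick a sparse set of basis functions for reconstruction. It is a toy sketch, not the authors' implementation: the synthetic signals, the choice of C and epsilon, and the use of scikit-learn's SVR are all assumptions made purely for illustration.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_examples, n_pixels = 200, 64
# Toy stand-in for a class of signals (real class data would go here).
class_signals = rng.standard_normal((n_examples, n_pixels)).cumsum(axis=1)
target = class_signals[0]                      # signal to reconstruct

# Empirical correlation kernel of the class: K[i, j] ~ E[ s(i) * s(j) ] over the class.
K = class_signals.T @ class_signals / n_examples

# epsilon-SVR with the precomputed correlation kernel; the epsilon tube controls how
# sparse the selected set of basis functions (the support vectors) ends up being.
svr = SVR(kernel="precomputed", C=10.0, epsilon=0.1)
svr.fit(K, target)                             # rows/columns of K index pixel locations

reconstruction = svr.predict(K)
print("bases kept:", len(svr.support_), "of", n_pixels)
print("reconstruction error:", float(np.linalg.norm(reconstruction - target)))

Shrinking epsilon trades sparsity for reconstruction accuracy, which is the knob that distinguishes this kind of sparse selection from a dense expansion over all basis functions.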

Citations

Algorithms for training large-scale linear programming support vector regression and classification

The main contribution of this dissertation is the development of a method to train a Support Vector Regression (SVR) model for the large-scale case where the number of training samples supersedes the

Linear dependency between ε and the input noise in ε-support vector regression

TLDR
The resultant predicted choice of ε is much closer to the experimentally observed optimal value, while again demonstrating a linear trend with the input noise.
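A minimal sketch of what such a dependency means in practice, assuming one simply scales the epsilon tube with an estimated noise level; the smoothing-based noise estimate and the 0.6 scale factor are arbitrary placeholders, not values from the cited work.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)[:, None]
sigma = 0.2                                           # true noise level of the toy data
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0.0, sigma, size=200)

# Crude noise estimate from the residual of a short moving-average smoother.
sigma_hat = np.std(y - np.convolve(y, np.ones(5) / 5, mode="same"))

# Pick epsilon proportional to the estimated noise (linear trend, placeholder factor).
svr = SVR(kernel="rbf", C=10.0, epsilon=0.6 * sigma_hat)
svr.fit(x, y)
print("estimated noise:", round(float(sigma_hat), 3),
      "epsilon used:", round(float(svr.epsilon), 3),
      "support vectors:", len(svr.support_))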

References

Sparse Correlation Kernel Analysis and Reconstruction

TLDR
A new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions, which concludes that when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution.

A general framework for object detection

TLDR
A general trainable framework for object detection in static images of cluttered scenes based on a wavelet representation of an object class derived from a statistical analysis of the class instances and a motion-based extension to enhance the performance of the detection algorithm over video sequences is presented.

Object and pattern detection in video sequences

TLDR
A general trainable framework for object detection in static images of cluttered scenes, together with a novel motion-based extension that enhances performance over video sequences; the extension rests on the realization that in regions of motion the likely classes of objects are limited, so the strictness of the classifier can be relaxed.

Pedestrian detection using wavelet templates

TLDR
This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes and shows how the invariant properties and computational efficiency of the wavelet template make it an effective tool for object detection.
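For context, wavelet-template features of this kind are typically built from Haar-like responses computed over an integral image; the sketch below is a generic illustration with arbitrary window sizes, not the paper's exact wavelet parameterization.

import numpy as np

def integral_image(img):
    """Summed-area table, so any rectangle sum costs four lookups."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image ii (exclusive upper bounds)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_vertical(ii, r, c, h, w):
    """Vertical-edge Haar response: left half minus right half of an h x w window."""
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.random.rand(32, 32)          # toy image patch
ii = integral_image(img)
print(haar_vertical(ii, 4, 4, 8, 8))  # one wavelet response at one location/scale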

An Equivalence Between Sparse Approximation and Support Vector Machines

  • F. Girosi
  • Computer Science
    Neural Computation
  • 1998
TLDR
If the data are noiseless, the modified version of basis pursuit denoising proposed in this article is equivalent to SVM in the following sense: if applied to the same data set, the two techniques give the same solution, which is obtained by solving the same quadratic programming problem.
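Schematically (using generic notation rather than the article's), the equivalence relates ε-insensitive SVM regression to a basis pursuit denoising problem whose reconstruction error is measured in the RKHS norm induced by the kernel K:

\[
\min_{f \in H} \; \tfrac{1}{2}\|f\|_H^2 + C \sum_{i=1}^{\ell} |y_i - f(x_i)|_{\varepsilon}
\qquad\text{vs.}\qquad
\min_{a} \; \tfrac{1}{2}\Big\| f - \sum_{i=1}^{\ell} a_i K(\cdot, x_i) \Big\|_H^2 + \varepsilon \|a\|_1 .
\]

On noiseless data both reduce to the same quadratic program in the coefficients, which is the sense of equivalence summarized above.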

A Sparse Representation for Function Approximation

We derive a new general representation for a function as a linear combination of local correlation kernels at optimal sparse locations (and scales) and characterize its relation to principal component analysis (PCA).
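In symbols (a schematic form, not the paper's notation), the representation is

\[
f(x) \;\approx\; \sum_{i=1}^{n} a_i \, K(x, x_i),
\]

where K is the local correlation kernel of the signal class, the centers x_i lie at a sparse set of optimal locations (and scales), and most coefficients a_i are zero.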

Support Vector Machines: Training and Applications

TLDR
Preliminary results are presented from applying SVM to the problem of detecting frontal human faces in real images; the main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of stopping criteria for the algorithm.
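For reference, the sub-problems in such a decomposition come from the standard SVM dual, shown here schematically for classification with labels y_i in {-1, +1} (background knowledge, not the book's notation):

\[
\max_{\alpha}\; \sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_i \alpha_i y_i = 0 .
\]

Each iteration optimizes only a small working set of the α_i while the rest stay fixed, and the loop terminates once the optimality conditions hold to a chosen tolerance.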

Local feature analysis: A general statistical theory for object representation

Low-dimensional representations of sensory signals are key to solving many of the computational problems encountered in high-level vision. Principal component analysis (PCA) has been used in the pa...
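For contrast with the sparse correlation-kernel approach above, a minimal numpy sketch of the PCA baseline (toy data and an arbitrary number of components, purely for illustration):

import numpy as np

rng = np.random.default_rng(2)
signals = rng.standard_normal((200, 64)).cumsum(axis=1)   # toy class of signals
mean = signals.mean(axis=0)
centered = signals - mean

# Principal components from the SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 8                                                      # number of components (arbitrary)
components = vt[:k]

# Project one signal onto the top-k components and reconstruct it.
coeffs = (signals[0] - mean) @ components.T
reconstruction = mean + coeffs @ components
print("PCA reconstruction error:", float(np.linalg.norm(reconstruction - signals[0])))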

Spline Models for Observational Data

Foreword 1. Background 2. More splines 3. Equivalence and perpendicularity, or, what's so special about splines? 4. Estimating the smoothing parameter 5. 'Confidence intervals' 6. Partial spline

The Nature of Statistical Learning Theory

  • V. Vapnik
  • Computer Science
    Statistics for Engineering and Information Science
  • 2000
Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing