Corpus ID: 18986102

Introduction to Support Vector Machines

@inproceedings{Boswell2002IntroductionTS,
  title={Introduction to Support Vector Machines},
  author={Dustin Boswell},
  year={2002}
}
Support Vector Machines (SVMs) are a relatively new learning method used for binary classification. The basic idea is to find a hyperplane which separates the d-dimensional data perfectly into its two classes. However, since example data is often not linearly separable, SVMs introduce the notion of a “kernel induced feature space” which casts the data into a higher dimensional space where the data is separable. Typically, casting into such a space would cause problems computationally, and… 
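
As a concrete illustration of the kernel trick described in the abstract (a minimal sketch, not from the paper; it assumes scikit-learn's SVC as one standard SVM implementation):

  from sklearn.datasets import make_circles
  from sklearn.svm import SVC

  # Concentric circles: not linearly separable in the 2-D input space.
  X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

  # An RBF kernel K(x, z) = exp(-gamma * ||x - z||^2) implicitly casts the
  # data into a higher-dimensional feature space where a separating
  # hyperplane exists. Only kernel values are ever computed; the feature
  # map itself is never formed explicitly.
  clf = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X, y)
  print(clf.score(X, y))  # near-perfect training accuracy

A linear kernel (kernel="linear") on the same data would perform at roughly chance level, which is exactly the situation the kernel-induced feature space is meant to resolve.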

Citations

Least squares support vector machine regression for discriminant analysis

Support vector machine (SVM) classifiers aim at constructing a large-margin classifier in the feature space, while a nonlinear decision boundary is obtained in the input space by mapping the inputs through a nonlinear feature map.
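
In symbols (a textbook formulation, not quoted from the work above), the resulting classifier in the input space is

\[ f(\mathbf{x}) = \operatorname{sign}\Bigl( \sum_{i=1}^{n} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b \Bigr), \]

where \( K(\mathbf{x}_i, \mathbf{x}) = \langle \varphi(\mathbf{x}_i), \varphi(\mathbf{x}) \rangle \); the boundary is a hyperplane in the feature space induced by \( \varphi \) but is nonlinear in the input space.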

Support Vector Machines

The intuition behind SVMs is explained first, followed by the primal problem, how the primal problem translates to the dual problem, and the kinds of problems and background knowledge this note covers.
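
For reference, the standard soft-margin primal and its dual (textbook formulations, not excerpts from the work above) are

\[ \min_{\mathbf{w}, b, \boldsymbol{\xi}} \; \tfrac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \]

\[ \max_{\boldsymbol{\alpha}} \; \sum_{i=1}^{n} \alpha_i - \tfrac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j) \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{n} \alpha_i y_i = 0. \]

The dual depends on the data only through kernel values \( K(\mathbf{x}_i, \mathbf{x}_j) \), which is what makes the translation from primal to dual worthwhile.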

SVM-like decision theoretical classification of high-dimensional vectors

Optimal Parameter Selection in Support Vector Machines

A nonlinear programming algorithm is applied to compute kernel and related parameters of a support vector machine (SVM) by a two-level approach, showing a significant reduction of the generalization error, an increase of the margin, and a reduction in the number of support vectors in all cases where the data sets are sufficiently large.

Fβ support vector machines

Fβ SVMs, a new parametrization of support vector machines that allows an SVM to be optimized in terms of Fβ, a classical information retrieval criterion, instead of the usual classification rate, is introduced.
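
For context, Fβ is the standard information-retrieval measure combining precision P and recall R:

\[ F_\beta = \frac{(1 + \beta^2)\, P R}{\beta^2 P + R}, \]

which reduces to the harmonic mean of precision and recall (the F1 score) when β = 1.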

Building Sparse Multiple-Kernel SVM Classifiers

Experiments on a large number of toy and real-world data sets show that the resultant classifier is compact and accurate, and can also be easily trained by simply alternating between a linear program and a standard SVM solver.
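
In the usual multiple-kernel setup (a standard formulation; the paper's exact parametrization may differ), the kernel is a nonnegative combination of base kernels,

\[ K(\mathbf{x}, \mathbf{z}) = \sum_{k=1}^{m} \mu_k K_k(\mathbf{x}, \mathbf{z}), \qquad \mu_k \ge 0, \]

and training alternates between fitting a standard SVM with the weights \( \mu_k \) fixed and updating the weights, e.g., by a linear program; driving some \( \mu_k \) to zero prunes entire kernels and keeps the classifier compact.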

Reduction Techniques for Training Support Vector Machines

This thesis shows that the formulation of each technique is already in the form of a linear SVM, discusses several suitable implementations, and indicates that in general the test accuracy of both techniques is slightly lower than that of the standard SVM.

Classification with Support Hyperplanes

To solve the binary classification task, the support hyperplanes (SHs) method considers the set of all hyperplanes that make no classification mistakes on the training data, referred to as semi-consistent hyperplanes, to achieve a good balance between goodness of fit and model complexity.
...

References

Showing 1–10 of 20 references

A Tutorial on Support Vector Machines for Pattern Recognition

  • C. Burges
  • Computer Science
    Data Mining and Knowledge Discovery
  • 1998
There are several arguments which support the observed high accuracy of SVMs, which are reviewed and numerous examples and proofs of most of the key theorems are given.

Training support vector machines: an application to face detection

  • E. Osuna, R. Freund, F. Girosi
  • Computer Science
    Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 1997
A decomposition algorithm that guarantees global optimality and can be used to train SVMs over very large data sets is presented, and the feasibility of the approach is demonstrated on a face detection problem that involves a data set of 50,000 data points.

An improved training algorithm for support vector machines

  • E. Osuna, R. Freund, F. Girosi
  • Computer Science
    Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop
  • 1997
This paper presents a decomposition algorithm that is guaranteed to solve the QP problem and that does not make assumptions on the expected number of support vectors.
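
In outline (a generic description of decomposition methods, not the authors' exact scheme): the dual variables are split into a small working set B and a fixed set N, the dual QP is solved over \( \alpha_B \) alone with \( \alpha_N \) held constant, a new working set is then chosen from points violating the KKT optimality conditions, and the process repeats until no violations remain. Each subproblem is small enough to solve exactly, so the full kernel matrix never needs to fit in memory.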

Support-Vector Networks

High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

A geometric approach to train support vector machines

  • Ming-Hsuan Yang, N. Ahuja
  • Computer Science
    Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662)
  • 2000
Experimental results show that the proposed method to extract a small superset of support vectors, which are called guard vectors, to construct the optimal decision surface is more efficient than conventional methods using QPs and requires much less memory.

Fast training of support vector machines using sequential minimal optimization (in Advances in Kernel Methods)

SMO breaks the large quadratic programming (QP) problem that arises in SVM training into a series of smallest-possible QP subproblems, which avoids a time-consuming numerical QP optimization in the inner loop; SMO is fastest for linear SVMs and sparse data sets.
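
Concretely (following Platt's derivation as standardly stated; notation ours): the smallest possible QP subproblem optimizes just two multipliers at a time and has a closed-form solution. With prediction errors \( E_i = f(\mathbf{x}_i) - y_i \) and curvature \( \eta = K_{11} + K_{22} - 2K_{12} \), the update is

\[ \alpha_2^{\text{new}} = \alpha_2 + \frac{y_2 (E_1 - E_2)}{\eta}, \]

clipped to the box implied by \( 0 \le \alpha_i \le C \) and the equality constraint \( \sum_i \alpha_i y_i = 0 \), after which \( \alpha_1 \) is adjusted to keep that constraint satisfied. No numerical QP solver is needed in the inner loop.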

A fast iterative nearest point algorithm for support vector machine classifier design

Comparative computational evaluation of the new fast iterative algorithm against powerful SVM methods such as Platt's sequential minimal optimization shows that the algorithm is very competitive.

Efficient SVM Regression Training with SMO

This work generalizes SMO so that it can handle regression problems, and addresses its shortcomings with several modifications that enable caching to be used effectively with SMO.

Making large scale SVM learning practical

This chapter presents algorithmic and computational results developed for SVM light V 2.0, which make large-scale SVM training more practical and give guidelines for the application of SVMs to large domains.

A training algorithm for optimal margin classifiers

A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including perceptrons, polynomials, and radial basis functions.