A tutorial on support vector regression

@article{Smola2004ATO,
  title={A tutorial on support vector regression},
  author={Alex Smola and Bernhard Sch{\"o}lkopf},
  journal={Statistics and Computing},
  year={2004},
  volume={14},
  pages={199--222}
}
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from an SV perspective.
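As a hedged illustration of the function-estimation setting the tutorial describes, the sketch below fits an ε-SVR to noisy 1-D data with scikit-learn; the RBF kernel and the values of C and epsilon are illustrative assumptions, not choices made in the paper.

```python
# Minimal eps-SVR sketch on synthetic 1-D data (assumes scikit-learn).
# Kernel and hyperparameters are illustrative, not from the tutorial.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sinc(X).ravel() + rng.normal(scale=0.1, size=200)

# epsilon sets the width of the insensitive tube; C trades off
# flatness of the estimate against deviations larger than epsilon.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)

print("number of support vectors:", model.support_vectors_.shape[0])
print("prediction at x=0:", model.predict([[0.0]])[0])
```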
Relevance regression learning with support vector machines
We propose a variant of two SVM regression algorithms, expressly tailored to exploit additional information summarizing the relevance of each data item as a measure of its relative…
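The cited variant is not in standard libraries, but the general idea of weighting each data item by a relevance score can be sketched with scikit-learn's sample_weight argument, which rescales each point's contribution to the C penalty; the relevance scores below are hypothetical.

```python
# Hedged sketch: per-sample relevance via SVR's sample_weight
# (a generic mechanism, not the cited paper's specific algorithm).
import numpy as np
from sklearn.svm import SVR

X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X).ravel()
relevance = np.linspace(0.1, 1.0, 50)  # hypothetical relevance scores

model = SVR(C=10.0, epsilon=0.05).fit(X, y, sample_weight=relevance)
```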
Support Vector Selection for Regression Machines
In this paper, we propose a method to select support vectors to improve the performance of support vector regression machines. First, the orthogonal least-squares method is adopted to evaluate the…
Kernel methods: a survey of current techniques
This tutorial surveys the subject with a principal focus on the most well-known models based on kernel substitution, namely support vector machines.
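A small sketch of kernel substitution, assuming scikit-learn: the Gram matrix is computed explicitly and handed to an SVM with kernel="precomputed", so any positive semidefinite kernel can be swapped in without touching the learner.

```python
# Hedged sketch of kernel substitution with a precomputed Gram matrix.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

X = np.random.default_rng(3).normal(size=(80, 4))
y = np.sin(X[:, 0])

K = rbf_kernel(X, gamma=0.5)  # swap in any PSD kernel matrix here
model = SVR(kernel="precomputed").fit(K, y)
y_hat = model.predict(K)      # prediction also takes kernel values
```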
A general formulation for support vector machines
  • Wei Chu, S. Keerthi, C. Ong
  • Mathematics
  • Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02.
  • 2002
In this paper, we derive a general formulation of support vector machines for classification and regression, respectively. An L_e loss function is proposed as a patch of the L_1 and L_2 soft…
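The exact L_e definition is in the cited paper; as a hedged illustration of the "patch of L_1 and L_2" idea, the sketch below combines an ε-insensitive zone with a Huber-style loss that is quadratic near zero and linear in the tails.

```python
# Hedged sketch of a loss that patches L1 and L2 behaviour around an
# epsilon-insensitive zone (an illustration, not the paper's exact L_e).
import numpy as np

def patched_loss(residual, eps=0.1, delta=1.0):
    r = np.maximum(np.abs(residual) - eps, 0.0)   # insensitive zone
    quadratic = 0.5 * r ** 2                      # L2 branch near zero
    linear = delta * (r - 0.5 * delta)            # L1 branch in the tails
    return np.where(r <= delta, quadratic, linear)

print(patched_loss(np.array([-2.0, 0.05, 0.5, 3.0])))
```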
Complex support vector regression
This work employs the recently presented Wirtinger's calculus on complex RKHS to compute the Lagrangian and derive the dual problem, and proves that this approach is equivalent to solving two real SVR problems with a specific real kernel.
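Since the abstract states the complex problem is equivalent to two real SVR problems, a naive hedged sketch (ignoring the paper's specific real kernel) fits the real and imaginary parts of the targets separately:

```python
# Hedged sketch: complex-valued regression via two real SVR fits
# (the cited equivalence uses a specific real kernel; this is the
# plain split into real and imaginary parts, for illustration only).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X[:, 0] + 1j * X[:, 1]            # hypothetical complex targets

svr_re = SVR().fit(X, y.real)
svr_im = SVR().fit(X, y.imag)
y_hat = svr_re.predict(X) + 1j * svr_im.predict(X)
```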
Computing the Solution Path for the Regularized Support Vector Regression
An algorithm is derived that computes the entire solution path of support vector regression at the same computational cost as fitting one SVR model, which allows convenient selection of the regularization parameter.
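For contrast, the naive alternative to a path algorithm refits the model on a grid of regularization values; a hedged sketch with scikit-learn:

```python
# Hedged sketch of the naive grid approach that the path algorithm
# avoids: one full SVR fit per regularization value.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=100)

for C in np.logspace(-2, 2, 9):
    n_sv = SVR(C=C, epsilon=0.1).fit(X, y).support_.size
    print(f"C={C:8.3f}  support vectors={n_sv}")
```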
Inference for Support Vector Regression under ℓ1 Regularization
We provide large-sample distribution theory for support vector regression (SVR) with ℓ1-norm regularization, along with error bars for the SVR regression coefficients. Although a classical Wald confidence interval…
A Note on Least Squares Support Vector Machines
In this paper, we propose some improvements to the implementation of least squares support vector machine (LS-SVM) classifiers. An improved conjugate gradient scheme is proposed for solving the…
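A hedged sketch of the kind of solve involved, assuming the standard Suykens-style LS-SVM regression formulation: the indefinite KKT system is reduced to two positive-definite systems in K + I/γ, each solvable by conjugate gradient.

```python
# Hedged LS-SVM sketch: reduce the KKT system to two positive-definite
# conjugate-gradient solves with A = K + I/gamma (assumes the standard
# Suykens formulation, not the cited paper's exact scheme).
import numpy as np
from scipy.sparse.linalg import cg

def lssvm_fit(K, y, gamma=10.0):
    n = len(y)
    A = K + np.eye(n) / gamma        # positive definite for PSD K
    eta, _ = cg(A, np.ones(n))       # solve A eta = 1
    nu, _ = cg(A, y)                 # solve A nu  = y
    b = nu.sum() / eta.sum()         # bias from the equality constraint
    alpha = nu - b * eta             # dual coefficients
    return alpha, b
```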
Optimal Selection of the Regression Kernel Matrix with Semidefinite Programming
Preliminary experimental results are presented in which the optimal kernel matrix for support vector machine regression is retrieved via semidefinite programming.
Leave-One-Out Bounds for Support Vector Regression Model Selection
Experiments demonstrate that the proposed bounds are competitive with Bayesian SVR for parameter selection; the differentiability of the leave-one-out bounds is also discussed.

References

Showing 1–10 of 286 references
A Tutorial on Support Vector Regression Produced as Part of the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2 27150
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for regression and function estimation. Furthermore, we include a summary of currently used algorithms…
The Relevance Vector Machine
The Relevance Vector Machine is introduced: a Bayesian treatment of a generalised linear model of identical functional form to the SVM. Examples demonstrate that, for comparable generalisation performance, the RVM requires dramatically fewer kernel functions.
Regression estimation with support vector learning machines
Support vector learning machines can construct spline approximations of given data with training complexity independent of the number of input dimensions and only linear complexity for reconstruction, compared to exponential complexity in conventional methods.
The Support Vector Method
  • V. Vapnik
  • Mathematics, Computer Science
  • ICANN
  • 1997
The general idea of the Support Vector method is described, and theorems are presented demonstrating that the generalization ability of the SV method is based on factors which classical statistics do not take into account.
New Support Vector Algorithms
A new class of support vector algorithms for regression and classification is described in which a parameter ν effectively eliminates one of the other free parameters of the standard algorithm: the accuracy parameter ε in the regression case, and the regularization constant C in the classification case.
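A hedged sketch of the ν idea using scikit-learn's NuSVR, where ν upper-bounds the fraction of errors and lower-bounds the fraction of support vectors (the data and settings are illustrative):

```python
# Hedged nu-SVR sketch with scikit-learn's NuSVR: nu replaces the
# accuracy parameter epsilon of standard eps-SVR.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=150)

model = NuSVR(nu=0.3, C=1.0).fit(X, y)
print("fraction of support vectors:", model.support_.size / len(y))
```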
Support Vector Regression Machines
This work compares support vector regression (SVR) with a committee regression technique (bagging) based on regression trees, and with ridge regression done in feature space. SVR is expected to have advantages in high-dimensional settings because its optimization does not depend on the dimensionality of the input space.
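A hedged sketch of that style of comparison with scikit-learn; the dataset, models, and scores are illustrative stand-ins, not the paper's experiments:

```python
# Hedged sketch comparing SVR against bagged regression trees on a
# synthetic benchmark (illustrative only, not the cited experiments).
from sklearn.datasets import make_friedman1
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=300, random_state=0)
models = [("SVR", SVR(C=10.0)),
          ("bagged trees", BaggingRegressor(DecisionTreeRegressor()))]
for name, est in models:
    print(name, cross_val_score(est, X, y, cv=5).mean())
```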
Bounds on Error Expectation for Support Vector Machines
It is proved that the value of the span is always smaller (and can be much smaller) than the diameter of the smallest sphere containing the support vectors, which was used in previous bounds.
Probabilistic kernel regression models
A class of flexible conditional probability models and techniques for classification and regression problems is introduced; the flexibility comes from the use of kernel functions, as in support vector machines, and the generality from dual formulations of standard regression models.
On the Optimal Parameter Choice for ν-Support Vector Machines
  • Ingo Steinwart
  • Mathematics, Computer Science
  • IEEE Trans. Pattern Anal. Mach. Intell.
  • 2003
It turns out that ν should be a close upper estimate of twice the optimal Bayes risk, provided that the classifier uses a so-called universal kernel such as the Gaussian RBF kernel.
Sparseness of Support Vector Machines
Lower (asymptotic) bounds on the number of support vectors are established, and several results are proved which are of great importance for the understanding of SVMs.