Corpus ID: 87929

A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification

@inproceedings{Grandvalet2005API,
  title={A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification},
  author={Yves Grandvalet and Johnny Mari{\'e}thoz and Samy Bengio},
  booktitle={NIPS},
  year={2005}
}
In this paper, we show that the hinge loss can be interpreted as the negative log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables us to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities…
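The interval-valued idea from the abstract can be illustrated schematically. The piecewise-linear shape below is an illustrative assumption for exposition only (the paper derives the actual mapping from the hinge loss): scores beyond the margin map to a definite posterior, while scores inside the margin map only to an interval of admissible posteriors.

```python
def interval_posterior(f, margin=1.0):
    """Schematic interval-valued mapping from an SVM score f to bounds
    (lo, hi) on P(y=1|x).  Scores at or beyond the margin collapse to a
    single value; inside the margin only an interval is returned.
    Illustrative only -- not the mapping derived in the paper."""
    if f >= margin:
        return (1.0, 1.0)
    if f <= -margin:
        return (0.0, 0.0)
    # Inside the margin: linear bounds, widest at f = 0 where the
    # interval is the whole of [0, 1].
    lo = max(0.0, f / margin)
    hi = min(1.0, 1.0 + f / margin)
    return (lo, hi)

for f in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f, interval_posterior(f))
```

The point of the construction is that a single score can be compatible with a whole set of posteriors, in contrast to point-valued calibrations such as Platt's sigmoid.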

Citing Papers

Sparse probabilistic classifiers

This work proposes an adaptation of maximum likelihood estimation, instantiated on logistic regression, which outputs proper conditional probabilities within a user-defined interval and is less precise elsewhere.

Support Vector Machines with a Reject Option

The problem of binary classification where the classifier may abstain instead of classifying each observation is considered, and the double hinge loss function that focuses on estimating conditional probabilities only in the vicinity of the threshold points of the optimal decision rule is derived.

Handling uncertainties in SVM classification

This paper addresses the pattern classification problem that arises when available target data include uncertainty information, taking class labels into account through a hinge loss and probability estimates through an ε-insensitive cost function, together with a minimum-norm (maximum-margin) objective.

An application of bayesian model averaging to histograms

A method for learning class posterior probability for one-dimensional input data using the Bayesian Model Averaging method, which achieves an optimal trade-off between complexity and accuracy by averaging over all possible models.

Support Vector Machines as Probabilistic Models

We show how the SVM can be viewed as a maximum likelihood estimate of a class of probabilistic models. This model class can be viewed as a reparametrization of the SVM in a similar vein to the ν-SVM.

Optimizing F-Measures by Cost-Sensitive Classification

A general reduction of F-measure maximization to cost-sensitive classification with unknown costs is presented, and an algorithm with provable guarantees to obtain an approximately optimal classifier for the F-measure is proposed by solving a series of cost-sensitive classification problems.
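The reduction sketched in that summary can be made concrete: thresholding the posterior at t is equivalent to cost-sensitive classification with cost ratio t/(1−t), so sweeping candidate thresholds amounts to solving a series of cost-sensitive problems and keeping the one with the best F1. This is a minimal sketch of that idea, not the cited paper's exact algorithm.

```python
def f1(preds, labels):
    """F1 score for boolean predictions against 0/1 labels."""
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(probs, labels, grid=21):
    """Approximate F1 maximization over posterior estimates `probs`.
    Each candidate threshold t corresponds to a cost-sensitive problem
    with cost ratio t/(1-t); we keep the threshold with the best F1."""
    best = (0.0, 0.5)  # (best F1 so far, its threshold)
    for i in range(1, grid):
        t = i / grid
        score = f1([p >= t for p in probs], labels)
        if score > best[0]:
            best = (score, t)
    return best

probs = [0.1, 0.2, 0.6, 0.9]
labels = [0, 0, 1, 1]
print(best_threshold(probs, labels))
```

On this toy data any threshold in (0.2, 0.6] separates the classes perfectly, so the sweep returns an F1 of 1.0 at the first such grid point.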

AIC and BIC based approaches for SVM parameter value estimation with RBF kernels

Two alternative approaches for calculating the likelihood functions required by the AIC and BIC formulas are presented, based on the distances of support vectors from the separating hyperplane and on the disposition of points in the kernel feature space.

Probabilistic Novelty Detection With Support Vector Machines

The development of a probabilistic calibration technique for one-class SVMs, such that online novelty detection may be performed in a probabilistic manner, and the demonstration of the advantages of the proposed method (in comparison to the conventional one-class SVM methodology) using case studies.
...

References

Showing 1–10 of 13 references

Probabilistic Methods for Support Vector Machines

I describe a framework for interpreting Support Vector Machines (SVMs) as maximum a posteriori (MAP) solutions to inference problems with Gaussian Process priors. This can provide intuitive…

Statistical behavior and consistency of classification methods based on convex risk minimization

This study sheds light on the good performance of some recently proposed linear classification methods, including boosting and support vector machines, and shows their limitations and suggests possible improvements.

Support Vector Machines for Classification in Nonstandard Situations

This paper explains why the standard support vector machine is not suitable for the nonstandard situation, and introduces a simple procedure for adapting the support vector machine methodology to the nonstandard situation.

Discriminant Analysis and Statistical Pattern Recognition

Provides a systematic account of the subject area, concentrating on the most recent advances in the field. While the focus is on practical considerations, both theoretical and practical issues are…

Combining Statistical Learning with a Knowledge-Based Approach - A Case Study in Intensive Care Monitoring

The paper describes a case study in combining different methods for acquiring medical knowledge. Given a huge amount of noisy, high-dimensional numerical time series data describing patients in…

Controlling the Sensitivity of Support Vector Machines

Two schemes for adjusting the sensitivity and specificity of Support Vector Machines are discussed, their performance is described using receiver operating characteristic (ROC) curves, and their use on real-life medical diagnostic tasks is illustrated.

Probabilities for SV Machines

This chapter contains sections titled: Introduction, Fitting a Sigmoid After the SVM, Empirical Tests, Conclusions, Appendix: Pseudo-code for the Sigmoid Training
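The "Fitting a Sigmoid After the SVM" step from Platt's chapter can be sketched in a few lines: fit p(y=1|f) = 1/(1 + exp(A·f + B)) to held-out SVM scores by minimizing the negative log-likelihood. This is a simplified sketch using plain gradient descent; Platt's actual pseudo-code uses a more robust Newton-style optimizer with regularized targets.

```python
import math

def fit_platt(scores, labels, lr=0.01, iters=2000):
    """Fit Platt's sigmoid p(y=1|f) = 1 / (1 + exp(A*f + B)) to SVM
    scores by gradient descent on the negative log-likelihood.
    `labels` are 0/1.  Simplified sketch of the chapter's procedure."""
    A, B = 0.0, 0.0
    for _ in range(iters):
        gA = gB = 0.0
        for f, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * f + B))  # current P(y=1|f)
            # NLL gradient; p is a *decreasing* function of A*f + B,
            # so d(NLL)/d(A) = (y - p) * f and d(NLL)/d(B) = (y - p).
            gA += (y - p) * f
            gB += (y - p)
        A -= lr * gA
        B -= lr * gB
    return A, B

def platt_prob(f, A, B):
    """Map an SVM score to a calibrated posterior estimate."""
    return 1.0 / (1.0 + math.exp(A * f + B))

A, B = fit_platt([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
print(platt_prob(2.0, A, B), platt_prob(-2.0, A, B))
```

In practice the sigmoid should be fit on scores from a held-out set (or cross-validation folds), not on the SVM's training scores, to avoid biased probabilities.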

Torch: a modular machine learning software library

Technical report EPFL-REPORT-82802. URL: http://publications.idiap.ch/downloads/reports/2002/rr02-46.pdf

Semi-Parametric Models