Training linear SVMs in linear time

Abstract

Linear Support Vector Machines (SVMs) have become one of the most prominent machine learning techniques for high-dimensional sparse data commonly encountered in applications like text classification, word-sense disambiguation, and drug design. These applications involve a large number of examples <i>n</i> as well as a large number of features <i>N</i>, while each example has only <i>s</i> &lt;&lt; <i>N</i> non-zero features. This paper presents a Cutting-Plane Algorithm for training linear SVMs that provably has training time O(<i>sn</i>) for classification problems and O(<i>sn</i> log(<i>n</i>)) for ordinal regression problems. The algorithm is based on an alternative, but equivalent, formulation of the SVM optimization problem. Empirically, the Cutting-Plane Algorithm is several orders of magnitude faster than decomposition methods like SVM-Light for large datasets.
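The abstract's "alternative, but equivalent, formulation" refers to a one-slack reformulation of the SVM problem, which a cutting-plane method can solve by repeatedly adding the most violated constraint. Below is a minimal sketch of that idea; the function name `train_cutting_plane_svm`, the tolerance `eps`, and the use of `scipy.optimize.minimize` for the small restricted QP are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def train_cutting_plane_svm(X, y, C=1.0, eps=0.01, max_iter=50):
    """Illustrative one-slack cutting-plane trainer for a linear SVM.

    X: (n, N) array of examples; y: (n,) array of labels in {-1, +1}.
    Solves min 1/2 ||w||^2 + C*xi subject to, for each added cut (a, b):
    w.a >= b - xi, with xi >= 0.
    """
    n, N = X.shape
    cuts = []                        # list of (a, b) constraint pairs
    w, xi = np.zeros(N), 0.0

    for _ in range(max_iter):
        # Most violated constraint: select every example with margin < 1.
        margins = y * (X @ w)
        c = (margins < 1).astype(float)
        a = (c * y) @ X / n          # (1/n) sum_i c_i y_i x_i
        b = c.sum() / n              # (1/n) sum_i c_i
        if b - a @ w <= xi + eps:    # current cut satisfied to tolerance
            break
        cuts.append((a, b))

        # Solve the restricted QP over the cuts collected so far.
        def obj(z):
            w_, xi_ = z[:N], z[N]
            return 0.5 * w_ @ w_ + C * xi_
        cons = [{"type": "ineq",
                 "fun": (lambda z, a2=a2, b2=b2: z[:N] @ a2 - b2 + z[N])}
                for (a2, b2) in cuts]
        cons.append({"type": "ineq", "fun": lambda z: z[N]})  # xi >= 0
        res = minimize(obj, np.zeros(N + 1), constraints=cons)
        w, xi = res.x[:N], res.x[N]

    return w
```

Because each cut aggregates all currently misclassified-or-low-margin examples into a single constraint, the number of QP constraints stays small regardless of <i>n</i>, which is what makes the overall method fast; a toy run on four linearly separable points recovers a correct separating direction.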

DOI: 10.1145/1150402.1150429

