Cutting-plane training of structural SVMs


Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at .
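To make the abstract's idea concrete, here is a minimal sketch of 1-slack cutting-plane training for the simplest special case, a linear binary-classification SVM. This is an illustrative toy, not the paper's implementation: the function names, the pure-Python projected-gradient inner solver, and the fixed step size are all assumptions made for self-containment (a real implementation would solve the restricted QP exactly). The loop alternates between a separation oracle that finds the single most violated joint constraint and re-solving the QP over the working set of accumulated planes, stopping once no constraint is violated by more than a precision eps.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve_restricted_qp(work, C, d, steps=2000, lr=0.1):
    """Crude projected-gradient solver (an assumption for this sketch) for the
    dual of the restricted 1-slack QP over the current working set:
        max_a  sum_k a_k * l_k - 0.5 * ||sum_k a_k * g_k||^2
        s.t.   a_k >= 0,  sum_k a_k <= C
    Each plane work[k] = (g_k, l_k)."""
    m = len(work)
    a = [0.0] * m
    for _ in range(steps):
        w = [sum(a[k] * work[k][0][j] for k in range(m)) for j in range(d)]
        grad = [work[k][1] - dot(w, work[k][0]) for k in range(m)]
        a = [max(0.0, a[k] + lr * grad[k]) for k in range(m)]
        s = sum(a)
        if s > C:  # pull back inside the feasible region (not an exact projection)
            a = [ak * C / s for ak in a]
    w = [sum(a[k] * work[k][0][j] for k in range(m)) for j in range(d)]
    xi = max([0.0] + [work[k][1] - dot(w, work[k][0]) for k in range(m)])
    return w, xi

def one_slack_svm(X, Y, C=1.0, eps=1e-3, max_iter=100):
    """1-slack cutting-plane training of a linear binary SVM (sketch).

    A cutting plane is a pair (g, l) with g = (1/n) * sum_i c_i * y_i * x_i
    and l = (1/n) * sum_i c_i, where c is a 0/1 vector marking the training
    examples whose margin constraint is currently violated."""
    n, d = len(X), len(X[0])
    work = []                 # working set of cutting planes
    w, xi = [0.0] * d, 0.0
    for _ in range(max_iter):
        # Separation oracle: build the most violated 1-slack constraint.
        c = [1 if Y[i] * dot(w, X[i]) < 1.0 else 0 for i in range(n)]
        g = [sum(c[i] * Y[i] * X[i][j] for i in range(n)) / n for j in range(d)]
        l = sum(c) / n
        if l - dot(w, g) <= xi + eps:  # nothing violated by more than eps: done
            break
        work.append((g, l))
        w, xi = solve_restricted_qp(work, C, d)
    return w
```

The stopping test `l - dot(w, g) <= xi + eps` mirrors the ε-precision criterion behind the paper's iteration bound: since each pass adds at most one constraint, the working set stays small regardless of the number of training examples, which is the source of the linear-time behavior claimed above.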

DOI: 10.1007/s10994-009-5108-8
