Rounding Methods for Discrete Linear Classification (Extended Version)

Abstract

Learning discrete linear classifiers is known to be a difficult task. In this paper, the learning task is cast as a combinatorial optimization problem: given a training sample formed by positive and negative feature vectors in Euclidean space, the goal is to find a discrete linear function that minimizes the cumulative hinge loss over the sample. Since this problem is NP-hard, we examine two simple rounding algorithms that discretize the fractional solution of the problem. Generalization bounds are derived for several classes of binary-weighted linear functions, by analyzing the Rademacher complexity of these classes and by establishing approximation bounds for our rounding algorithms. Our methods are evaluated on both synthetic and real-world data.
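The overall recipe described above (relax the discrete problem, compute a fractional, real-valued weight vector that approximately minimizes the cumulative hinge loss, then round it to discrete weights) can be sketched in a few lines. The following is a minimal illustration under our own assumptions: plain sub-gradient descent on the hinge loss and a simple randomized rounding into {-1, +1} that preserves each weight in expectation. It is not the paper's two rounding algorithms or their approximation guarantees, which are developed in the full text.

import numpy as np

def fractional_hinge_minimizer(X, y, lr=0.01, epochs=200):
    """Sub-gradient descent on the cumulative hinge loss with relaxed, real-valued weights."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        # sub-gradient of sum_i max(0, 1 - y_i <w, x_i>): -y_i x_i on active (margin < 1) examples
        grad = -(X * y[:, None])[margins < 1].sum(axis=0)
        w -= lr * grad
    return w

def randomized_rounding(w, rng=None):
    """Round each weight, clipped to [-1, 1], to {-1, +1} with P(+1) = (1 + w_j)/2,
    so the rounded weight equals w_j in expectation (generic randomized rounding,
    not the paper's specific schemes)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.clip(w, -1.0, 1.0)
    return np.where(rng.random(w.shape) < (1.0 + w) / 2.0, 1.0, -1.0)

def hinge_loss(w, X, y):
    """Cumulative hinge loss of a linear classifier w on the sample (X, y)."""
    return np.maximum(0.0, 1.0 - y * (X @ w)).sum()

# Toy usage: synthetic data with a binary-weighted target, then compare the fractional
# solution against its deterministic (sign) and randomized roundings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
w_true = rng.choice([-1.0, 1.0], size=20)
y = np.sign(X @ w_true)

w_frac = fractional_hinge_minimizer(X, y)
w_sign = np.where(w_frac >= 0, 1.0, -1.0)   # deterministic rounding by sign
w_rand = randomized_rounding(w_frac, rng)   # randomized rounding
print(hinge_loss(w_frac, X, y), hinge_loss(w_sign, X, y), hinge_loss(w_rand, X, y))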

Cite this paper

@inproceedings{Chevaleyre2013RoundingMF, title={Rounding Methods for Discrete Linear Classification (Extended Version)}, author={Yann Chevaleyre and Fr{\'e}d{\'e}ric Koriche}, year={2013} }