Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM

Abstract

Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this paper, we propose a general framework for feature selection in learning to rank using support vector machines with a sparse regularization term. We investigate both classical convex regularizations, such as ℓ₁ or weighted ℓ₁, and nonconvex regularization terms, such as the log penalty, the minimax concave penalty (MCP), and the ℓₚ pseudo-norm with p < 1. Two algorithms are proposed: the first is an accelerated proximal approach for solving the convex problems; the second is a reweighted ℓ₁ scheme to address the nonconvex regularizations. We conduct intensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the proposed nonconvex regularizations yield sparser models while preserving prediction performance; the number of selected features is reduced by up to a factor of six compared with ℓ₁ regularization. In addition, the software is publicly available on the web.
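To illustrate the second algorithm, below is a minimal sketch of a reweighted ℓ₁ scheme for a sparse pairwise ranking SVM. This is not the authors' implementation: the squared hinge loss, the log-penalty weight update λ/(|w| + ε), the step size, and all function names and iteration counts are assumptions made for the example.

```python
# Minimal sketch: reweighted l1 for sparse pairwise ranking (assumptions noted above).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the weighted l1 norm: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_l1(X, lam_w, w0, n_iter=200):
    """Proximal gradient on a squared-hinge pairwise loss plus weighted l1.

    Each row of X is a pairwise difference x_i - x_j for a preferred pair,
    so a correct ranking means X @ w > 0 (unit margin with squared hinge).
    lam_w is a per-coordinate vector of l1 weights.
    """
    n, d = X.shape
    # Step size from the Lipschitz constant of the squared-hinge gradient.
    step = n / (2.0 * np.linalg.norm(X, 2) ** 2)
    w = w0.copy()
    for _ in range(n_iter):
        margin = 1.0 - X @ w
        grad = -2.0 * X.T @ np.maximum(margin, 0.0) / n
        w = soft_threshold(w - step * grad, step * lam_w)
    return w

def reweighted_l1_ranksvm(X, lam=0.1, eps=1e-3, n_outer=5):
    """Outer reweighting loop with log-penalty weights lam / (|w| + eps)."""
    d = X.shape[1]
    w = np.zeros(d)
    lam_w = lam * np.ones(d)             # first pass is plain l1
    for _ in range(n_outer):
        w = prox_grad_l1(X, lam_w, w)
        lam_w = lam / (np.abs(w) + eps)  # small coefficients get penalized harder
    return w

# Toy usage: 200 preference pairs in 20 dims, only 3 informative features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
X *= np.sign(X @ w_true)[:, None]        # orient pairs so X @ w_true > 0
w_hat = reweighted_l1_ranksvm(X)
print("selected features:", np.flatnonzero(np.abs(w_hat) > 1e-6))
```

On this toy data, the reweighting loop typically drives the 17 uninformative coefficients to exactly zero, which is the sparsification effect the abstract reports for the nonconvex penalties.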

DOI: 10.1109/TNNLS.2013.2286696


Cite this paper

@article{Laporte2014NonconvexRF,
  title   = {Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM},
  author  = {L{\'e}a Laporte and R{\'e}mi Flamary and St{\'e}phane Canu and S{\'e}bastien D{\'e}jean and Josiane Mothe},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year    = {2014},
  volume  = {25},
  pages   = {1118--1130}
}