Bias and stability of single variable classifiers for feature ranking and selection

Abstract

Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, when the dimensionality of the dataset is extremely high, or when computational power is too limited to apply more complicated methods. In practice, it is recommended to start dimension reduction with simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking method based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such a feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has the dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always yield the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we quantify the empirical prediction-performance loss, relative to the optimal choices, of using the same classifier for SVC feature ranking and for final classification.
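The SVC ranking idea described above can be sketched in a few lines: score each feature by the accuracy of a classifier trained on that feature alone, then sort features by that score. The sketch below is an illustration only, not the paper's implementation; it uses a simple single-threshold (decision-stump) classifier scored on in-sample accuracy, whereas the study evaluates full classifiers with proper out-of-sample estimates. The function names `stump_accuracy` and `svc_rank` are hypothetical.

```python
# Hypothetical sketch of Single Variable Classifier (SVC) ranking.
# Each feature is scored by the accuracy of a classifier that uses only
# that feature; features are then ranked by score (higher = better).

def stump_accuracy(values, labels):
    """Accuracy of the best single-threshold classifier on one feature.

    Tries every observed value as a threshold, in both polarities,
    and returns the highest accuracy achieved.
    """
    n = len(labels)
    best = 0.0
    for t in sorted(set(values)):
        # Polarity 1: predict class 1 when value >= threshold.
        correct = sum((v >= t) == (y == 1) for v, y in zip(values, labels))
        # Polarity 2 is the complement of polarity 1.
        best = max(best, correct / n, (n - correct) / n)
    return best

def svc_rank(X, y):
    """Rank feature indices by single-feature accuracy, descending.

    X is a list of rows (samples); y is a list of binary labels (0/1).
    Returns (ranking, scores): ranking[0] is the top feature index.
    """
    scores = [stump_accuracy([row[j] for row in X], y)
              for j in range(len(X[0]))]
    ranking = sorted(range(len(scores)), key=lambda j: -scores[j])
    return ranking, scores
```

In a real pipeline the stump would be replaced by the classifier of interest (e.g. a decision tree or naive Bayes model) and scored with cross-validation, which is precisely where the classifier-induced bias studied in the paper enters.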

DOI: 10.1016/j.eswa.2014.05.007

Cite this paper

@article{Fakhraei2014BiasAS,
  title   = {Bias and stability of single variable classifiers for feature ranking and selection},
  author  = {Shobeir Fakhraei and Hamid Soltanian-Zadeh and Farshad Fotouhi},
  journal = {Expert Systems with Applications},
  year    = {2014},
  volume  = {41},
  number  = {15},
  pages   = {6945-6958}
}