Statistical Comparisons of Classifiers over Multiple Data Sets

  • Janez Demšar
  • Published 2006 in Journal of Machine Learning Research

Abstract

While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
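The sketch below is a minimal illustration of the recommended procedure, not code from the paper: it runs the Wilcoxon signed-ranks test (two classifiers) and the Friedman test (three or more classifiers) using scipy.stats on hypothetical accuracy scores for three classifiers over ten data sets, then computes the Nemenyi critical difference CD = q_alpha * sqrt(k(k+1)/(6N)) that underlies the CD diagrams. The accuracy values are placeholders, and the critical value 2.343 for k = 3 at alpha = 0.05 is taken from the paper's table of Nemenyi critical values.

```python
# Minimal sketch, assuming scipy is available; accuracies are hypothetical,
# not results from the paper.
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare, rankdata

# Hypothetical accuracies of classifiers A, B, C on N = 10 data sets.
acc_a = np.array([0.81, 0.74, 0.90, 0.68, 0.77, 0.85, 0.72, 0.79, 0.88, 0.70])
acc_b = np.array([0.78, 0.75, 0.87, 0.70, 0.74, 0.82, 0.70, 0.76, 0.85, 0.69])
acc_c = np.array([0.76, 0.71, 0.84, 0.66, 0.72, 0.80, 0.69, 0.73, 0.83, 0.65])

# Two classifiers: Wilcoxon signed-ranks test on the per-data-set differences.
w_stat, w_p = wilcoxon(acc_a, acc_b)
print(f"Wilcoxon: W = {w_stat:.1f}, p = {w_p:.3f}")

# Three or more classifiers: Friedman test on the per-data-set ranks.
f_stat, f_p = friedmanchisquare(acc_a, acc_b, acc_c)
print(f"Friedman: chi2 = {f_stat:.2f}, p = {f_p:.3f}")

# If the Friedman test rejects, the Nemenyi post-hoc test declares two
# classifiers different when their average ranks differ by at least
#   CD = q_alpha * sqrt(k * (k + 1) / (6 * N)).
scores = np.vstack([acc_a, acc_b, acc_c])          # shape (k, N)
k, n_datasets = scores.shape
ranks = np.apply_along_axis(rankdata, 0, -scores)  # rank 1 = best on each data set
avg_ranks = ranks.mean(axis=1)

q_alpha = 2.343  # Nemenyi critical value for k = 3, alpha = 0.05 (paper's tables)
cd = q_alpha * np.sqrt(k * (k + 1) / (6 * n_datasets))
print(f"average ranks = {np.round(avg_ranks, 2)}, CD = {cd:.3f}")
```

Pairs of classifiers whose average ranks differ by less than CD cannot be declared different; the CD diagrams introduced in the paper visualize exactly this comparison by connecting such pairs with a bar.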


[Chart omitted: citations per year, 2005–2017]

6,504 Citations

Semantic Scholar estimates that this publication has 6,504 citations based on the available data.


Cite this paper

@article{Demsar2006StatisticalCO,
  title   = {Statistical Comparisons of Classifiers over Multiple Data Sets},
  author  = {Janez Demsar},
  journal = {Journal of Machine Learning Research},
  year    = {2006},
  volume  = {7},
  pages   = {1--30}
}