Measuring classifier performance: a coherent alternative to the area under the ROC curve

@article{Hand2009MeasuringCP,
  title={Measuring classifier performance: a coherent alternative to the area under the ROC curve},
  author={David J. Hand},
  journal={Machine Learning},
  year={2009},
  volume={77},
  pages={103--123}
}
The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This…
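The AUC the abstract describes can be computed without any reference to an ROC plot: it equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney U statistic), which is what makes it "objective" in the sense above. A minimal sketch, using hypothetical scores and labels:

```python
def auc(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked
    correctly by the classifier's scores; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores and true class labels.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
print(auc(scores, labels))  # → 0.75
```

Note that this quantity depends only on the ranking of the scores, not on any misclassification costs supplied by the user, which is precisely the property Hand's critique targets.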

Citations

Publications citing this paper.
Selected from 330 citations (estimated 91% coverage):

  • Lift Up and Act! Classifier Performance in Resource-Constrained Applications. ArXiv, 2019.

  • Measuring classification performance: the hmeasure package.

  • Unsupervised dimensionality reduction versus supervised regularization for classification from sparse data. Data Mining and Knowledge Discovery, 2019.

  • A Robust profit measure for binary classification model evaluation.


CITATION STATISTICS

  • 67 Highly Influenced Citations

  • An average of 33 citations per year from 2017 through 2019
