Assessing agreement on classification tasks: the kappa statistic

Abstract

Currently, computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics, none of which are easily interpretable or comparable to each other. Meanwhile, researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statistic. We discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science, and argue that we would be better off as a field adopting techniques from content analysis.
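To make the statistic the abstract refers to concrete, below is a minimal sketch of kappa in its two-coder (Cohen) form, kappa = (P(A) - P(E)) / (1 - P(E)), where P(A) is the observed agreement and P(E) is the agreement expected by chance from the coders' label distributions. The function name and the example dialogue-move labels are illustrative only; the paper itself argues for the measure rather than prescribing an implementation.

    # Sketch of the kappa statistic for two coders (Cohen's kappa).
    # kappa = (P(A) - P(E)) / (1 - P(E))

    from collections import Counter

    def kappa(coder_a, coder_b):
        """Chance-corrected agreement between two equal-length lists of labels."""
        assert len(coder_a) == len(coder_b)
        n = len(coder_a)

        # P(A): proportion of items on which the two coders agree outright.
        p_agree = sum(a == b for a, b in zip(coder_a, coder_b)) / n

        # P(E): chance agreement, from each coder's marginal label frequencies.
        freq_a = Counter(coder_a)
        freq_b = Counter(coder_b)
        p_chance = sum(
            (freq_a[label] / n) * (freq_b[label] / n)
            for label in set(coder_a) | set(coder_b)
        )

        return (p_agree - p_chance) / (1 - p_chance)

    # Hypothetical example: two coders labelling eight dialogue moves.
    a = ["ack", "query", "ack", "inform", "query", "ack", "inform", "ack"]
    b = ["ack", "query", "inform", "inform", "query", "ack", "query", "ack"]
    print(kappa(a, b))  # 1.0 = perfect agreement, 0.0 = chance-level agreement

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is what makes values comparable across studies in a way the ad hoc measures criticized in the abstract are not.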

[Figure: Citations per Year, 1997–2017]

2,293 Citations

Semantic Scholar estimates that this publication has 2,293 citations based on the available data.


Cite this paper

@article{Carletta1996AssessingAO,
  title={Assessing agreement on classification tasks: the kappa statistic},
  author={Jean Carletta},
  journal={Computational Linguistics},
  year={1996},
  volume={22},
  pages={249-254}
}