Corpus ID: 53603642

We agreed to measure agreement - Redefining reliability de-justifies Krippendorff’s alpha

@inproceedings{Zhao2018WeAT,
  title={We agreed to measure agreement - Redefining reliability de-justifies Krippendorff’s alpha},
  author={X. Zhao and G. Feng and J. Liu and Ke Deng},
  year={2018}
}
Zhao, Liu, & Deng (2013) reviewed 22 inter-coder reliability indices and found that each makes unrealistic assumption(s) about coder behavior, leading to paradoxes and abnormalities. Krippendorff’s α makes more such assumptions and consequently produces more paradoxes and abnormalities than any other index. Professor Krippendorff (2013) countered that “most of the authors’ discoveries are the artifacts of being led astray by strange, almost conspiratorial uses of language.” The commentary…
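
The dispute centers on whether results such as "high raw agreement but near-zero α" are paradoxes or correct behavior of the index. For readers who want to reproduce that kind of result, below is a minimal sketch (not taken from the paper) of Krippendorff's α for nominal data with any number of coders; the function name krippendorff_alpha_nominal and the toy data are illustrative assumptions, not the authors' code.

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        """Krippendorff's alpha for nominal data.

        `units` is a list of lists: each inner list holds the category labels
        assigned to one unit by its coders. Units with fewer than two codings
        are skipped, since they contribute no pairable values.
        """
        # Coincidence matrix: ordered pairs of values within each unit,
        # weighted by 1 / (m_u - 1), where m_u is the number of codings.
        coincidences = Counter()
        for values in units:
            m_u = len(values)
            if m_u < 2:
                continue
            for a, b in permutations(values, 2):
                coincidences[(a, b)] += 1.0 / (m_u - 1)

        # Marginal totals n_c and the total number of pairable values n.
        n_c = Counter()
        for (a, _), w in coincidences.items():
            n_c[a] += w
        n = sum(n_c.values())

        # Observed vs. expected disagreement (nominal delta: 0 if equal, else 1).
        d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n
        d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
        return 1.0 - d_o / d_e

    # Two coders, 100 units, skewed categories: 98 agreements on "pos" and
    # 2 disagreements. Raw agreement is 98%, yet alpha is about -0.005.
    data = [["pos", "pos"]] * 98 + [["pos", "neg"]] * 2
    print(round(krippendorff_alpha_nominal(data), 3))

Running this prints approximately -0.005 despite 98% raw agreement, which is the kind of skewed-marginal outcome the debate over "paradoxes" concerns.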

References

Showing a subset of the paper's 114 references:
Indexing versus modeling intercoder reliability
Commentary: A Dissenting View on So-Called Paradoxes of Reliability Coefficients
When to use Scott’s π or Krippendorff's α, if ever?
Computing Krippendorff's Alpha-Reliability
Interrater agreement statistics with skewed data: evaluation of alternatives to Cohen's kappa. Shu Xu & M. Lorber. Journal of Consulting and Clinical Psychology, 2014.
Answering the Call for a Standard Reliability Measure for Coding Data
An Evaluation of Interrater Reliability Measures on Binary Tasks Using d-Prime