A Coefficient of Agreement for Nominal Scales

  @article{cohen1960,
    title={A Coefficient of Agreement for Nominal Scales},
    author={Jacob Cohen},
    journal={Educational and Psychological Measurement},
    year={1960},
    volume={20},
    number={1},
    pages={37--46}
  }
  • Jacob Cohen
  • Published 1 April 1960
  • Psychology
  • Educational and Psychological Measurement
CONSIDER Table 1. It represents in its formal characteristics a situation which arises in the clinical-social-personality areas of psychology, where it frequently occurs that the only useful level of measurement obtainable is nominal scaling (Stevens, 1951, pp. 25-26), i.e., placement in a set of k unordered categories. Because the categorizing of the units is a consequence of some complex judgment process performed by a "two-legged meter" (Stevens, 1958), it becomes important to…
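The chance-corrected agreement coefficient this paper introduces, kappa, is defined as (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the proportion expected by chance from the raters' marginals. A minimal sketch for two raters assigning units to k unordered categories (illustrative data only):

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on nominal labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e the agreement expected by chance from the
    raters' marginal category proportions.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the product of marginal proportions per category.
    marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(marg_a[c] * marg_b[c] for c in marg_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters placing 10 units into three nominal categories.
a = ["normal"] * 4 + ["neurotic"] * 3 + ["psychotic"] * 3
b = ["normal"] * 3 + ["neurotic"] * 4 + ["psychotic"] * 3
kappa = cohen_kappa(a, b)  # p_o = 0.90, p_e = 0.33 here
```

Note that kappa equals 1 only under perfect agreement and 0 when observed agreement exactly matches chance expectation.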
Agreement among 2 × 2 Agreement Indices
A variety of measures of reliability for two-category nominal scales are reviewed and compared. It is shown that upon correcting these indices for chance agreement, there are only five distinct
A new index for the comparison of different measurement scales
In psychometric sciences, a common problem is the choice of a good response scale. Every scale has, by its nature, a propensity to lead a respondent to mainly positive or negative ratings. This
A computer program to compute measures of response agreement for nominal scale data obtained from two judges
The program described in this paper calculates both the kappa (and associated statistics) and the Ap statistics, measures of response agreement for nominal scale data obtained from two judges.
The Effect of Number of Rating Scale Categories on Levels of Interrater Reliability : A Monte Carlo Investigation
A computer simulation study was designed to investigate the extent to which the interrater reliability of a clinical scale is affected by the number of categories or scale points (2, 3, 4, ...
A review of statistical methods in the analysis of data arising from observer reliability studies (Part II)*
Many research designs in studies of observer reliability give rise to categorical data via nominal scales (e.g., states of mental health such as normal, neurosis, and depression) or ordinal scales
Coefficients for Interrater Agreement
The degree of agreement between two raters who rate a number of objects on a certain characteristic can be expressed by means of an association coefficient (e.g., the product-moment correlation). A
Estimating Rater Agreement in 2 × 2 Tables: Correction for Chance and Intraclass Correlation
Many estimators of the measure of agreement between two dichotomous ratings of a person have been proposed. The results of Fleiss (1975) are extended, and it is shown that four estimators— Scott's
Computing inter-rater reliability and its variance in the presence of high agreement.
  • K. Gwet
  • Psychology
    The British journal of mathematical and statistical psychology
  • 2008
This paper explores the origin of these limitations, and introduces an alternative and more stable agreement coefficient referred to as the AC1 coefficient, and proposes new variance estimators for the multiple-rater generalized pi and AC1 statistics, whose validity does not depend upon the hypothesis of independence between raters.
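Gwet's AC1 keeps kappa's chance-corrected form but estimates chance agreement differently, from the mean of the two raters' marginal proportions per category, which makes it more stable when agreement is high and marginals are skewed. A simplified two-rater sketch (the paper's multiple-rater generalization and variance estimators are not shown):

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters on a nominal scale (simplified sketch).

    Same form as kappa, (p_o - p_e) / (1 - p_e), but chance agreement is
    p_e = (1 / (Q - 1)) * sum_q pi_q * (1 - pi_q), where pi_q is the mean
    of the two raters' marginal proportions for category q.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    q = len(categories)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
    # Mean of the two raters' marginal proportions per category.
    pi = {c: (marg_a[c] + marg_b[c]) / (2 * n) for c in categories}
    p_e = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: 3 of 4 units agreed on across two binary ratings.
ac1 = gwet_ac1([0, 0, 0, 1], [0, 0, 1, 1])
```

Here p_o = 0.75 and p_e = 0.46875, so AC1 ≈ 0.53; on the same data kappa would be lower because its chance term grows with skewed marginals.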
Clinical Agreement in Qualitative Measurements
The kappa-like coefficients (intraclass kappa, Cohen’s kappa and weighted kappa), usually used to assess agreement between or within raters on a categorical scale, are reviewed in this chapter with emphasis on the interpretation and the properties of these coefficients.


Reliability of Content Analysis: The Case of Nominal Scale Coding
Problems and methods of psychophysics.
Mathematics, Measurement and Psychophysics (in Handbook of Experimental Psychology)
  • 1951
An Outline of the Statistical Theory of Prediction (in The Prediction of Personal Adjustment)
  • 1941
An Outline of the Statistical Theory of Prediction.
  • New York: Social Science Research Council,
  • 1941