Hierarchical modeling of agreement.
@article{Vanbelle2012HierarchicalMO,
  title={Hierarchical modeling of agreement},
  author={Sophie Vanbelle and Timothy Mutsvari and Dominique Declerck and Emmanuel Lesaffre},
  journal={Statistics in medicine},
  year={2012},
  volume={31},
  number={28},
  pages={3667--3680}
}

Kappa-like agreement indexes are often used to assess the agreement among examiners on a categorical scale. They have the particularity of correcting the level of agreement for the effect of chance. In the present paper, we first define two agreement indexes belonging to this family in a hierarchical context. In particular, we consider the cases of a random and a fixed set of examiners. Then, we develop a method to evaluate the influence of factors on these indexes. Agreement indexes are directly…
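The chance-correction idea in the abstract can be sketched with the simplest member of this family, Cohen's kappa, where observed agreement p_o is adjusted by the agreement p_e expected under independent rating. This is a minimal illustration with invented toy data, not the hierarchical estimator developed in the paper:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on a categorical scale."""
    n = len(ratings_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance from the raters' marginal distributions.
    marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(marg_a[c] * marg_b[c] for c in marg_a) / n**2
    # Kappa rescales so that 0 is chance-level and 1 is perfect agreement.
    return (p_o - p_e) / (1 - p_e)

# Two raters classifying ten subjects into categories 0/1 (toy data):
# raw agreement is 0.8, chance agreement 0.5, so kappa is about 0.6.
rater1 = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
rater2 = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
print(cohens_kappa(rater1, rater2))  # ≈ 0.6
```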
14 Citations
Modeling agreement on categorical scales in the presence of random scorers.
- Mathematics, Biostatistics
- 2016
A partial-Bayesian methodology is developed to directly relate these agreement coefficients to predictors through a multilevel model and is applied to gynecological and medical imaging data.
Bayesian approaches to the weighted kappa-like inter-rater agreement measures
- Psychology, Statistical methods in medical research
- 2021
The Bayesian approaches make it possible to include prior information on the assessment behaviour of the raters in the analysis and to impose order restrictions on the row and column scores, which improves the accuracy of the agreement measures and mitigates the impact of anomalies in the estimation of the strength of agreement between the raters.
Comparing dependent kappa coefficients obtained on multilevel data
- Engineering, Biometrical journal. Biometrische Zeitschrift
- 2017
The present paper provides two simple alternatives to more advanced modeling techniques, which are not always adequate in case of a very limited number of subjects, when comparing several dependent kappa coefficients obtained on multilevel data.
Clinical Agreement in Qualitative Measurements
- Psychology, Medicine
- 2013
The kappa-like coefficients (intraclass kappa, Cohen’s kappa and weighted kappa), usually used to assess agreement between or within raters on a categorical scale, are reviewed in this chapter with emphasis on the interpretation and the properties of these coefficients.
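For the ordinal case, the weighted kappa named above penalizes disagreements by their distance on the scale rather than treating all disagreements equally. A minimal sketch with linear weights and invented toy data (not taken from the chapter):

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, n_categories):
    """Weighted kappa with linear disagreement weights for ordinal categories 0..k-1."""
    n = len(ratings_a)
    # Linear penalty: 0 for exact agreement, 1 for maximal disagreement.
    weight = lambda i, j: abs(i - j) / (n_categories - 1)
    # Observed weighted disagreement across paired ratings.
    d_obs = sum(weight(a, b) for a, b in zip(ratings_a, ratings_b)) / n
    # Disagreement expected by chance from the marginal distributions.
    marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
    d_exp = sum(weight(i, j) * marg_a[i] * marg_b[j]
                for i in range(n_categories)
                for j in range(n_categories)) / n**2
    return 1 - d_obs / d_exp

# Two raters scoring six subjects on a 3-point ordinal scale (toy data);
# the single one-step disagreement yields a weighted kappa of about 0.8.
print(weighted_kappa([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 2], 3))  # ≈ 0.8
```

With all weights set to 0/1 (disagree/agree), this reduces to ordinary Cohen's kappa.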
A note on the kappa statistic for clustered dichotomous data.
- Mathematics, Statistics in medicine
- 2014
The new proposal and the sampling-based delta method provide convenient tools for efficient computation and non-simulation-based alternatives to the existing bootstrap-based methods; a new, simple, and efficient data-generation algorithm is also developed.
A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale
- Psychology, Applied psychological measurement
- 2015
A Bayesian approach is proposed by providing a unified Monte Carlo–based framework to estimate all types of measures of agreement in a qualitative scale of response to help clarify the role of expert opinions, personal judgments, or historical data in agreement analysis.
Supplemental Material for Interrater Reliability for Multilevel Data: A Generalizability Theory Approach
- Psychology
- 2021
Current interrater reliability (IRR) coefficients ignore the nested structure of multilevel observational data, resulting in biased estimates of both subject- and cluster-level IRR. We used…
Measuring intrarater association between correlated ordinal ratings.
- Psychology, Biometrical journal. Biometrische Zeitschrift
- 2020
A novel paired kappa is proposed to provide a summary measure of association between many raters' paired ordinal assessments of patients' test results before versus after rater training; it provides an overall evaluation of the association among multiple raters' scores from two time points and is robust to the underlying disease prevalence.
Kappa statistic for clustered matched-pair data.
- Mathematics, Statistics in medicine
- 2014
The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥ 50).
Kappa coefficients for dichotomous-nominal classifications
- Mathematics, Adv. Data Anal. Classif.
- 2021
It turns out that the values of the new kappa coefficients can be strictly ordered in precisely two ways, suggesting that the new coefficients are measuring the same thing, but to a different extent.
References
Showing 1-10 of 64 references
Assessing rater agreement using marginal association models.
- Psychology, Statistics in medicine
- 2002
Methodology for the simultaneous modelling of univariate marginal responses and bivariate marginal associations is presented; estimated scores within a generalized log non-linear model for bivariate associations facilitate the assessment of category distinguishability.
Modeling kappa for measuring dependent categorical agreement data.
- Mathematics, Biostatistics
- 2000
A generalized estimating equation approach is developed with two sets of equations that model the marginal distribution of categorical ratings and the pairwise association of ratings, with the kappa coefficient (kappa) as the metric of association.
An Estimating Equations Approach for Modelling Kappa
- Environmental Science
- 2000
Agreement between raters for binary outcome data is typically assessed using the kappa coefficient. There has been considerable recent work extending logistic regression to provide summary estimates…
Measurement of interrater agreement with adjustment for covariates.
- Mathematics, Biometrics
- 1996
The kappa coefficient measures chance-corrected agreement between two observers in the dichotomous classification of subjects and assumes both raters have the same marginal probability of classification, but this probability may depend on one or more covariates.
Indexing systematic rater agreement with a latent-class model.
- Psychology, Psychological methods
- 2002
A latent-class model of rater agreement is presented for which 1 of the model parameters can be interpreted as the proportion of systematic agreement. The latent classes of the model emerge from the…
Assessing interrater agreement from dependent data.
- Psychology, Biometrics
- 1997
This work investigates the use of a latent model proposed by Qu, Piedmonte, and Medendorp (1995) to estimate the correlation between raters for each method, and test for their equality.
Modelling patterns of agreement and disagreement
- Psychology, Statistical methods in medical research
- 1992
A survey of ways of statistically modelling patterns of observer agreement and disagreement is presented, with main emphasis on modelling inter-observer agreement for categorical responses, both for nominal and ordinal response scales.
An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers.
- Mathematics, Biometrics
- 1977
A subset of observers who demonstrate a high level of interobserver agreement can be identified by using pairwise agreement statistics between each observer and the internal majority standard opinion on each subject.
Estimating with a Latent Class Model the Reliability of Nominal Judgments Upon Which Two Raters Agree
- Psychology
- 2006
Because nominal-scale judgments cannot directly be aggregated into meaningful composites, the addition of a second rater is usually motivated by a desire to estimate the quality of a single rater's…