Inter-rater Reliability

Known as: Interrater Reliability 
The extent to which two different researchers obtain the same result when using the same instrument to measure a concept. (EPA Glossary)
National Institutes of Health
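For categorical ratings, a common way to quantify this agreement is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. As a minimal sketch (the rater labels below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    assigning categorical labels to the same set of items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters labeling the same 10 items as "yes"/"no".
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; here the raters agree on 8 of 10 items, but chance agreement is high, so kappa is lower than raw agreement.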

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Review
2013
Abstract The alignment among standards, assessments, and teachers’ instruction is an essential element of standards-based… 
Review
2003
Although a great deal of data has been published in the past 20 years supporting the interrater reliability of the Rorschach… 
2000
Generalizability theory was used to assess the reliability of the Dartmouth Assertive Community Treatment Scale (DACTS), which… 
1997
We developed a measurement scale for assessment of impairment in MS patients (MSIS) in accordance with the recommendations of WHO…
1997
A widely accepted approach to evaluate interrater reliability for categorical responses involves the rating of n subjects by at… 
1997
OBJECTIVE To measure agreement among experienced clinicians regarding the interpretation of physical findings in child sexual… 
1996
The authors investigated the reliability and validity of the Scale of Functioning (SOF), a 15-item scale, in 78 middle-aged and… 
1995
OBJECTIVE The authors evaluated the interrater reliability of ratings of bizarre delusions, addressing limitations of previous… 
Highly Cited
1990
Although the process of rater training is important for establishing interrater reliability of observational instruments, there…