Assessing Reliability: Critical Corrections for a Critical Examination of the Rorschach Comprehensive System

@inproceedings{Meyer2001AssessingR,
  title={Assessing Reliability: Critical Corrections for a Critical Examination of the Rorschach Comprehensive System},
  author={Gregory J. Meyer},
  year={2001}
}
Wood, Nezworski, and Stejskal (1996a, 1996b) argued that the Rorschach Comprehensive System (CS) lacked many essential pieces of reliability data and that the available evidence indicated that scoring reliability may be little better than chance. Contrary to their assertions, the author suggests why rater agreement should focus on responses rather than summary scores, how field reliability moves away from testing CS scoring principles, and how no psychometric distinction exists between a… 

Thinking Clearly About Reliability: More Critical Corrections Regarding the Rorschach Comprehensive System

  • G. Meyer
  • Psychology
  • 2001
In this brief comment on J. M. Wood, M. T. Nezworski, and W. J. Stejskal's (1997) response to his article (Meyer, 1997a), the author documents how J. M. Wood et al. continue to make allegations based

On the Science of Rorschach Research

  • G. Meyer
  • Psychology
    Journal of personality assessment
  • 2000
TLDR
The author describes problems in an article by Wood, Nezworski, Stejskal, Garven, and West (1999b) that did not provide sufficient guidance on sound criticism of Rorschach research.

Weighing Evidence for the Rorschach's Validity: A Response to Wood et al. (1999)

  • R. Ganellen
  • Psychology
    Journal of personality assessment
  • 2001
TLDR
A careful examination of existing studies indicates that no compelling empirical evidence exists to suggest that Ganellen's conclusions should be modified at the present time, although no firm conclusions about the DEPI can be reached until further evidence accumulates.

Simple Procedures to Estimate Chance Agreement and Kappa for the Interrater Reliability of Response Segments Using the Rorschach Comprehensive System

When determining interrater reliability for scoring the Rorschach Comprehensive System (Exner, 1993), researchers often report coding agreement for response segments (i.e., Location, Developmental

An Examination of Interrater Reliability for Scoring the Rorschach Comprehensive System in Eight Data Sets

TLDR
Reliability findings from this study closely match the results derived from a synthesis of prior research; CS summary scores are more reliable than scores assigned to individual responses; small samples are more likely to generate unstable and lower reliability estimates; and Meyer's (1997a) procedures for estimating response segment reliability were accurate.
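
Several of the entries above turn on kappa, the chance-corrected index of coder agreement. As a rough illustration only (not a reproduction of the response-segment procedures from Meyer, 1997a, or of the study above), the sketch below applies the standard formula kappa = (p_o - p_c) / (1 - p_c) to invented values for observed and chance agreement:

def kappa(observed_agreement: float, chance_agreement: float) -> float:
    """Chance-corrected agreement: kappa = (p_o - p_c) / (1 - p_c)."""
    if chance_agreement >= 1.0:
        raise ValueError("chance agreement must be below 1.0")
    return (observed_agreement - chance_agreement) / (1.0 - chance_agreement)

# Hypothetical numbers for illustration only: 92% raw agreement on a coding
# segment where agreement expected by chance is estimated at 55%.
p_o, p_c = 0.92, 0.55
print(f"kappa = {kappa(p_o, p_c):.2f}")   # kappa = 0.82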

The Interclinician Reliability of Rorschach Interpretation in Four Data Sets

TLDR
Compared to meta-analyses of interrater reliability in psychology and medicine, the findings indicate these clinicians could reliably interpret Rorschach CS data.

Interobserver Agreement, Intraobserver Reliability, and the Rorschach Comprehensive System

TLDR
Reliability was analyzed at multiple levels of Comprehensive System data, including response-level individual codes and coding decisions as well as ratios, percentages, and derivations from the Structural Summary.

Rorschach Performance Assessment System (R-PAS) Interrater Reliability in a Brazilian Adolescent Sample and Comparisons With Three Other Studies

TLDR
Examination of interrater reliability for scoring the Rorschach Performance Assessment System (R-PAS) in a sample of 89 adolescents using exact agreement intraclass correlation coefficients (ICCs) showed that the ICCs for most variables had low variability across studies, suggesting clear coding guidelines.
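
The exact-agreement ICC reported in studies like this one is typically the two-way random-effects, absolute-agreement, single-rater form (Shrout and Fleiss's ICC(2,1)). As a minimal sketch under that assumption, with an invented complete subjects-by-raters matrix and no claim of reproducing any cited study's code, the coefficient can be computed from ANOVA mean squares:

import numpy as np

def icc_a1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC (ICC(2,1)).

    scores: (n_subjects, k_raters) array with no missing cells.
    """
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # between raters
    ss_total = np.sum((scores - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols                    # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Invented example: six protocols coded by two raters on one summary score.
ratings = np.array([[4, 5], [2, 2], [7, 6], [3, 3], [5, 5], [1, 2]], dtype=float)
print(f"ICC(A,1) = {icc_a1(ratings):.2f}")   # approximately 0.93 for these values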

A Comparative Meta-Analysis of Rorschach and MMPI Validity

Two previous meta-analyses concluded that average validity coefficients for the Rorschach and the MMPI have similar magnitudes (L. Atkinson, 1986; K. C. H. Parker, R. K. Hanson, & J. Hunsley, 1988),

Advancing the science of psychological assessment: the Rorschach Inkblot Method as exemplar.

  • I. Weiner
  • Psychology
    Psychological assessment
  • 2001
TLDR
This article comments on a series of 5 articles concerning the utility of the Rorschach Inkblot Method, indicating that the RIM and the Minnesota Multiphasic Personality Inventory have almost identical validity effect sizes, both large enough to warrant confidence in using these measures.

References

SHOWING 1-10 OF 52 REFERENCES

A Comment on “The Comprehensive System for the Rorschach: A Critical Examination”

Wood, Nezworski, and Stejskal are correct in noting that the Comprehensive System has been scrutinized less carefully than might have been expected or desired. A few critiques have addressed isolated

Construct validity of the Rorschach Oral Dependency Scale: 1967–1995.

A review of research examining the construct validity of J. M. Masling, L. Rabie, and S. H. Blondheim's (1967) Rorschach Oral Dependency (ROD) scale as a measure of interpersonal dependency revealed

The ability of the Rorschach to predict subsequent outcome : A meta-analysis of the Rorschach Prognostic Rating Scale

To evaluate the ability of the Rorschach to predict subsequent outcome, the journal literature on the Rorschach Prognostic Rating Scale (RPRS) was reviewed and a meta-analysis was conducted on 20

Problems With Brief Rorschach Protocols

Retest reliability coefficients were calculated for 72 pairs of Rorschach records. In the target group of 36 pairs, one of the tests contained fewer than 14 responses, and the second record in the

Construct validation of scales derived from the Rorschach method: a review of issues and introduction to the Rorschach rating scale.

  • G. Meyer
  • Psychology
    Journal of personality assessment
  • 1996
TLDR
The Rorschach Rating Scale (RRS) is presented as a criterion tool to be used with either of two approaches to validation: as a method for improving expert clinical judgment or as a means of aggregating data across diverse judges.

The Comprehensive System for the Rorschach: A Critical Examination

The Comprehensive System (Exner, 1993) is widely accepted as a reliable and valid approach to Rorschach interpretation. However, the present article calls attention to significant problems with the

Coefficient Kappa: Some Uses, Misuses, and Alternatives

This paper considers some appropriate and inappropriate uses of coefficient kappa and alternative kappa-like statistics. Discussion is restricted to the descriptive characteristics of these

A meta-analysis of the reliability and validity of the Rorschach.

  • K. Parker
  • Psychology
    Journal of personality assessment
  • 1983
The results of a meta-analysis of Rorschach studies indicate that reliabilities in the order of .83 and higher and validity coefficients of .45 or .50 and higher can be expected for the

On the integration of personality assessment methods: the Rorschach and MMPI.

  • G. Meyer
  • Psychology
    Journal of personality assessment
  • 1997
TLDR
Five ideas led to five hypotheses, each of which received support and correlated with genuine clinical phenomena; implications for clinical practice and research were discussed.

Measurement and reliability: statistical thinking considerations.

  • J. Bartko
  • Psychology
    Schizophrenia bulletin
  • 1991
TLDR
There is increasing awareness among researchers that the two most appropriate measures of reliability are the intraclass correlation coefficient and kappa; however, unacceptable statistical measures of reliability, such as chi-square, percent agreement, the product-moment correlation, other measures of association, and Yule's Y, still appear in the literature.
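
Bartko's point about unacceptable indices can be illustrated numerically: with a skewed base rate, two coders can show high percent agreement while kappa stays modest. The toy 2x2 table below uses invented counts purely for illustration, with the chance term computed from the marginals as in Cohen's kappa:

import numpy as np

def cohens_kappa(table: np.ndarray) -> tuple[float, float]:
    """Return (percent agreement, Cohen's kappa) for a square confusion table."""
    total = table.sum()
    p_o = np.trace(table) / total                                       # observed agreement
    p_c = (table.sum(axis=0) / total) @ (table.sum(axis=1) / total)     # chance agreement from marginals
    return p_o, (p_o - p_c) / (1 - p_c)

# Invented counts for a rare code (present in roughly 10% of responses).
# Rows = coder A (absent, present); columns = coder B (absent, present).
table = np.array([[85, 5],
                  [5, 5]], dtype=float)
p_o, k = cohens_kappa(table)
print(f"percent agreement = {p_o:.2f}, kappa = {k:.2f}")   # 0.90 agreement, kappa = 0.44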