Interobserver and intraobserver variation in the assessment of antepartum cardiotocograms.

Abstract

Five experienced observers assessed 100 antepartum cardiotocograms from 100 pregnant women by means of three different systems: two commonly used scoring systems and a close look at a 1-minute recording period (window). Variation between observers (interobserver) and within one observer (intraobserver) was determined by calculating weighted kappa coefficients. Weighted kappa coefficients for interobserver variation in the Visser/Huisjes score and the Fischer score were 0.41 and 0.37, respectively. Weighted kappa coefficients for interobserver variation in a close look at a 1-minute window ranged from 0.09 to 0.69. Thus, a low level of agreement between observers was shown to exist for all three systems tested. The level of intraobserver agreement was considerably higher than that of interobserver agreement for virtually all three systems. Since the level of interobserver agreement is low, the results of different studies that describe visually assessed antepartum cardiotocograms by means of these scoring systems or 1-minute windows are not comparable, and reproducibility will be low.
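The agreement statistics above are weighted kappa coefficients, which discount chance agreement and penalize disagreements between ordinal categories according to a weighting scheme. As a rough illustration only, and not the authors' own computation, the sketch below calculates a pairwise weighted kappa for two hypothetical observers with scikit-learn's cohen_kappa_score; the three-category scale, the example ratings, and the linear weights are assumptions, since the abstract does not specify the weighting scheme used.

```python
# Minimal sketch: pairwise weighted kappa between two raters.
# The categories, ratings, and linear weighting are illustrative assumptions,
# not data or methods taken from the paper.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal scores (0 = abnormal, 1 = suspicious, 2 = normal)
# assigned by two observers to the same set of cardiotocograms.
observer_a = [2, 2, 1, 0, 2, 1, 1, 2, 0, 2]
observer_b = [2, 1, 1, 0, 2, 2, 1, 2, 1, 2]

# Linear weights penalize disagreements in proportion to their distance
# on the ordinal scale.
kappa_w = cohen_kappa_score(observer_a, observer_b, weights="linear")
print(f"Weighted kappa (linear weights): {kappa_w:.2f}")
```

On commonly used benchmarks, kappa values near 1 indicate near-perfect agreement beyond chance, while values around 0.4, as reported above for the two scoring systems, are conventionally read as only fair to moderate agreement.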


Cite this paper

@article{Lotgering1982InterobserverAI,
  title   = {Interobserver and intraobserver variation in the assessment of antepartum cardiotocograms.},
  author  = {Frederik K. Lotgering and Henk C. S. Wallenburg and Henrike J. Schouten},
  journal = {American journal of obstetrics and gynecology},
  year    = {1982},
  volume  = {144 6},
  pages   = {701-5}
}