Who is blinded in randomized clinical trials? A study of 200 trials and a survey of authors

@article{Haahr2006WhoIB,
  title={Who is blinded in randomized clinical trials? A study of 200 trials and a survey of authors},
  author={Mette T. Haahr and Asbj{\o}rn Hr{\'o}bjartsson},
  journal={Clinical Trials},
  year={2006},
  volume={3},
  pages={360--365}
}
Background Insufficient blinding of persons involved in randomized clinical trials is associated with bias. The appraisal of the risk of bias is difficult without adequate information in trial reports. Purpose We wanted to study how blinding is reported in clinical trials and how lack of reporting relates to lack of blinding. Methods A cohort study of 200 blinded randomized clinical trials published in 2001, randomly sampled from the Cochrane Central Register of Controlled Trials, and a…

Citations

Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding.

How often randomized clinical trials test the success of blinding, the methods involved, and how often blinding is reported as being successful are assessed.

Blinded Outcome Assessment Was Infrequently Used and Poorly Reported in Open Trials

Blinding of outcome assessors is infrequently used and poorly reported, and increased use of independent assessors could increase the frequency of blinded assessment.

Blinding in Randomized Clinical Trials: Imposed Impartiality

Important methodological aspects of blinding are reviewed, emphasizing terminology, reporting, bias mechanisms, empirical evidence, and the risk of unblinding.

Blinding in randomised clinical trials of psychological interventions: a retrospective study of published trial reports

Objectives To study the extent of blinding in randomised clinical trials of psychological interventions and the interpretative considerations if randomised clinical trials are not blinded.

Impact of blinding on estimated treatment effects in randomised clinical trials: meta-epidemiological study

No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors; these results could reflect that blinding is less important than often believed, or meta-epidemiological study limitations such as residual confounding or imprecision.

Definitions of blinding in randomised controlled trials of interventions published in high-impact anaesthesiology journals: a methodological study and survey of authors

Reporting of the blinding status of key individuals involved in analysed anaesthesiology RCTs was insufficient, and peer reviewers and editors should insist on clear information on who was blinded in a trial instead of using the term ‘double-blind’ for different blinding practices.

Blinding terminology used in reports of randomized controlled trials involving dogs and cats.

Blinding was commonly used as a means of reducing bias associated with collection and interpretation of data in reports of veterinary RCTs, but most reports of blinding methodology were incomplete and there was no consistency in how blinding terminology was used by authors or interpreted by veterinarians.

Blinding of study statisticians in clinical trials: a qualitative study in UK clinical trials units

A proportionate risk assessment approach would enable CTUs to identify risks associated with unblinded statisticians conducting the final analysis and alternative mitigation strategies, and to design guidance and a tool to support this risk assessment process.
...

References


In the dark: the reporting of blinding status in randomized controlled trials.

Discrepancy between published report and actual conduct of randomized clinical trials.

Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials.

It is suggested that both physicians and textbooks vary greatly in their interpretations and definitions of single, double, and triple blinding.

Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles.

The reporting of trial outcomes is not only frequently incomplete but also biased and inconsistent with protocols; published articles, as well as reviews that incorporate them, may therefore be unreliable and overestimate the benefits of an intervention.

Assessing the quality of controlled clinical trials

The concept of study quality and the methods used to assess it are discussed, along with the methodology for both assessing quality and incorporating it into systematic reviews and meta-analyses.

The impact of blinding on the results of a randomized, placebo‐controlled multiple sclerosis clinical trial

There were no significant differences in the time to treatment failure or in the proportions of patients improved, stable, or worse between the group II and group III patients who correctly guessed their treatment assignments and those who did not.

Response rates to mail surveys published in medical journals.

The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials

The revised CONSORT statement is intended to improve the reporting of an RCT, enabling readers to understand a trial's conduct and to assess the validity of its results.