Considering their study's objectives, I was surprised that Karen Chan and colleagues did not explain why they evaluated only 10% (27/266) of the available randomized controlled trials.1 This omission is especially puzzling because both of the earlier studies they cited2,3 evaluated more trials (102 and 45, respectively) and were therefore more precise.
Further, why were the proportions in Table 3 not accompanied by 95% confidence intervals, particularly when the reporting of confidence intervals was one of the criteria Chan and colleagues used to evaluate the randomized controlled trials?
When one refers to Diem and Lentner's Scientific Tables,4 it is troubling to note the imprecision of the proportions reported by Chan and colleagues1 (e.g., 22/27 = 81%, confidence interval [CI] 62–94%; 20/27 = 74%, CI 54–89%; 18/20 = 90%, CI 68–99%; 2/18 = 11%, CI 1–35%; 13/18 = 72%, CI 47–90%; 17/27 = 63%, CI 42–81%; 11/27 = 41%, CI 22–61%; 10/20 = 50%, CI 27–73%; 15/20 = 75%, CI 51–91%). The upper and lower limits of many of these confidence intervals are wide enough to support differing conclusions. For example, although Chan and colleagues found that 74% of investigators (20/27) discussed the clinical significance of their findings,1 this estimate is also consistent with values as low as 54% and as high as 89%.
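Intervals of this kind can be reproduced with the exact (Clopper-Pearson) method that underlies tabulations such as Diem and Lentner's. The following is a minimal sketch in Python, using only the standard library; the function names are illustrative, and the bisection approach is one of several ways to invert the binomial tail probabilities:

```python
import math

def _tail_ge(n, x, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def _tail_le(n, x, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(0, x + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion x/n.

    The lower limit solves P(X >= x | p) = alpha/2 and the upper limit
    solves P(X <= x | p) = alpha/2; each is found by bisection on p.
    """
    def bisect(f, increasing):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # 60 halvings gives ample precision
            mid = (lo + hi) / 2
            if (f(mid) < alpha / 2) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    lower = 0.0 if x == 0 else bisect(lambda p: _tail_ge(n, x, p), True)
    upper = 1.0 if x == n else bisect(lambda p: _tail_le(n, x, p), False)
    return lower, upper

lo, hi = clopper_pearson(20, 27)
print(f"20/27 = {20/27:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```

Applied to 20/27, this yields the interval of roughly 54% to 89% quoted above, illustrating how much uncertainty a point estimate of 74% conceals at this sample size.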
In closing, I would argue that the determination of study precision should be part of the planning process for all studies, not just randomized controlled trials. Such a step would strengthen both the statistical and clinical integrity of any planned study.