To blindly go where no man….

Do we really need to blind research projects? And if so, which parts of them? There is good empirical evidence that randomization must be masked: at the moment a patient is enrolled in a study, the investigators should not know which group that patient will be assigned to. That way we can avoid many selection biases.

There is much less good empirical data about the effects of masking the intervention, although it is very likely that, when it is feasible (and it is not always feasible), masking is a good way to reduce many types of bias. Unfortunately, trials are sometimes reported as being ‘double-blind’ without it being clear what that means.

One thing that can almost always be blinded, even if the intervention cannot be, is the evaluation of the outcomes. Measurement of development on standardized scales, for example, can often be performed by blinded assessors, even when the intervention (for example, giving a blood transfusion) cannot reasonably be blinded.

A new article in the CMAJ (Hrobjartsson A, Thomsen AS, Emanuelsson F, Tendal B, Hilden J, Boutron I, Ravaud P, Brorson S: Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors. CMAJ 2013, 185(4):E201-211.) compared the outcomes of RCTs in which the same outcome was assessed by both a blinded and a nonblinded assessor. There are not many such studies, as you might imagine, but they were able to find 16. Although some studies found no difference, others found substantial effects, and overall nonblinded assessment exaggerated the estimate of the treatment effect by 68%. This is a follow-on from another article by the same group, which looked at how blinding affects the assessment of dichotomous outcomes and showed that, on average, odds ratios were exaggerated by 36%.
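To make the ‘36% exaggeration’ concrete, here is a small sketch with entirely made-up numbers (not taken from either study): it computes an odds ratio from a hypothetical 2×2 table and shows what the nonblinded estimate would look like if the exaggeration is read as a ratio of odds ratios of 0.64 (that reading is my assumption, for illustration only).

```python
# Hypothetical 2x2 table -- invented numbers, purely for illustration.
# Blinded assessor's counts in a beneficial-treatment trial:
events_treat, no_events_treat = 30, 70   # treatment arm
events_ctrl,  no_events_ctrl  = 45, 55   # control arm

odds_treat = events_treat / no_events_treat   # 30/70 ~ 0.43
odds_ctrl  = events_ctrl / no_events_ctrl     # 45/55 ~ 0.82
blinded_or = odds_treat / odds_ctrl           # ~0.52: treatment looks beneficial

# One way to read a "36% exaggeration" of the OR: the nonblinded
# estimate is pushed a further 36% away from no effect (OR = 1),
# i.e. multiplied by (1 - 0.36).  This scaling is an assumption.
nonblinded_or = blinded_or * (1 - 0.36)

print(f"Blinded OR:    {blinded_or:.2f}")
print(f"Nonblinded OR: {nonblinded_or:.2f} (looks more impressive)")
```

The point of the sketch is simply the direction of the bias: the unmasked assessor's odds ratio drifts further from 1, making the treatment look better than the blinded assessment suggests.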

I was once designing a trial of blood transfusion and was trying to think of how to blind it. In the end, the idea of having blood hanging by the baby’s bedside, attached to a fake IV pump that didn’t actually work but was still plugged into the IV, was abandoned. Quite rightly, I think. But we were still able to mask the evaluation of the objective outcome.

A nice introduction to some of the major issues in evaluating research results is the paper ‘5 ways statistics can fool you—Tips for practicing clinicians’ by West and Dupras, Vaccine, March 2013. Although the discussion is slanted toward studies of vaccines, the ‘tips’ apply everywhere:

(i) consider clinical and statistical significance separately,

(ii) evaluate absolute risks rather than relative risks,

(iii) examine confidence intervals rather than p values,

(iv) use caution when considering isolated significant p values in the setting of multiple testing, and

(v) keep in mind that statistically non-significant results may not exclude clinically important benefits or harms.
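Tips (ii), (iii) and (v) can be illustrated with a quick calculation on invented numbers (not from any study cited here): a treatment that halves an already-rare risk sounds dramatic as a relative risk, but the absolute risk reduction is tiny, and the confidence interval may well include zero. The normal-approximation confidence interval below is the standard textbook formula.

```python
import math

# Hypothetical trial, 1000 babies per arm -- invented numbers.
n_treat, events_treat = 1000, 10   # 1.0% event rate on treatment
n_ctrl,  events_ctrl  = 1000, 20   # 2.0% event rate on control

risk_treat = events_treat / n_treat
risk_ctrl  = events_ctrl / n_ctrl

relative_risk = risk_treat / risk_ctrl   # 0.50: "risk halved!"
arr = risk_ctrl - risk_treat             # 0.01: one percentage point
nnt = 1 / arr                            # 100 babies treated per event avoided

# Approximate 95% CI for the absolute risk reduction
# (normal approximation for a difference of two proportions).
se = math.sqrt(risk_treat * (1 - risk_treat) / n_treat
               + risk_ctrl * (1 - risk_ctrl) / n_ctrl)
ci_low, ci_high = arr - 1.96 * se, arr + 1.96 * se

print(f"Relative risk:           {relative_risk:.2f}")
print(f"Absolute risk reduction: {arr:.3f} "
      f"(95% CI {ci_low:.3f} to {ci_high:.3f})")
print(f"Number needed to treat:  {nnt:.0f}")
```

With these numbers the relative risk of 0.50 looks striking, yet the confidence interval for the absolute risk reduction crosses zero, so the trial has not excluded ‘no benefit’ at all; equally (tip v), a non-significant result like this does not exclude a clinically important benefit.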

Rules for life!

About keithbarrington

I am a neonatologist and clinical researcher at Sainte Justine University Health Center in Montréal