Sorry to any of my readers who are offended by the ‘olde englishe’ term for solid waste matter; I could have said s**t or crap, I guess, but I am getting a bit irritated by the stream of b******s (liquid waste matter from a flying mammal) coming from the source cited.
The campaign by Drs Carome and Wolfe of Public Citizen in the USA against the SUPPORT trial continues. They again show that they know nothing about clinical research, or about neonatology, or about the difference between an arse and an elbow (a second olde englishe expression). They have tried to rebut John Lantos’s refutation of their nonsense, and again reveal that they seem to think the babies in the trial were exposed to risks that the investigators tried to keep hidden.
They again suggest (and are joined by another commenter) that a third group receiving ‘standard of care’ should have been enrolled. They are unable to understand that both groups were already within ‘standard of care’.
I added my two penn’orth (another olde englishe expression meaning worth two (old) pennies) to the Bioethics Forum. I was a bit rude, so it might get edited; if so, I will repost the unedited version here. Otherwise, go to the link and, if you feel so inclined, you can register and add your own comments.
I tried to find a place on the Public Citizen website to leave a comment, but there is nowhere to do so. Which is a little internet version 1.0, but, for an organization that purports to be ‘the people’s voice’, it is also an indication that they don’t want to hear the people’s voice.
All of which is quite a shame: the politics of Public Citizen, apart from this completely unjustified attack on SUPPORT, are very much in line with my own. They should just shut up and apologize for all this nonsense, but I am pretty sure that will not happen. It is hard to just say, ‘sorry, we were wrong, it was a great study that will improve outcomes for preterm babies, without increasing any kind of risk for the participants, carry on the good work’.
Sir, I am just curious where “standard of care” involves randomizing babies to specific SpO2 zones (85-89% or 91-95%) with the use of blinded oximeters that read 3 points above or below real, so that clinicians cannot know the actual level of SpO2? Thanks.
You are confusing two issues. One is whether the saturation ranges were both within the acceptable current standard of care. The answer to that is clearly yes: there is extensive documentation available, dating from the time the study was planned and performed, which shows that.
The second issue is how to compare two standards of care in a masked fashion, to avoid co-interventions and other biases that might invalidate a study. This is the same issue that is addressed by all masked studies. The most reliable way to perform a study is for no one to know, until the end of the study, which group a particular patient was in.
The masked oximeters were used to compare, in a masked fashion, two saturation ranges that were both within standard of care.
Of course you don’t randomize patients in normal practice; the randomization was done to compare those two ranges of saturation, both of which were in wide use.
The question that can arise, as in any blinded study, is: ‘what happens if the physician (or the patient) needs to know which group the patient is in?’ In a masked drug study, a comparison with placebo for example, that can be done by breaking the code. In the SUPPORT trial that wasn’t even necessary: the physician could just take off the blinded oximeter and use a standard device.
That actually happened for a small number of babies in SUPPORT. It is an ethical imperative that we all follow: if a study procedure appears to be harming a baby, it is stopped. Immediately. In fact it is almost never necessary to know a baby’s exact saturation, only what therapy is required to keep it in the desired range. In the years when SUPPORT was being done, the range that most neonatologists wanted to keep babies in was between 85 and 95%. Knowing that a baby had a saturation of 87% rather than 93% was considered probably unimportant, which was why the study was done. It is only because we did the study that we now have some understanding of the differential effects of being in one of those two ranges.
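For readers who find the masking mechanism hard to picture, it can be sketched in a few lines of code. This is an illustrative sketch only, not the actual SUPPORT device logic: the 3-point offset comes from the comment above, but the choice of an 85-95% masked window, the direction of the offset for each group, and the behaviour outside the window are my own assumptions for the sake of the example. The point it shows is that, within the clinically acceptable band, both groups display in the same apparent range, so staff cannot infer allocation, while extreme values are never hidden.

```python
def masked_display(true_spo2: int, group: str) -> int:
    """Saturation shown to clinicians by a hypothetical masked oximeter.

    Assumptions (illustrative only): within the masked window the display
    is offset by +3 for the low-target group (85-89%) and -3 for the
    high-target group (91-95%), so both groups read in the same apparent
    band. Outside the window the true value is shown, so dangerously low
    or high saturations are never concealed.
    """
    WINDOW_LOW, WINDOW_HIGH = 85, 95  # assumed masked window
    if WINDOW_LOW <= true_spo2 <= WINDOW_HIGH:
        return true_spo2 + (3 if group == "low" else -3)
    return true_spo2  # true reading outside the window

# A baby at 87% in the low-target group and a baby at 93% in the
# high-target group both display 90%:
assert masked_display(87, "low") == 90
assert masked_display(93, "high") == 90
# An extreme value is reported truthfully:
assert masked_display(80, "low") == 80
```

The design choice this illustrates is the one described in the reply above: clinicians treat to keep the displayed value in the desired band, and removing the masked oximeter (the equivalent of breaking the code) immediately restores true readings if a baby appears to be in trouble.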