A large multi-center trial (n=511) led by Roberta Ballard has just been published. (Ballard RA, et al. Randomized Trial of Late Surfactant Treatment in Ventilated Preterm Infants Receiving Inhaled Nitric Oxide. J Pediatr 2015.)
In this trial infants had enrollment characteristics similar to those of the NOCLD trial: babies were between 500 and 1250 grams birthweight and less than 32 weeks gestation, and they had to be receiving assisted ventilation. There were two differences from the previous study I alluded to: in NOCLD infants were 7 to 21 days old, in this trial 7 to 14 days; and in NOCLD the smallest infants could be enrolled if they were on CPAP, whereas in this trial everyone had to be intubated and ventilated.
The idea was that infants with persistent respiratory distress after a week of age have evidence of surfactant dysfunction, so perhaps if we gave them more functional surfactant they would be able to overcome this, have reduced lung function abnormalities, breathe more efficiently, and end up with less lung injury. There are a couple of pilot studies showing short-term improvements in pulmonary function and gas exchange in very preterm infants who were still receiving respiratory support at a week of age and who received surfactant. So the investigators thought that a big RCT examining clinically relevant outcomes was warranted.
Which I think is fine. This was a reasonable question to ask, and a reasonable, clinically important outcome to investigate (especially with a local treatment very unlikely to have systemic adverse effects). Given the previous data on inhaled NO in a very similar group of babies (in whom a secondary analysis suggested that treatment was more effective in the earlier part of the postnatal age range, i.e. 7 to 14 days), you can't fault the investigators for using iNO in all the babies, even if the as yet unpublished NewNO trial did not show a benefit.
All the babies received inhaled NO according to the NOCLD protocol; surfactant was given to a randomly assigned half of them.
But I can’t tell you how much surfactant was given, or with what frequency. A major problem with this report of the study is that I can’t figure out exactly what the intervention was, which is a big problem. The investigators went to great (and probably unnecessary) lengths to mask the procedure, with a separate team, not otherwise involved in clinical care, who gave the surfactant (or didn’t) into the ETT behind screens. But they don’t actually say what dose was given.
Babies in the study got a dose of surfactant (or a sham procedure), but as I said, the study report doesn’t even say how much they got (it was “standard clinical doses”) or how often (every 24 to 72 hours if they remained intubated, starting 48 hours after the first dose, with a maximum of 5 doses; but 24 to 72 is a huge range…). They don’t say what the criteria were for retreatment, or for not retreating and extubating, etc. Several guidelines are presented for steroids, for re-intubation, and so on, but not for surfactant/sham administration.
Table 3 of the results does show that about 80% in each group got 5 doses (of either surfactant or standing behind a curtain).
There was no benefit shown. Nothing, not even a whisper of a hint of a benefit. Which is disappointing, but at least seems at first look rather definitive. Or at least it would be definitive if we knew what the intervention group had received.
Even though surfactant dysfunction is a real problem in these babies, giving them additional Infasurf according to this uncertain schedule isn’t sufficiently effective to improve their outcomes.
This does, I think, help to improve care (as there is no longer any stimulus to give surfactant to babies at this age), but it would have been much more useful, after what is probably several million dollars of investment, to know exactly what was done.
All we know is that giving some dose of surfactant (Infasurf), mostly 5 of those doses, didn’t reduce BPD or death, with a certain degree of confidence (see below).
To return to a comment I made above: why did I say that the sham procedure was unnecessary? Masking the intervention has become an essential feature of neonatal (and much clinical) research in order to get good funding; however, there is actually little empirical evidence that blinding/masking the intervention makes much difference to the size or direction of its effect, particularly if objective outcomes are being studied. Diagnosis of BPD, if the “physiologic” definition is being used, is relatively objective, and is unlikely to be influenced by knowledge of an intervention performed several weeks earlier.
I think if it is relatively easy, and relatively inexpensive, to mask an intervention (an orally administered drug, for example), then go for it; there is often no good reason not to do so. But having an on-call surfactant or sham administration team going through the ritual of masking used in this study will have enormously increased the cost. They could (I guess) have studied twice as many babies for the same cost, and had a much better estimate of the size of the effect, or of the confidence with which we can exclude a benefit or risk. Many of the original surfactant trials for treatment of HMD were masked in a similar fashion, but not all, and there is no clear difference in the estimates of efficacy between those that were masked and those that were not.
Which brings me neatly to the final comment: the study was stopped by the DSMB, “based on a determination that the study treatment is very unlikely to demonstrate efficacy.” They actually made this determination when they had outcome data on 301 infants. There is a lot of debate about stopping trials early for futility; one paper in Critical Care (freely available on-line) is actually a real debate. But I am a bit mystified in this case: when the decision to stop the trial was taken they had already randomized 511 of the planned 524 babies. One of the justifications for early stopping for futility is that it saves wasting money, which clearly isn’t an issue here. But even with all the data from 511 babies available there is still major uncertainty about whether this intervention is actually futile; the 95% CI for death or BPD includes a 25% increase or decrease in that outcome. Which is huge, and clinically important. A 25% reduction (or increase) in death or BPD is something I would be interested in.
When the DSMB recommended stopping the trial they only had data from 301 babies, which means the confidence for saying there is no benefit (or harm) was extremely lacking in, er, confidence. Depending on how you calculate it (and assuming both groups had a 40% incidence of death or BPD at that point), when they stopped, the trial showed that the likely real difference in that outcome was anywhere between about a 35% increase and a 35% decrease in risk of death or BPD. The sample size for the study was based on a hypothesized 13% change in the incidence of “death or BPD”, so why would the trial be stopped early when the confidence intervals included the hypothesized difference?
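For the curious, those confidence intervals are easy to check on the back of an envelope. Here is a minimal sketch of the standard log-relative-risk calculation, under my own simplifying assumptions (40% incidence of death or BPD in both groups, an even split of babies between groups); the `rr_ci` helper is my illustration, not anything from the paper:

```python
import math

def rr_ci(p1, p2, n1, n2, z=1.96):
    """95% confidence interval for the relative risk p1/p2,
    using the usual standard error of log(RR)."""
    rr = p1 / p2
    se = math.sqrt((1 - p1) / (p1 * n1) + (1 - p2) / (p2 * n2))
    return rr * math.exp(-z * se), rr * math.exp(z * se)

# Assumed: ~40% incidence in each group, babies split evenly.
for n_total in (300, 511):
    n_per_group = n_total // 2
    lo, hi = rr_ci(0.40, 0.40, n_per_group, n_per_group)
    print(f"n={n_total}: 95% CI for RR roughly {lo:.2f} to {hi:.2f}")
```

With these assumptions the interval comes out at roughly 0.81 to 1.24 for 511 babies and roughly 0.76 to 1.32 for 300, in the same ballpark as the figures I quoted above; the exact numbers depend on the true event rates and group sizes, which is why I said "depending on how you calculate it."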