Quality of life cannot be predicted from a brain scan

…either ultrasound or MRI, or by EEG, or neurological examination, or even during follow-up by screening for disabilities.

That title is from a recently published editorial (Fayed N, et al. Quality of life cannot be predicted from a brain scan. Dev Med Child Neurol. 2020;62(4):412) which is available full-text open access, and which includes this pearl:

Even though levels of cognitive and motor problems can often be [predicted] based on magnetic resonance imaging results, abnormal electroencephalogram findings, and a neonate’s hospital course, the happiness and acceptance a child will achieve in their families and communities cannot.

I would actually argue that none of those 3 methods can be used to predict cognitive or motor problems with any reliable degree of certainty. The positive predictive value of white matter injury on MRI for disabling cerebral palsy, for example, is LESS THAN 50%.

Even if pre-discharge imaging were perfectly predictive of impairments, which is far from being the case, being impaired does not imply a poor quality of life. There is very little correlation between quality of life and whether or not an individual is impaired. As these authors note:

disability severity has little relationship to life quality. Instead, emotional well-being, peer interactions, parental adaptation, and community support are much more powerful predictors of whether a child is likely to grow up to have a good life. When conveying a prognosis of severe disability and its consequences to child and family, the solution is a simple one. Refrain from confounding the concept of a good QoL with the prognosis of cognitive or physical disability.

We perform many investigations to try and predict the outcomes of our patients, sometimes with the idea that we should change the intensity of our care based on the results.

When you state the issue as clearly as these authors did in the title of their article it becomes almost self-evident; of course you cannot predict quality of life by looking at the brain. And if you cannot, then why are we doing so many scans?


What happened to the HeROs?

I had to find a way of changing HeRO to Heroes as an excuse for posting a link to this video.

But the results of long-term follow-up of the HeRO trial have also been published. The original trial (Moorman JR, et al. Mortality reduction by heart rate characteristic monitoring in very low birth weight neonates: a randomized trial. J Pediatr. 2011;159(6):900-6 e1) was in babies of less than 1500 g. That trial found, of course, that babies who had their heart rate characteristics index displayed to the caregivers had a lower mortality than babies on the same monitors for whom the index was hidden. Further analysis of the data from that trial showed that mortality was lower only among those infants who actually had late-onset sepsis, and specifically within 30 days of a sepsis episode. Presumably, this is because sepsis episodes were detected sooner, and appropriate therapy started earlier. The improved survival after sepsis is illustrated in this figure:

Organism-specific mortality based on heart rate characteristics (HRC) monitor display… Survival was higher in each organism group in infants with HRC displayed (solid line) compared with those with HRC not displayed (dashed line).

If that explanation of the results is true, you might also hope to find a reduction in long-term adverse outcomes. This new publication (Schelonka RL, et al. Mortality and Neurodevelopmental Outcomes in the Heart Rate Characteristics Monitoring Randomized Controlled Trial. J Pediatr. 2020) investigated the developmental progress and neurological signs of a subgroup of the survivors: those with a birth weight under 1000 g who were born in one of the 3 hospitals that contributed the most to enrolment, which were also centres with established expert follow-up. I want to repeat a comment I made on another recent post: the last of these babies was enrolled in May 2010, and would therefore have completed their Bayley version 3 and neurological exam at 18 to 22 months corrected age by July 2012 at the latest. Why 7 years to publish these important data?

Survival in this subgroup of 638 infants was higher in the group with the HeRO score displayed, 76%, compared to 68% with the monitors hidden, relative risk of death 0.75 (95% compatibility limits 0.59-0.97).

Among surviving infants, the developmental and neurological evaluation showed the following:

Neurological abnormality or developmental delay, survivors only

| Outcome | Displayed n/N (%) | Hidden n/N (%) | RR (95% CI) |
|---|---|---|---|
| Overall proportion with at least one abnormality | 48/242 (19.8) | 37/206 (17.9) | 1.10 (0.75-1.63) |
| GMFCS level 2-5 (moderate/severe CP) | 23/246 (9.4) | 13/210 (6.2) | 1.51 (0.78-2.9) |
| Bilateral blindness | 4/247 (1.6) | 0/210 (0) | 0 (0-0) |
| Deafness | 11/248 (4.4) | 1/210 (0.5) | 9.31 (1.21-71.55) |
| Bayley cognitive <70 | 23/243 (9.5) | 15/207 (7.3) | 1.31 (0.70-2.43) |
| Bayley language <70 | 36/241 (14.9) | 28/206 (13.6) | 1.01 (0.69-1.74) |

As you can see, there are not many differences between the 2 groups, and the small differences all favour the control group. The one striking exception is deafness, which was surprisingly much more frequent in the monitor-displayed group.
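For the curious, that deafness relative risk and its very wide interval can be reproduced with the standard log-scale (Katz) method. This is a minimal sketch, and I am assuming this is essentially the calculation behind the table; the paper's exact statistical approach isn't described here.

```python
from math import log, exp, sqrt

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk with a standard log-scale (Katz) confidence interval.
    a/n1 = events/total with HRC displayed, b/n2 = events/total with HRC hidden."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # standard error of log(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# Deafness row of the table: 11/248 displayed vs 1/210 hidden
rr, lo, hi = relative_risk(11, 248, 1, 210)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # ~9.31 (1.21 to 71.6)
```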

Because there were more survivors in the monitored group, you can, if you wish, express the data as ‘death or severe CP’, ‘death or blindness’, and ‘death or a lowish cognitive score on the Bayley’; those results are highlighted in the abstract, but regular readers of this blog will know my opinion about such composite outcomes. Without trying to massage the data to find an outcome which is “statistically significantly” improved in the monitoring group, I think we can be reassured that there were more survivors in the monitored group and that they had very similar outcomes to the controls. The authors of this study have done a lot of great work, on this project and many others, and I have a great deal of admiration for them, but I don’t understand why torturing the data to find a combination of outcomes with a p-value less than 0.05 in favour of the HeRO system was thought to be so important. Improved survival with very similar long-term outcomes is surely enough evidence on which to base decisions about an intervention, and in this case it shows that HeRO is the way to go.

As I also mentioned recently, I don’t think there is any intervention in neonatology that has increased survival and also worsened long-term outcomes, and, most importantly, no intervention that increases survival only of babies with a future quality of life that is worse than being dead. Surveillance of long-term outcomes in a trial such as the HeRO trial, and timely publication, are important for quality assurance, to ensure that we optimize interventions, and to continue the enormous progress we have made in neonatology.

Before anyone comments that SUPPORT showed increased survival with worse retinopathy in the higher saturation group, that is true, but blindness was not different between groups, and the NeoProm group showed no adverse impact of higher saturations on any long term outcome despite better survival in the high saturation group.


CRP can suggest that babies are not infected, when you already know!

I wrote a blog post about 3 years ago about a study examining procalcitonin use in neonatal early-onset sepsis. You can see from my post that the authors didn’t, to my mind, show any utility of procalcitonin (PCT) either alone or in addition to the CRP for diagnosis of EOS. They have just published a secondary analysis of the trial (Stocker M, et al. C-Reactive Protein, Procalcitonin, and White Blood Count to Rule Out Neonatal Early-onset Sepsis Within 36 Hours: A Secondary Analysis of the Neonatal Procalcitonin Intervention Study. Clin Infect Dis. 2020) which shows the following:

Normal serial CRP and PCT measurements within 36 hours after the start of empiric antibiotic therapy can exclude the presence of neonatal EOS with a high probability. The negative predictive values of CRP and PCT do not increase after 36 hours

Which is all well and good, but not much use. Blood cultures are almost always positive by 36 hours, so by the time the PCT and CRP are useful you already know if the baby has sepsis or not! The actual time to positive cultures has just been reviewed (Marks L, et al. Time to positive blood culture in early-onset neonatal sepsis: A retrospective clinical study and review of the literature. J Paediatr Child Health. 2020;56(9):1371-5). Using the Bactec system, they found that 98% of positive blood cultures in babies with EOS were positive at less than 24 hours, and the only one that was positive later was taken after antibiotics had been started. In their review of the literature, blood cultures for EOS were positive by 24 hours in 92% to 100% of cases. In my practice, we now stop antibiotics if cultures are negative at 36 hours. The idea is that in the rare case of a culture becoming positive between 36 and 48 hours we can restart the antibiotics without actually missing a dose, but the dose which would normally have been given at 48 hours is avoided if the cultures are negative. Given this new publication, we can probably stop even earlier, at least for EOS, and limit antibiotic courses to one or two doses for the majority of babies who are screened but do not have EOS.

The Bactec system, and other similar systems, is extremely sensitive to even very low bacterial counts as long as 1 mL of blood is used. The machines screen the culture medium continuously, and an alarm bell rings in the lab when a bottle becomes positive, bringing a laboratory technician scurrying over to get the result and phone it to the NICU. I actually don’t know how it all works, but that is the image I have in my mind. We have a very efficient lab that always telephones when a blood culture is positive, but just as a backup we ensure that someone checks with the laboratory directly before stopping antibiotics. Reducing unnecessary antibiotic use is an important goal; this most recent publication again fails to show that CRP or procalcitonin measurements, single or repeated, assist in achieving that goal.


Breast milk fortifiers, a new systematic review

A systematic review has just been published which compares the outcomes of milk fortification with bovine-milk derived fortifier and human-milk derived fortifier. (Grace E, et al. Safety and efficacy of human milk-based fortifier in enterally fed preterm and/or low birthweight infants: a systematic review and meta-analysis. Archives of Disease in Childhood – Fetal and Neonatal Edition. 2020:fetalneonatal-2020-319406)

The main conclusion is that the evidence is very weak, but I think that even that exaggerates the quality of the evidence! The extensive literature search, using terms designed to select randomized trials in which one group received bovine milk-derived fortifier (BMDF) and the other received human milk-derived fortifier (HMDF), led to the inclusion of two trials with a total of only 332 infants. Unfortunately, those 2 trials studied different interventions, and in my mind should not have been meta-analysed.

We already know from published data that using artificial formula, rather than pasteurized donor human breast milk, increases Necrotising Enterocolitis. That is so whether the formula was used as a supplement to insufficient maternal breast milk, or as an alternative for babies not receiving maternal milk. Here is the relevant figure from the Cochrane review (Quigley M, et al. Formula versus donor breast milk for feeding preterm or low birth weight infants. Cochrane Database Syst Rev. 2019;7:Cd002971) for the outcome NEC.

(I have mentioned before that if you want to access the neonatal Cochrane Reviews full-text, free of charge, you can do so via this Vermont Oxford Network page; if you find the review you are interested in and click on the link, the Cochrane Library page somehow knows that Vermont sent you, and VON supports universal access to the neonatal reviews.)

So given that we already know this with a moderate degree of certainty, any study which tries to determine the importance of the type of fortifier on NEC, or other outcomes, should compare only the fortifier, and ensure that the milk received was human milk (maternal or donor).

But one of the 2 trials included in the new SR was Sullivan S, et al. An Exclusively Human Milk-Based Diet Is Associated with a Lower Rate of Necrotizing Enterocolitis than a Diet of Human Milk and Bovine Milk-Based Products. The Journal of Pediatrics. 2010;156(4):562-7.e1. In that trial there were 3 groups: one received BMDF with artificial formula as the supplement to breast milk, while the two groups who received HMDF also received donor human milk as the supplement to mother’s own milk. So it was not a trial of human milk-derived fortifier alone, but a trial of HMDF with donor breast milk supplements, compared to BMDF with artificial formula supplements.

In fact, if you work in a centre that has access to pasteurized donor human milk it is unethical to randomize infants to receive artificial formula as a supplement.

The only justification for giving a preterm baby at risk for NEC artificial formula, rather than donor human milk, if it is available, is parental refusal. And even then, if a parent refused for a baby at very high risk, I think it is questionable whether such a refusal should be accepted.

The other trial (O’Connor DL, et al. Nutrient enrichment of human milk with human and bovine milk-based fortifiers for infants born weighing <1250 g: a randomized clinical trial. Am J Clin Nutr. 2018;108(1):108-16) was very different from Sullivan et al: in that study all the babies received breast milk, either maternal or donor, and were randomized to either HMDF or BMDF (in this case powdered BMDF). This study only included 127 babies, so it didn’t have much power to show a difference in NEC, but in fact the 2 groups had exactly the same number of cases of proven NEC.

The evidence has never shown that adding a powdered multi-component fortifier to mother’s milk has an adverse impact on Necrotising Enterocolitis rates, and until recently the only fortifier available was BMDF. That doesn’t mean we have good evidence that they are definitely safe; the Cochrane systematic review shows that the studies that looked directly at the issue only had a rate of NEC of about 2.5%. There was no evidence of a higher risk with the fortifier, but they note that the evidence is weak. The relative risk of NEC comparing fortifier to no fortifier was 1.37 (95% limits 0.72 – 2.63), which means of course that there remains a possibility that fortification substantially increases NEC.

Why not just switch to HMDF anyway? The available HMDF is a liquid, and as such it dilutes the mother’s own milk that is given to the baby: standard fortification, for example, adds 10 mL of the liquid fortifier to each 40 mL of mother’s milk, so a fortified feed is only 80% mother’s milk by volume.

The ways the various products are produced are also quite different. For example, the pasteurization of Prolacta is done by a vat method, which destroys some of the beneficial proteins (such as lactoferrin) in the human milk. Local milk banks usually use Holder pasteurization, which has much less effect on those proteins, and some banks use very short-time, higher-temperature pasteurization, which is probably even better.

I also don’t think there is any good reason to believe that the increase in NEC seen with artificial formula is because of the source of the protein; it could well be other features of preparation, sterilization, manipulation, etc. Preterm newborns extremely rarely have evidence of cow’s milk protein intolerance; in fact, foreign proteins usually induce tolerance when you give them to preterm infants. The focus on where the proteins in the milk come from may be entirely misleading. If we concentrated on why formula increases NEC compared to human milk we might gain some further insights into the pathophysiology of the disease.

Human-milk based fortifier is extremely expensive, but even if it cost the same as the BMDF I think there should be robust evidence before switching to using it. It will require diversion of a significant proportion of our currently available breast milk to create enough fortifier for every baby, and it will reduce the amount of the mother’s own breast milk that a baby receives, by dilution.

In summary, the only data that compare BMDF to HMDF in babies receiving maternal breast milk supplemented with donor milk when required (the current standard of care) do not show any difference in Necrotising Enterocolitis. Given the small sample size of that trial and the importance of NEC, I think that performance of a large multi-centre trial is urgent. It should be performed in infants at significant risk who are also receiving all evidence-based preventive methods, including multicomponent probiotics and feeding protocols.


Evidence-Based Neonatology: more and more evidence, better and better care

For another project, which I will explain later, I have been trying to find recent large multicentre trials in very preterm babies. I searched PubMed for “Randomized Controlled Trial” and “Multicentre Study”, then filtered by Human and Newborn and Preterm, and just looked at the last 10 years (a sketch of that search strategy appears below). So far, here is the impressive list of acronyms for the trials, in approximately reverse chronological order. See how many you can recognize! There will be a prize for anyone who gets them all!

SAIL, ETTNEO, MOBYDICK, LIFT, PENUT, SIFT, PREMOD2, RAINBOW, CORD PILOT, STOP-BPD, PLANET-2, PROPREMS, Reduce-ROP, CPAP-wean, NEUROSIS, HUMID, NEWNO, APTS, N3RO, PHELBI, PIPS, rhEPO, NEON, PREMILOC, BOOST2 UK, BOOST2 Aus, COT, SafeBoosc2, TENS, COIN, SUPPORT, TIPP, CAP, PINT, ELFIN, ADEPT, STOP-ROP, ETROP, BEAT-ROP, CRYO-ROP, VON-DR, INIS

The prize I have in mind is my admiration! There were one or two for which I had to search hard to find the acronym, and there are a couple of trials which aren’t in the list because I could not find an acronym at all (such as the budesonide/surfactant trial).
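For anyone who would like to reproduce or update the search programmatically, here is a rough sketch using Biopython's Entrez utilities. The field tags, MeSH terms, and date range are my approximation of the filters described above, not the literal search string.

```python
# A rough reconstruction of the PubMed search using Biopython's Entrez module.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact email address

query = (
    '"Randomized Controlled Trial"[Publication Type] '
    'AND "Multicenter Study"[Publication Type] '
    'AND "Humans"[MeSH Terms] '
    'AND "Infant, Newborn"[MeSH Terms] '
    'AND "Infant, Premature"[MeSH Terms] '
    'AND ("2010"[Date - Publication] : "2020"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")
print(record["IdList"][:10])  # the first few PubMed IDs
```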

The reason I have been trying to list all the recent large trials is to see if I can develop a database of evidence-based treatments for the most immature babies that we treat, those of 22, 23 and 24 weeks gestation. The physiological immaturity of these babies is so extreme that, for some interventions, their responses may well differ. I am searching to see which trials included such immature babies, and whether any data are presented for infants under 25 weeks. So far… not much, but when I complete the search I will write it up as a publication, and let you all know on this blog.

If any readers know of a large RCT which included extremely immature babies and which is not on the list, please let me know, especially if data are presented for the gestational age stratum <25 weeks.


My new watch has a pulse oximeter?!

I recently bought a smartwatch; I won’t say which model, because what I found is now available on several makes of watch. I discovered, when playing around with the apps, that there was one which claimed to measure blood oxygen levels. After clicking “go” I received a measurement about 30 seconds later, which was probably inaccurate: I usually have a saturation of about 98% at sea level, and it only read 92%.

Which made me wonder what is the purpose of this?

It also made me realize that I have been doing neonatology a long time; I was around when pulse oximeters were invented and I published one of the first studies to evaluate their use in the NICU (Barrington KJ, et al. Evaluation of pulse oximetry as a continuous monitoring technique in the neonatal intensive care unit. Crit Care Med. 1988;16(11):1147-53). I even studied using them in rabbits! (Barrington KJ, et al. Pulse oximetry during hemorrhagic hypotension and cardiopulmonary resuscitation. J Crit Care. 1986;1:241-6). I wondered if they would be useful during low perfusion states, and I thought that a pulse oximeter would be great during cardiac massage, as you would be able to tell if you were achieving pulsation at the site it was placed, and also what the saturation of the blood being delivered would be.

The low perfusion part of the rabbit study was interesting: the oximeter remained accurate until there was very little perfusion, and then it just stopped, which was an improvement on the previous technology of transcutaneous PO2 monitoring, which becomes progressively inaccurate as perfusion falls.

But when performing CPR on the rabbits after inducing cardiac arrest I was initially very excited when it seemed to work well, and routinely gave a saturation of about 85% with the same frequency as the cardiac massage! Wow, publication in Nature on the way, I thought. Then I realized that when you do cardiac massage on an adult rabbit, the front legs, where I had placed the probe, move, a lot. You’ll probably all have to take my word for that unless you happen to have tried to resuscitate a rabbit. So I then stopped doing the massage and just rhythmically shook the rabbit’s paw; it continued to give a nice beeping sound and a saturation around 85%. I then put the probe on a piece of red rubber tubing that was lying around, shook that rhythmically, and found the same thing.

That was my introduction to movement artefact in pulse oximetry. It also got me thinking about how oximeters function and details of their design (I’ll get back to the watch soon).

Pulse oximeters work by shining light of 2 different wavelengths onto a tissue and measuring the relative absorption of the light at those 2 wavelengths. Clinical pulse oximeters do this with transmitted light, whereas my watch is obviously doing it with reflected light. Most clinical pulse oximeters use one red and one near-infrared wavelength, on either side of an isosbestic point, that is, a point at which the absorption spectra of deoxygenated haemoglobin and oxyhaemoglobin cross. As long as you have 2 wavelengths with different relative absorptions by oxygenated and de-oxygenated blood it will work, but by using 2 wavelengths whose relative absorptions are inverted on either side of that point you can make the calculations more accurate.

It was a Japanese engineer Takuo Aoyagi (who died earlier this year aged 84) who realized in the 1970s that the pulsations he was seeing in his signals were entirely from arterial blood, and so if he screened out the constant part of the signal, and only analyzed the pulsatile part of the signal, he could calculate the proportion of pulsatile haemoglobin that was oxygenated or de-oxygenated.

That is why movement will give you an apparent signal: there are fluctuations in the light absorption. It also explains why the specific pulse oximeter I was using read 85%: at the 2 wavelengths that the company used (which differ slightly between manufacturers because of patent issues), a 1:1 ratio of pulsatile light absorption corresponded to 85% of the haemoglobin being oxygenated and 15% being de-oxygenated.
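To make that concrete, here is a minimal sketch of the classic “ratio of ratios” calculation. The linear calibration SpO2 ≈ 110 − 25×R is a commonly quoted textbook approximation, not the curve used by my old oximeter or by any particular manufacturer, but it reproduces the 85% reading when the two pulsatile absorptions are equal.

```python
import numpy as np

def ratio_of_ratios(red: np.ndarray, ir: np.ndarray) -> float:
    """Pulse oximetry 'ratio of ratios': pulsatile (AC) over constant (DC)
    absorption at each wavelength, then the red/infrared ratio of the two."""
    ac_red, dc_red = red.max() - red.min(), red.mean()
    ac_ir, dc_ir = ir.max() - ir.min(), ir.mean()
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(r: float) -> float:
    """Commonly quoted empirical calibration line (illustrative only;
    every manufacturer fits its own curve): SpO2 ~ 110 - 25*R."""
    return 110 - 25 * r

# Synthetic example: identical pulsatile waveforms at both wavelengths,
# e.g. pure movement artefact on a piece of red rubber tubing.
t = np.linspace(0, 1, 100)
signal = 1.0 + 0.02 * np.sin(2 * np.pi * 2 * t)  # 2 Hz "pulse" on a DC baseline
r = ratio_of_ratios(signal, signal)
print(r)                  # ~1.0
print(spo2_estimate(r))   # ~85.0
```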

When pulse oximeters were used under anaesthesia, movement artefact was not a big deal, but continuous monitoring of moving patients required progressive improvements in technology to reduce the artefacts, which are still a problem for very active patients.

Also, it is worth remembering that it is the pulsatile part of the signal which is being analyzed, so if you have venous pulsation it will interfere with the result. I had a patient recently with severe pulmonary hypertension whose pre-ductal saturation was often 5-8% lower than the post-ductal, even though the ductus was closed. On the echocardiogram there was tricuspid regurgitation, which I think was causing venous pulsation and erroneously low oximeter readings in the upper limb, but was not severe enough to be transmitted to the foot. In the past, we sometimes had an oximeter integrated into the monitor at one site, and a stand-alone monitor for the second site. Because the technologies differ between machines, sometimes you could change the gradient just by switching the probes around!

To get back to the watch, I am not really sure that this is a good idea; I also don’t know if it is accurate. It uses reflected rather than transmitted light, the wavelengths may not have been chosen specifically for oximetry, and I have no idea whether it can account for methaemoglobin, carboxyhaemoglobin, fetal haemoglobin, and so on.

I can imagine that many people who find their saturation a bit low, like mine, will freak out and phone their doctor or go to the Emergency Room, rather than shrugging it off, as I did, as probably inaccurate. We don’t need extra pressure on medical services right now! I read something about it perhaps being useful to detect sleep apnea, but for that you would have to have it in continuous mode (if that exists) and wear it while you are asleep, which my watch battery would have a problem with; it would be very low the next morning.

The manufacturers, of course, come up with some weasel words about the device “not being intended for medical use, including self-diagnosis or consultation with a doctor” and being “only designed for general fitness and wellness purposes”, but that just sounds like the usual get-out-of-jail-free statements that health supplements use.

Now, how can I get my saturation higher? Maybe if I take high dose vitamin D, or find somewhere to insert a jade egg… hmmm.


Does LISA protect your brain?

A few years ago now, a multicentre RCT among infants of 23 to 26 weeks gestation, NINSAPP, showed that LISA was possible in even these most immature infants.

Kribs A, et al. Nonintubated Surfactant Application vs Conventional Therapy in Extremely Preterm Infants: A Randomized Clinical Trial. JAMA Pediatr. 2015;169(8):723-30. 211 infants were randomized if they were stabilized on CPAP at 10 to 120 minutes of age and were needing more than 30% oxygen. The original publication was a “negative trial” in that the primary outcome (survival without BPD) was not very different between groups; although it was more frequent with LISA than with intubation for giving surfactant (67% compared to 59%), the risk difference of 8% could have been due to chance (95% compatibility limits: a 21% reduction to a 5% increase of “death or BPD” with LISA).
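To see where those compatibility limits come from, here is a minimal sketch using a standard normal-approximation interval for a difference in proportions, expressed for the complementary outcome “death or BPD”. The arm sizes are my assumption (roughly equal halves of the 211 infants), not the exact numbers from the paper.

```python
from math import sqrt

# Assumed arm sizes: roughly equal halves of the 211 randomized infants
n_lisa, n_control = 108, 103
p_lisa, p_control = 0.33, 0.41  # "death or BPD" = 1 - (survival without BPD)

rd = p_lisa - p_control  # risk difference, about -0.08
se = sqrt(p_lisa * (1 - p_lisa) / n_lisa + p_control * (1 - p_control) / n_control)
low, high = rd - 1.96 * se, rd + 1.96 * se
print(f"RD = {rd:.2f}, 95% compatibility limits {low:.2f} to {high:.2f}")
# roughly -0.21 to +0.05: a 21% reduction to a 5% increase, as quoted above
```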

One finding of the study was that almost all of the 23 and 24 week babies randomized to LISA were intubated later (14/15 at 23 weeks and 24/26 at 24 weeks), as were 3/4 of the 25 week and 1/2 of the 26 week infants. The eventual median duration of mechanical ventilation was therefore only 2 days different between the groups. Of note, the incidence of severe intracranial bleeding (grade 3 and 4 IVH) among the controls was 22%, and of cystic PVL 11%. Both of these frequencies are very high, and much higher than in the LISA group (10% for severe IVH and 4% for PVL). In recent years in the Canadian Neonatal Network the combined incidence of severe IVH and PVL has been between 17 and 19% for babies of 23 to 26 weeks GA; even allowing for some overlap in the NINSAPP babies, some of whom might have had both IVH and PVL, the frequency of serious brain injury among their controls was much higher than among our intubated babies. Did they by chance have a group of controls who had more brain injury than usual? Or was it truly the impact of LISA? Or was it because the routine was to perform intubation without pre-medication in the control group, which causes major hemodynamic fluctuations and makes multiple intubation attempts much more likely?

It is hard to imagine how the occurrence of cystic PVL would be affected so dramatically by 2 fewer days of mechanical ventilation.

Long-term follow-up of the infants has just been published (Mehler K, et al. Developmental outcome of extremely preterm infants is improved after less invasive surfactant application (LISA). Acta Paediatr. 2020). 156 babies were evaluated at 2 years corrected age (86% of the survivors). Strangely, these data are already 5 years old, as babies were recruited up to 2012, so the last follow-up would have been in 2015.

The primary outcome of the follow-up study is vaguely defined as “neurodevelopment”; the authors refer to the Bayley version 2, but do not mention a neurological exam.

Disability was defined if the mental development index (MDI) or psychomotor development index (PDI) was <85 but ≥70, severe disability was defined for MDIs or PDIs <70. Indices between 85 and 115 indicated normal development, indices >115 were defined as development above average. Developmental delay referred to any MDI or PDI <85.

This is the main table of the results, which shows a very high frequency of what they call “severe disability” among infants of 25 and 26 weeks GA randomized to the intubation group. Firstly, I would like to reiterate that a low score on a Bayley is not a disability. The Bayley Scales of Infant Development are a screening tool meant to identify children who require further evaluation; many of those with low scores at 24 months do not have impairments when evaluated later.

A systematic review of LISA was published in 2017; it included 6 trials (Aldana-Aguirre JC, et al. Less invasive surfactant administration versus intubation for surfactant delivery in preterm infants with respiratory distress syndrome: a systematic review and meta-analysis. Arch Dis Child Fetal Neonatal Ed. 2017;102(1):F17-F23) and did not show an impact of LISA on intraventricular haemorrhage or PVL, but the other trials included very few infants at high risk, mostly specifically excluding infants below a certain GA.

Is it possible that LISA (or alternatively MIST, minimally invasive surfactant treatment) protects the brain? I would say that the data from NINSAPP are unconvincing; it was a well-performed study, but it was too small, with an unusually high incidence of brain injury in the control group, and non-optimal intubation practices in those infants. Slightly delaying intubation in 23 and 24 week infants, and performing the procedure after the early perinatal hemodynamic changes, may have some benefits. But it seems to me inherently unlikely that such a big difference in ultrasound brain injury findings and in longer-term developmental scores would result from avoiding intubation altogether in 25% of infants (mostly those of 25 and 26 weeks) and a median of 2 fewer days of mechanical ventilation.

It would be good to be proved wrong (I think that happened once before 😁). I am afraid to say it: more research is needed to confirm or question these findings.

Also, I know most of us are too busy, and there are multiple reasons why publications get delayed, but reporting dramatic differences in outcomes 5 years later does a disservice to our community. Earlier reporting of these results could have helped to ensure that other trials get funded, and that other researchers include longer-term neurological and developmental outcomes in their study designs.


Delayed cord clamping in the very preterm

I haven’t written about this issue in a while. The APTS trial, and the systematic review which was published at about the same time, appeared to show definitively that there was a reduction in mortality with delayed clamping compared to immediate clamping in very preterm infants. The mechanism is still uncertain; the common individual causes of mortality (NEC, late-onset sepsis, severe intracranial bleeding, and lung injury as defined by O2 need at 36 weeks) are not clearly affected by delayed clamping in most of the trials, nor in the meta-analyses, including the Cochrane review, so how delayed clamping decreases mortality remains a question.

Delayed cord clamping should be standard of care for very preterm, moderately preterm, late preterm and full-term infants. In other words, for everyone. In full-term infants, there is no impact on mortality, of course, but iron status and developmental outcomes are improved.

The majority of the evidence with regard to preterms is from studies of delayed cord clamping in which the umbilical cord was clamped early if the infant was considered to need immediate intervention. The alternative, more physiological, approach, in which clamping is delayed until after breathing is established, has a lot to recommend it on the basis of physiology and animal research, but in terms of a clinical evidence base it has not yet been clearly shown preferable, and it requires extra equipment and training to be able to give positive pressure ventilation while the baby is still attached to the placenta. In fact, I think one of the benefits of delayed cord clamping is that it keeps people like me away from the baby: I have to stand far from the baby, wielding my laryngoscope in my right hand and the face mask and T-piece resuscitator in the other, while the baby makes some spontaneous efforts, the obstetrician suctions the airway, the baby wriggles around, and no one tries to take the heart rate or place a pulse oximeter. To be less facetious, I think negative intrathoracic pressure from spontaneous respirations has much to recommend it over positive pressure from an external source, although data from lambs suggest that inspiratory efforts may actually decrease umbilical venous blood flow.

One outcome which was not included in the currently available SRs (including the Cochrane review) is long-term neurological and developmental outcome. I am not suggesting that the studies should have examined “death or disability”! If mortality is decreased, then, to my mind, there would have to be an at least equivalent increase in very profound disability to counterbalance the improved survival, and therefore to have an impact on the decision to institute universal delayed clamping.

That would be a truly surprising result if it occurred, and unique in the history of neonatology; almost all of our patients have a quality of life which is somewhere between acceptable and excellent. An intervention which increased survival, but only of patients whose quality of life was worse than being dead, has never happened.

The longer-term outcomes of the CORD PILOT trial were published this year (Armstrong-Buisseret L, et al. Randomised trial of cord clamping at very preterm birth: outcomes at 2 years. Arch Dis Child Fetal Neonatal Ed. 2020;105(3):292-8). This is the follow-up of a delayed clamping trial in which the initial stabilisation procedures were supposed to take place with the cord intact (Duley L, et al. Randomised trial of cord clamping and initial stabilisation at very preterm birth. Arch Dis Child Fetal Neonatal Ed. 2018;103(1):F6-F14).
The intention was that the intervention group babies would be placed on a flat surface right next to the mother, and initial steps of the NRP performed before clamping the cord, which was planned to be after at least 2 minutes. Babies were eligible if they delivered before 32 weeks; only a few were the most immature <26 weeks (n=35). Some of them were intubated while still attached to the cord, and one even had an umbilical catheter inserted before cord clamping.

The planned clamping delay actually happened in almost 60% of the babies randomized to that group. The remaining 40% were clamped earlier: about half of them because the cord was too short, 12 because of a “clinical decision”, and the rest for largely unavoidable reasons, such as the baby being born with the placenta, a large abruption, or a rupture of the cord. There were about 260 babies overall, half with planned delayed clamping and half with clamping within 20 seconds. Those in the delayed clamping group who actually had their clamping delayed were mostly clamped soon after 2 minutes, and almost all by 3 minutes, with a small number of later outliers.

The initial publication of this trial showed that delayed clamping led to fewer blood transfusions and somewhat lower rates of late-onset sepsis and lung injury. Mortality was lower in the delayed clamping group, 7 deaths vs 15, with wide confidence intervals, of course (and mortality among the babies of 28 weeks and more in the immediate clamping group seeming to me to be on the high side, perhaps skewing the results).

The longer-term outcomes among the approximately 80% of babies with data at 2 years of age (either the Ages and Stages questionnaire or a Bayley assessment) were very similar. Some small differences were generally in favour of the delayed clamping group. It isn’t clear from this follow-up publication how many of the infants actually had delayed clamping. Although intention-to-treat analyses are, appropriately, the standard for evaluating the impact of an intervention in the real world, pilot studies often also have a “per-protocol” analysis to try and determine the impact of the intervention itself, isolated from other issues which may impede the performance of the intervention. It would be nice to know how many of the delayed clamping follow-up group actually had delayed clamping, and whether that was associated with better scores.

When you put together the small, possibly random, difference in mortality, with the small, possibly random, differences in some developmental scores, you end up with a very unhelpful conclusion “Deferred clamping and immediate neonatal care with cord intact may reduce the risk of death or adverse neurodevelopmental outcome at 2 years of age for children born very premature.” Here is the table with the details of the primary outcome:

I really don’t think that that sentence is of much use to anyone, even if it is strictly scientifically accurate. What would be better? “Deferred clamping and immediate neonatal care with cord intact showed a potential advantage in terms of survival, and not much difference in terms of developmental outcomes.” That sentence is also scientifically accurate and, I would suggest, more honest and useful.


Diagnosing seizures in the newborn: a small step forward

The use of continuous EEG has become much more frequent in the NICU in recent years. It has become clear that clinical recognition of seizures, both those with and without clinical convulsions (I will call all identified episodes electrographic seizures, those with clear motor phenomena convulsions, and those without non-convulsive seizures), is poor. In at-risk infants, with clinical observation alone we fail to diagnose a large proportion of electrographic seizures: as many as 50% of convulsions are not identified, and all non-convulsive seizures are, by definition, missed. In addition, at-risk infants are often treated with anticonvulsants for episodes which are not electrographic seizures.

Even when prolonged EEG monitoring is used, many seizures are missed, and many babies receive unnecessary anticonvulsants.  Even more disheartening, experts reading prolonged EEG often fail to agree about whether an episode is a seizure or not! In one study for example (Stevenson NJ, et al. Interobserver agreement for neonatal seizure detection using multichannel EEG. Ann Clin Transl Neurol. 2015;2(11):1002-11), when seizures were shorter than 30 seconds there was only about 45% agreement between neurologists expert in neonatal EEG interpretation, just under 70% for seizures between 30 and 60 seconds duration, and even over 60 seconds duration agreement was a little less than 90%. There was much more agreement for portions of the record without seizures.

It was that profile of findings that led the group who just published this study (Pavel AM, et al. A machine-learning algorithm for neonatal seizure recognition: a multicentre, randomised, controlled trial. The Lancet Child & Adolescent Health. 2020) to define their “gold standard” for an electrographic seizure as an event identified by 2 experts, with overlap of more than 30 seconds. In this randomized controlled study, 264 term babies at risk for seizures were monitored either with regular continuous EEG, using a 9-electrode montage for up to 100 hours, or with the same type of EEG hooked up to a PC running a seizure detection algorithm. 25% of the algorithm babies and 29% of the controls were finally classified as having electrographic seizures, which was lower than the pre-trial estimate of 40%, leading to an increase in sample size.

The primary outcome was the diagnostic accuracy of the clinical team in determining the presence of seizures, compared between those using the ANSeR system and those using plain vanilla EEG. Standard EEG traces were displayed continuously at the bedside, along with aEEG traces. With the new ANSeR algorithm, there was an audible alarm whenever the seizure probability was over 0.5, and a red line appeared on the EEG trace.

The sample size was calculated based on an increase of 25% in the sensitivity of the clinical team in diagnosing “true” electrographic seizures (i.e. those confirmed by the electrophysiologist experts).

This trial is a rather heroic undertaking; there are so many unknowns that designing such a trial must have been very difficult. There is a big difference between (a) diagnosing which babies have had at least one seizure and (b) diagnosing each seizure. We might, for example, already be relatively efficient, without the algorithm, at determining which baby has had a seizure, but very poor at counting how many seizures they have had. In fact, that is sort of what they found.

With or without the algorithm, the clinicians identified over 80% of the infants who truly had seizures. With and without the algorithm there were quite a few babies who were thought to have seizures who did not actually have them.

In contrast, when it comes to identifying when a baby is actually having a seizure, the algorithm was clearly better, whether the baby was having a few short seizures, or prolonged or repeated episodes.

Overall, the sensitivity for detection of individual seizures was 66% with the algorithm and 45% without, a difference of 21% (95% intervals 3.6-37%). Some babies in each group who never had a seizure were nevertheless treated with an anticonvulsant (10% vs 4%).

The authors also noted that the algorithm had a bigger impact at weekends than on weekdays: a 17% improvement in seizure detection on weekdays, and a 37% difference during the weekend.

This certainly looks more useful than previous seizure detection algorithms which are used in newborns but were initially designed for adults. According to a statement in the “research in context” box, the ANSeR system did not lead to more anticonvulsants being given, but I can’t find anything in the results about that. They note that there were some babies in each group who received seizure medication they may not have needed.

No significant differences were found between the groups regarding the secondary outcomes of seizure characteristics (total seizure burden, maximum hourly seizure burden, and median seizure duration) and percentage of neonates with seizures given at least one inappropriate antiseizure medication (37·5% [95% CI 25·0 to 56·3] vs 31·6% [21·1 to 47·4]; difference 5·9% [–14·0 to 26·3]).

I was hoping that this trial would show that the ANSeR system could efficiently discriminate between infants with and without seizures, allowing much better targeting of anticonvulsants. Unfortunately, that did not happen. It does, on the other hand, allow much better detection of individual episodes. Widespread use of the system would likely, therefore, lead to more seizures being appropriately detected. I presume there will be a publication about how the administration of anticonvulsants was affected, in much more detail than just the proportion of babies who received drugs they did not necessarily need. It would be interesting to see whether doses were escalated, and second anticonvulsants added, more appropriately in the algorithm group than in the controls. Whether the use of this system will lead to better long-term outcomes remains to be seen, but is apparently being investigated by follow-up of this cohort.


Dexamethasone ENT doses

Although we have a great group of ENT surgeons at my hospital, we do have one bone of contention; at least, there is just one bone left, since they have agreed that you cannot diagnose reflux by performing a laryngoscopy! See my post: you-cant-diagnose-reflux-with-a-laryngoscope/. The remaining issue is that when they see a patient at high risk for post-extubation laryngeal oedema and re-intubation they often request that we administer dexamethasone, usually in industrial doses.

I have numerous questions about this practice:

  1. do steroids have clinical benefit in neonatal patients at risk of post-extubation stridor and re-intubation?
  2. is dexamethasone preferable to other steroids?
  3. what dose should we use?

Although this is a frequent practice, there are very few good data. A Cochrane review from 2009 (Khemani RG, et al. Corticosteroids for the prevention and treatment of post-extubation stridor in neonates, children and adults. Cochrane Database Syst Rev) by the Cochrane Airways Group found 6 adult (n=2000), 3 paediatric (n=206), and 2 neonatal trials (n=104), with variable steroid doses. A more recent systematic review in adults found another 6 trials, and a recently published protocol for a paediatric RCT refers to 2 more recent small paediatric trials. I haven’t found any more recent neonatal trials, but the Cochrane review from the neonatal group included additional data from an earlier trial, for which the extubation data were only ever published as an abstract, and which included an additional 52 babies.

The steroid types and doses used in the adult studies range from 100 mg of hydrocortisone, through two 40 mg doses of methylprednisolone, and a single 5 mg dose of dexamethasone, up to a maximum of 5 mg of dexamethasone every 6 h for 4 doses. The more recent of the adult systematic reviews divided the trials into those among high-risk patients (determined by a cuff leak test) and those among unselected patients. They showed a reduction in re-intubation among high-risk patients and performed a meta-regression to examine the effect of steroid dose, which they counted in “hydrocortisone equivalents”. That analysis showed no impact of the dose of steroid on efficacy in reducing the need for re-intubation; the lower doses were just as effective as the highest dose.
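To give a sense of what “hydrocortisone equivalents” means, here is a minimal sketch using the commonly quoted anti-inflammatory potency equivalences (hydrocortisone 20 mg ≈ methylprednisolone 4 mg ≈ dexamethasone 0.75 mg); the exact conversion factors used in that meta-regression are an assumption on my part.

```python
# Commonly quoted approximate anti-inflammatory equivalences:
# hydrocortisone 20 mg ~ methylprednisolone 4 mg ~ dexamethasone 0.75 mg
POTENCY_VS_HYDROCORTISONE = {
    "hydrocortisone": 1.0,
    "methylprednisolone": 20 / 4,   # ~5x more potent
    "dexamethasone": 20 / 0.75,     # ~27x more potent
}

def hydrocortisone_equivalent(drug: str, total_dose_mg: float) -> float:
    """Convert a total steroid dose to approximate hydrocortisone equivalents."""
    return total_dose_mg * POTENCY_VS_HYDROCORTISONE[drug]

# The regimens used in the adult trials span a wide range of equivalents:
regimens = [
    ("hydrocortisone", 100),      # single 100 mg dose
    ("methylprednisolone", 80),   # two 40 mg doses
    ("dexamethasone", 5),         # single 5 mg dose
    ("dexamethasone", 20),        # 5 mg every 6 h, 4 doses
]
for drug, dose in regimens:
    equivalent = hydrocortisone_equivalent(drug, dose)
    print(f"{drug} {dose} mg ~ {equivalent:.0f} mg hydrocortisone-equivalent")
```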

The studies in children (after the neonatal period) again used different doses, all of dexamethasone: 0.5 mg/kg as a single dose (maximum of 8 mg), 0.5 mg/kg as a single dose (maximum of 10 mg), and 0.5 mg/kg given q6h for 3 or for 6 doses. All of those studies were in children without an identified increased risk. The only study in high-risk paediatric patients used 0.5 mg/kg q6h x 3 and doesn’t state a maximum dose; it was a small RCT (n=23) and showed no major benefit of dexamethasone.

The 3 neonatal trials used doses of 0.25 mg/kg once, 0.5 mg/kg once, and 0.25 mg/kg 3 times q8h. In terms of our discussion for today, only one of those trials is relevant; two of them were in larger preterm infants with no airway concerns and studied the routine use of dexamethasone for prevention of re-intubation after at least 48 to 72 hours of intubation. The only relevant study, from 1992, studied 50 preterm infants considered at high risk for airway compromise and showed a reduction in re-intubation, 0/27 with dexamethasone compared with 4/23 with control (RR=0.1, 95% CI 0.01-1.7); that study used the highest of the 3 cumulative doses.

What then is the scientific evidence-based answer to our 3 questions?

  1. Do steroids have clinical benefit in neonatal patients at risk of post-extubation stridor and re-intubation?

For neonatal patients specifically, and in those whom you would consider treating, i.e. with previous extubation failures and/or known airway problems, the answer has to be “not proven”. The tiny amount of directly relevant data precludes an evidence-based answer. In older children there are similarly very few data.

2. Is dexamethasone preferable to other steroids?

Neonatology: only dexamethasone has ever been studied. Paediatrics: only dexamethasone has ever been studied. In adults, dexamethasone and methylprednisolone have been studied in higher-risk patients; hydrocortisone has only been studied in standard-risk patients. The answer then is ¯\_(ツ)_/¯. Methylprednisolone seems to be as effective as dexamethasone in adults, but because hydrocortisone has not been studied in a high-risk group it is not clear whether it is as effective in such patients.

3. What dose should we use?

Again the evidence-based answer is that there is no evidence, but in adults lower doses are as effective as higher doses.

The doses used in neonatal studies, and suggested for ENT use in clinical practice in my experience, are enormously higher than those shown to be effective in adults. A 5 mg total dose for an adult is somewhere between 0.1 mg/kg and 0.05 mg/kg, to use a reasonable range of adult weights. The highest dose regime ever studied in adults gave 20 mg/day of dexamethasone, or a maximum of 0.5 mg/kg/day in a tiny 40 kg adult; the average per-kg dose of this extremely high dose regime would be about 0.25 mg/kg/day, divided into 4 doses, if your adults average 80 kg. In adults, the variety of doses studied, all much lower than neonatal doses, showed no correlation between dose given and efficacy. Indeed, among trials studying high-risk adults, the relative benefit was almost identical regardless of the dose used.
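The arithmetic behind that comparison is worth making explicit. Here is a quick sketch in which the adult weights are illustrative assumptions, not data from the trials, set against the per-kg regimens used in the neonatal studies.

```python
# Adult dexamethasone regimens expressed per kg (weights are illustrative assumptions)
adult_single_dose_mg = 5              # single 5 mg dexamethasone dose
for weight_kg in (50, 80, 100):
    print(f"5 mg in a {weight_kg} kg adult = {adult_single_dose_mg / weight_kg:.2f} mg/kg")

adult_highest_daily_mg = 20           # 5 mg q6h x 4, the highest adult regimen
print(f"Highest adult regimen in an 80 kg adult = {adult_highest_daily_mg / 80:.2f} mg/kg/day")

# Neonatal regimens from the trials, already expressed per kg per day
neonatal_regimens = {
    "0.25 mg/kg once": 0.25,
    "0.5 mg/kg once": 0.50,
    "0.25 mg/kg q8h x 3": 0.75,
}
for regimen, daily_dose in neonatal_regimens.items():
    print(f"Neonatal {regimen}: {daily_dose} mg/kg/day")
# The neonatal daily doses are several-fold higher per kg than even the highest
# adult regimen (~0.25 mg/kg/day), and far above the usual single adult dose
# of roughly 0.05-0.1 mg/kg.
```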

There is very little surveillance for adverse effects reported in the RCTs. Some of the adult trials have reported low rates of hyperglycaemia and of GI bleeding, but those, of course, used much lower doses.

The data from adult studies suggest a benefit of steroids for post-extubation laryngeal oedema; if I were to put money on it, I think it is likely that there is some benefit in reducing post-extubation laryngeal oedema in neonates, and probably in reducing some of its clinical impacts, but whether they are effective enough to prevent some re-intubations is impossible to say.

Many of the babies that I see who have serious upper airway problems, and for whom we consider dexamethasone for extubation, have already received steroids, sometimes more than one course and occasionally over a prolonged period. Adding another blast of extremely high doses of this medication, which is associated with significantly worse long-term outcomes, is often very worrying. Dose matters (Wilson-Costello D, et al. Impact of Postnatal Corticosteroid Use on Neurodevelopment at 18 to 22 Months’ Adjusted Age: Effects of Dose, Timing, and Risk of Bronchopulmonary Dysplasia in Extremely Low Birth Weight Infants. Pediatrics. 2009:peds.2008-1928); this study from the NICHD network showed the following, referring to postnatal dexamethasone use in very preterm babies:

Each 1 mg/kg dose was associated with a 2.0-point reduction on the Mental Developmental Index and a 40% risk increase for disabling cerebral palsy.

In summary, there is very little good relevant evidence, but to give my best-guess clinical implications of this review:

  1. steroids might be effective in reducing upper airway oedema after extubation in newborn infants at high risk of airway compromise, and could possibly reduce extubation failures,
  2. any steroid with glucocorticoid action might be equally as effective,
  3. there is no evidence to support the enormous doses that are often prescribed.

I would suggest that a dose similar to the DART starting dose of 0.15 mg/kg/day of dexamethasone is still well within the range of doses shown to be effective in adults, and can be stopped very quickly after extubation if there are few signs of airway compromise.

The less we give the better: reducing the dose, shortening the duration, and targeting the babies most likely to benefit are essential.

 
