Some of us may have heard of the problems with the data concerning SSRIs (a commonly used class of antidepressants) and, in particular, young people. It appears that drug companies hid and distorted data from controlled trials in order to make the data appear positive, and to hide the fact that there appeared to be an increase in suicides, at least during the initial phase of treatment, in younger patients.
One study in particular stands out: Study 329. It stands out because it was a very high profile study, about which many questions have been asked, but also because GlaxoSmithKline eventually agreed to allow complete access to all of the original patient data.
That access has resulted in a new publication in the BMJ, which shows that, in contrast to the original publication, if the data are analyzed as the original protocol specified, and if all of the adverse outcome data are appropriately collated, there was no benefit from the agent in question, paroxetine. Instead, the results show an increase in serious adverse events, including suicide attempts.
The original publication reported positive outcomes, but it seems the authors chose outcomes that appeared positive yet were never specified in any version of the protocol. There was also very incomplete summation of adverse outcomes, leading to erroneous claims of safety.
How could all of this happen? Well, the data were all controlled by the company (a predecessor of GSK), which organized the analyses and employed a ghost writer to draft the first version of the paper. The physicians who were eventually listed as the authors edited the introduction and discussion (making what were mostly “cosmetic” changes). There is a fascinating case study (in a journal I wasn’t aware of, called “Accountability in Research”) of the way this article made it to submission and publication, which contains the following “gems”:
The first draft contained a significant distortion of outcome whereby the list of primary outcomes was expanded from two to eight, four of which separated paroxetine from placebo. This change gave plausibility to the claim that “paroxetine is effective.”
(‘Laden’ is the surname of the ghost writer)
Our analysis of the progression of drafts shows that there are few substantial differences between the final published article and the first draft prepared by Laden (Jureidini, 2007). Large portions of the introduction and discussion were re-written, but these changes add little to the substance of the article, and most other changes are little more than copy editing. Throughout the many drafts of this article, the conclusion persists that paroxetine is safe and effective for adolescent depression despite the fact that it failed on both primary and most secondary outcome measures.
That review of the initial publication process also noted that the serious adverse events were downplayed. The new re-analysis of all the original data shows that they weren’t even counted correctly: there were several instances of breaking the blinding of the medication, and the coding of the SAEs varied from one case to another, even when similar events occurred. The evaluation of whether an SAE was related to the medication was sometimes made after the blind had been broken. In one case, for example, an SAE was ascribed to the study intervention after it was known that the patient was in the placebo arm.
Some of the reported SAEs were never even transcribed into the study data sheets.
The authors of this re-analysis also note the difficulties of actually performing it: they were allowed only a single remote desktop to access the files, were not permitted to print anything, and as a result faced huge obstacles in getting it done. After many thousands of hours of work, I think we can all be grateful that this re-analysis was as scrupulously done as it was. It could be considered a case study in how to re-analyze publicly available datasets, and also an object lesson in why such datasets should be fully available to the appropriate persons, rather than restricted to the extremely limiting access that these authors had.
If anything points out the desperate need for the AllTrials campaign to succeed, it is this publication. How many other articles have been distorted by their sponsors to make a drug seem effective and safe when in fact it is neither?
In a commentary in the BMJ accompanying the publication, Peter Doshi is scathing about the response of the journal where the article was originally published, the professional association responsible for the journal, and the university where the first author (that is, the person listed as first author, not the person who actually wrote the article!) works, where he is still head of psychiatry.
The article is still in the literature, still with the same conclusions of efficacy and safety. GSK, you might remember, paid $3 billion to settle charges of promoting paroxetine beyond its approved indications, including promotion for adolescents, partly based on this article.
The original article is an astounding betrayal. The authors betrayed the individuals and families who consented, and the public in general, including all the subsequent adolescents who have been treated with paroxetine. The big pharmaceutical companies are hugely profitable, but they also make huge investments to bring new molecules to market, which subjects them to enormous pressure to make the results of clinical research seem as positive as possible and to minimize the negatives. When the ‘positives’ and ‘negatives’ are the health of patients, the implications are profound, far more so than for a new children’s toy or a new scent for a perfume range; but the same corporate thinking motivates their actions.
We cannot escape corporate funding for trials of new therapeutic agents, so we must put in place mechanisms to ensure that those trials are well designed, adequately analyzed, and appropriately reported.