Worst Pills, Best Pills

An expert, independent second opinion on more than 1,800 prescription drugs, over-the-counter medications, and supplements

Research as Public Relations: Antidepressants and Suicide in Youth

Worst Pills, Best Pills Newsletter article, October 2007

It seemed like déjà vu this September, as dramatic headlines again linked suicide and antidepressant use in youth. Three years ago, the weight of evidence seemed to point in the direction of increased suicide as a result of antidepressant use. In light of this, federal regulators at the Food and Drug Administration (FDA) required a “black box” label for all SSRI (selective serotonin reuptake inhibitor) antidepressants indicating that use in children could lead to an increased risk of suicidal behavior. Now comes a study published in the prestigious American Journal of Psychiatry (AJP; Volume 164, pp. 1356-1363) purporting to show, in effect, the opposite: that the FDA warnings caused the rate of pediatric SSRI prescriptions to plummet and that, as a result, young people are killing themselves for lack of treatment. If this were true, it would be a clear example of the unintended consequences of regulation.

But first, let’s turn the clock back to the summer of 2003. The FDA had just warned doctors of an increased risk of suicidal thoughts and behavior in children on paroxetine (Paxil). By October, the FDA publicly acknowledged that other antidepressants might have the same propensities and requested all unpublished data from SSRI makers, some of which had been hidden from the public for years. After discrediting the findings of Andrew Mosholder, its own drug-safety expert, the FDA commissioned a team from Columbia University to reassess the data. Almost a year later, the academic team came to the same conclusion as Mosholder: SSRI use in children and adolescents increased the risk of suicidal thoughts and behavior two-fold. After an emotional hearing in late 2004, the FDA issued a “black box” warning for all SSRIs used in children, its strongest possible labeling change.

The AJP study addressed the interesting question of the impact of this regulatory action on public health. According to the study, the rate of SSRI prescriptions to children had since declined, and Centers for Disease Control and Prevention (CDC) data showed a spike in youth suicides. Child psychologists lined up to inform readers that, overall, antidepressants were helpful for the majority of children, even if they hurt a few. But there is hardly a consensus on SSRI efficacy in children: only one SSRI antidepressant, fluoxetine (Prozac), is approved by the FDA for pediatric use. Experts including the lead author of the new study, Dr. Robert D. Gibbons, and the director of the National Institute of Mental Health (NIMH), Dr. Thomas Insel, blamed the FDA warnings for the subsequent drop in antidepressant prescribing to youth and the sudden rise in youth suicide. The Washington Post characterized their remarks as saying that the evidence “leaves few other plausible explanations.”

Glossed over in this spate of stories was the evidence itself. The study simply juxtaposed two data sets over time: a 22 percent drop in the SSRI prescription rate in children from 2003 to 2005 and a 14 percent increase in the suicide rate among children aged 5 to 19 from 2003 to 2004. The first rate went down, and the second went up, observed the authors. Therefore, as Gibbons said, the study was part of a “very cohesive story” suggesting that one had caused the other.

All studies are not created equal
Unfortunately, it’s not quite that simple. There is a distinction in public health between data that are linked to specific individuals and data that only describe the population as a whole. Relationships demonstrated at the aggregate level may not hold at the individual level, because studies of this kind cannot track individual patients. For example, there is no way to know that the group who would have received antidepressants (in the absence of the FDA warnings) was the same group that committed suicide. Conversely, the children who committed suicide could have been taking antidepressants at the time, even as fewer patients overall were being prescribed the drugs. The coincidental movement of population-level SSRI prescription rates and suicide rates is also complicated by, among other things, demographic and socioeconomic factors, societal trends over time and individual differences among the various groups who received antidepressants.
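To make the pitfall concrete, here is a toy illustration in Python, with entirely invented numbers, of why two aggregate series cannot establish what happened to individuals:

```python
# Hypothetical, invented numbers: a population-level (ecological) view in
# which SSRI prescriptions fall while suicides rise from one year to the next.
aggregate = {
    2003: {"ssri_rx_per_1000_children": 20.0, "suicides_per_100k": 2.0},
    2004: {"ssri_rx_per_1000_children": 19.0, "suicides_per_100k": 2.3},
}
for year, rates in aggregate.items():
    print(year, rates)

# Individual-level records (the kind of data the AJP study lacked). Suppose
# most of the children who died were actually taking an SSRI at the time:
suicides_2004 = [
    {"age": 16, "on_ssri": True},
    {"age": 14, "on_ssri": True},
    {"age": 17, "on_ssri": False},
]
on_drug = sum(d["on_ssri"] for d in suicides_2004)
print(f"{on_drug} of {len(suicides_2004)} decedents were on an SSRI")

# The aggregate table above is identical whether the decedents were treated
# or untreated, so it cannot tell us which scenario actually occurred.
```

Both scenarios (untreated children dying for lack of medication, or treated children dying in spite of it) produce exactly the same population-level picture.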

While absolute certainty is rarely achievable even in human trials that randomly assign patients to treatment or control groups and then follow those individuals over time, such trials are vastly superior to the aggregate population-rate approach. Considered the “gold standard” of clinical research, randomized controlled trials (RCTs) are compelling because all factors that might complicate data interpretation should be equally distributed between the treatment and control groups as a result of randomization. Not surprisingly, therefore, the FDA chose to use a meta-analysis (a statistical combining of individual studies) of 24 RCTs as the basis for its warnings of increased suicidality in young SSRI users. It is highly unlikely that the agency would have relied on aggregate population data to make so far-reaching a label change.
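A brief simulation, again with made-up numbers, shows why randomization is so persuasive: a potential confounder such as baseline depression severity ends up almost evenly split between the two arms, so it cannot explain any difference in outcomes.

```python
import random

random.seed(1)

# Invented baseline depression-severity scores for 1,000 trial participants.
severity = [random.gauss(50, 10) for _ in range(1000)]

# Randomly assign each participant to the treatment or control arm.
treatment, control = [], []
for score in severity:
    (treatment if random.random() < 0.5 else control).append(score)

def mean(values):
    return sum(values) / len(values)

print(f"treatment arm: n={len(treatment)}, mean severity={mean(treatment):.1f}")
print(f"control arm:   n={len(control)}, mean severity={mean(control):.1f}")
# The two means come out nearly identical: randomization balances the
# confounder automatically, something no comparison of two aggregate
# time series can guarantee.
```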

Further, suicide has myriad interrelated causes, and rates can vary substantially over the short term. Isolating a single factor to explain a 14 percent population-level increase in suicide is hazardous unless that factor is overwhelmingly influential or only one variable changes at a time. The latter is the case in RCTs, in which the only variable that differs between the treatment and control groups at the start of the study is the treatment itself. Analyses of dozens of published and unpublished RCTs led the FDA to conclude that, of the SSRIs, only fluoxetine performs better than sugar pills for children under 18, although doctors can legally prescribe the other SSRIs to children. Population-level studies have little to contribute, particularly when they are inconsistent with well-designed RCTs. In short, the FDA’s decisions rested upon much stronger evidence about the causal relationship between SSRI use and suicide than the AJP study offers.
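For readers curious about the mechanics, the sketch below pools a handful of invented trial results using the standard inverse-variance method, a simplified stand-in for the kind of meta-analysis of 24 RCTs the FDA actually performed:

```python
import math

# Invented (risk ratio, standard error of log risk ratio) pairs for four
# hypothetical trials; the FDA's real analysis combined 24 trials.
trials = [(1.8, 0.40), (2.4, 0.55), (1.5, 0.35), (2.1, 0.50)]

# Weight each trial by the precision of its estimate (1 / variance).
weights = [1 / se ** 2 for _, se in trials]
pooled_log_rr = sum(
    w * math.log(rr) for (rr, _), w in zip(trials, weights)
) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log_rr)
low = math.exp(pooled_log_rr - 1.96 * pooled_se)
high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled risk ratio: {rr:.2f} (95% CI {low:.2f}-{high:.2f})")
# A pooled risk ratio near 2, with a confidence interval excluding 1, would
# indicate roughly a doubling of risk, consistent in spirit with the
# two-fold increase in suicidality the FDA's analysis found.
```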

Given these crippling methodological problems, one at least expects the numbers themselves to have been accurately presented. Sadly, even this was not the case. In fact, there was a decrease of only a few percentage points in SSRI prescription rates between 2003 and 2004, with the majority of the decrease occurring after that. Unfortunately, data on suicide were only available from the CDC through 2004. Thus the drop in prescription rates happened mostly after the demonstrated rise in suicides. 
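A simple timing check, using an assumed split of the decline for illustration, makes the sequencing problem plain:

```python
# Assumed split of the 2003-2005 prescription decline (illustrative only):
# a small dip in the first year, the bulk of the drop in the second.
rx_change = {"2003-2004": -0.03, "2004-2005": -0.19}
suicide_change = {"2003-2004": +0.14}  # CDC suicide data ended in 2004

# The only period in which both series are observed:
for period in rx_change.keys() & suicide_change.keys():
    print(period, "prescriptions:", rx_change[period],
          "suicides:", suicide_change[period])

# Only 2003-2004 overlaps, and the prescription decline there is small;
# the large decline arrives after the last year with suicide data, so it
# cannot have caused the rise the study blames it for.
```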

Even the age groups in the two data sources didn’t coincide. The suicide rates were observed in children aged 5 to 19, but the widely quoted 22 percent drop in the SSRI prescription rate applied only to children 0 to 10 years of age. The drop in SSRI prescription rates from 2003 to 2005 was actually around 15 percent for children aged 10 to 14 and even less for teenagers aged 15 to 19. The study does not report actual prescription numbers, but previous research shows significantly lower SSRI use in children younger than 10 compared with 10- to 20-year-olds. This suggests that the actual drop in the number of SSRI prescriptions to youth overall was considerably smaller than 22 percent.
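A back-of-the-envelope weighted average, using assumed prescription volumes, shows why the headline number overstates the overall decline:

```python
# Assumed 2003 prescription volumes by age group (illustrative only; the
# study did not report actual counts). Younger children receive far fewer
# SSRI prescriptions than teenagers.
rx_2003 = {"0-9": 100_000, "10-14": 900_000, "15-19": 2_000_000}

# Declines from the article: 22% for the youngest group, about 15% for
# ages 10-14; the teenage figure ("even less") is assumed here to be 10%.
drops = {"0-9": 0.22, "10-14": 0.15, "15-19": 0.10}

total_before = sum(rx_2003.values())
total_after = sum(count * (1 - drops[group]) for group, count in rx_2003.items())
overall_drop = 1 - total_after / total_before
print(f"overall drop: {overall_drop:.1%}")  # about 12%, well under 22%
```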

In a highly unusual move that hinted at strong criticism, the New York Times ran a counterpoint to its original story on the study two weeks after the paper was published. Titled “Experts question study on youth suicide rates,” the article pointed out some of the study’s methodological flaws that had eluded the Times in its initial coverage and emphasized the complexity of the debate.

The public health paradigm demands that decisions to prevent disease and promote health be based on the best evidence available. Clearly, the latest “evidence” does not hold a candle to the meta-analysis of 24 randomized controlled trials that the FDA used to issue its original warnings. So why did this study gain so much attention?

Bad Faith
Prescribing of antidepressants to youths had been increasing steadily since the 1980s and accelerated further in the late 1990s, particularly for children. After the FDA warnings, the drop in SSRI prescribing for children represented an alarming fall in pharmaceutical industry revenue and a deviation from the expected meteoric rise in such prescribing. Clearly, it would be in the industry’s interest to counter any perception that SSRIs were dangerous. Certainly, the industry had spared no effort in the past: heavily choreographed testimony before the FDA, to say nothing of the suppression of studies showing the drugs’ dangers and ineffectiveness.

For two of the three industry-funded authors of the AJP study, the study’s clear limitations seemed to offer few constraints. Dr. J. John Mann said, “The most plausible explanation is a cause and effect relationship: prescription rates change, therefore suicide rates change.” Oddly, in the same article, Gibbons admitted that the data “did not support a causal link” but continued: “this study was suggestive, that’s what we’re saying.” The six-page study itself barely spent more than a page on the data in question and was instead devoted largely to restating other population-based studies.

Viewed in this context, the paper and its subsequent publicity appear to be little more than a public relations ploy. The editors of the AJP should not have allowed such gross misrepresentations to pass into print unscathed, and journalists who cited this study as if it deserved equal credence to the RCTs are just as guilty.

Can’t take the heat
Federal regulators were compelled to respond following the study’s widespread media coverage. Their responses afford little confidence that prominent regulators are any better able to identify overblown findings than their counterparts at the AJP or in the popular press. Dr. Insel of the NIMH said, “We may have inadvertently created a problem by putting a ‘black box’ warning on medications that were useful. If the drugs were doing more harm than good, then the reduction in prescription rates should mean the risk of suicide should go way down, and it hasn’t gone down at all – it has gone up.” Dr. Thomas Laughren, director of the FDA’s Division of Psychiatry Products, thought that more data over time “linking declines in prescriptions to suicide risk” would be cause enough for the FDA to revisit its black-box labeling decision. While revision of regulatory warnings is sometimes necessary, to do so on the basis of studies like this one would simply be capitulation to a pressure campaign.

While there is always room for debate over the effects and effectiveness of SSRIs in children, that debate has long since advanced past population-level studies like this one – let alone population-level studies with such egregious flaws, loaded with data misrepresentations. This study may be easy for the press to understand, and its findings may be comforting for profit-minded drug companies and the physicians who have been prescribing these products, but that will be little consolation for the children who may receive these drugs as a result of the false reassurances doled out by this second-rate study.