Worst Pills, Best Pills

An expert, independent second opinion on more than 1,800 prescription drugs, over-the-counter medications, and supplements

How Ineffectual Medical Treatments Can Misleadingly Produce “Good” Results

Worst Pills, Best Pills Newsletter article, August 2009

The current critical scrutiny of the U.S. health care system has underscored the growing misuse and overuse of certain types of care. As a result, an ever-expanding number of therapeutic tools are being examined to make sure they actually work.

Many drugs, devices, and procedures later found to be ineffectual or even dangerous seemed to work when they were first introduced, or appeared useful for selected populations. Consumers are therefore often puzzled when therapies initially hailed as breakthroughs are later pronounced useless or even hazardous. Why does this happen?

A number of effects are at play. Alone or in combination, they can confound the evidence gathered through trials and other assessments, making treatments seem better than they really are. Here, we describe some of them.

The placebo effect is defined as a “measurable, observable, or felt improvement in health not attributable to an actual treatment.” Placebos (from the Latin for “I shall please”) are aimed more at pleasing or appeasing patients than at treating them. Yet placebos play an important role in clinical research. When a drug, device, or procedure is evaluated, the experimental group, which receives the treatment, is often compared to a control group, which receives a placebo. In double-blind experiments, neither the experimenters nor the subjects know who is getting what. When an active drug is tested against an inert placebo, any significant difference in outcomes is attributed to the drug. Less commonly, three-way experiments are conducted in which similar subjects are divided among those receiving the treatment, those receiving a placebo, and those receiving nothing at all. These also allow comparisons between the latter two groups: any difference between them can then be attributed to a placebo effect.

Though based on “sham” treatments, the placebo effect is quite real. Because it works through different mechanisms, there is not a single placebo effect but many. The medical literature grows by nearly 100 studies on the placebo effect every year. These studies have found that placebos can assuage pain and alleviate depression, anxiety, Parkinson’s disease, inflammatory disorders, and cancer; placebos have even been found to shrink tumors. And the placebo effect may extend to persons other than those taking the placebo, such as family or friends.

Traditionally, the placebo effect has been explained as the result of a patient’s expectations and beliefs affecting the course of the disease. But recent data show that the effect may also arise from subconscious associations between recovery and the experience of being treated. Conditioning, in which the patient comes to associate a treatment (or even seeing a doctor, a pill, or a syringe!) with feeling better, can also produce the placebo effect. A placebo can activate physiological processes, including immune responses, the release of hormones, and the release of internally produced pain relievers. These responses are triggered by active processes in the brain, which have been documented in animals and studied through imaging.

A team of German and Swiss scientists documented a placebo effect produced by conditioning in rats. The experimental animals were injected with an immunosuppressive drug (used to prevent the rejection of transplanted organs) at the same time they were fed sweetened water. The rats came to associate the two “interventions” to the point where feeding them the sweet drink alone weakened their immune systems. The researchers concluded that these findings “suggest that a placebo effect does not require that a person hope for or believe in a positive outcome.” Even more remarkable was the finding that the placebo effect had clinical significance: the conditioned rodents survived longer with transplanted organs than their non-conditioned counterparts, even when both groups had received the same “active” drug. A similar experiment carried out with a small group of humans found that comparable behavioral conditioning could mimic the immunological effects of an immunosuppressive drug.

Curiously, the price of a placebo has been found to affect the potency of its effect. In a study published in 2008, 82 healthy paid volunteers were given what they were told was a new opioid and asked to rate its effect on painful electric shocks. Although all received the same placebo, some were told it cost $2.50 a pill while others were told it cost 10 cents a pill. Those who got the more “expensive” pill reported significantly greater pain reduction than those who got the “cheaper” pill. The researchers therefore suggest that clinicians harness “quality cues” in the treatment of patients, de-emphasizing factors, such as a treatment’s low price or generic status, that may devalue its efficacy in patients’ eyes. Similar differences between various placebos have been attributed to the size or the color of the pill.

Placebos have also been found to affect not only patients but their caregivers as well. A recent review of studies of stimulant medications given to children with attention deficit-hyperactivity disorder (ADHD) found that when caregivers believed the children were receiving medication, they tended to view the children more favorably and the treatment more positively, whether or not the children were actually receiving the medication. This may have benefits as well as drawbacks. To the extent that it leads to more positive interaction between patient and caregiver, the effect could be seen as favorable. But some caregivers may be misled into attributing the perceived behavioral changes to the medication and, as a result, increase its dosage.

The Hawthorne effect gets its name from a series of studies conducted among employees of a telephone-parts factory called the Hawthorne Works, located outside Chicago. The purpose of the studies, which began in 1924, was to learn whether workplace lighting affected workers’ productivity. The major finding was that output increased not only when the lighting was brightened but also when it was dimmed. The conclusion was that it was the change itself, rather than its nature or direction, that produced the effect. The finding was interpreted to mean that it was the workers’ awareness that they were being experimented on that altered their behavior. As a result, the tendency of people to change their behavior when they know they are being monitored or watched has been called the “Hawthorne effect.”

A recent re-analysis of the Hawthorne data has cast doubt on the original interpretation of the findings. It turns out that lighting was always modified on a Sunday, when the plant was closed, and that output was high on Monday and tapered off during the course of the week. Moreover, the fact that productivity decreased when the experimentation ceased has been re-interpreted as a seasonal effect: the experiment ended in the summer, when output fell anyway.

Still, the “Hawthorne effect” has become part of the social science lexicon, and it has been found to operate in an array of settings, including medical ones.

Patients tend to change their behavior when they know they are under study, and the more intense the scrutiny, the better they are likely to perform. This has been observed in patients following prescribed treatments: closer follow-up, for example, has been found to induce or nurture better adherence to medical orders.

The “healthy user” effect has been at play in some significant therapies that were hailed as beneficial before being discarded as ineffectual or misguided, and it has led to the misinterpretation of earlier findings.

A major wake-up call for women and their health care providers came in July 2002, when the results of the prospective, randomized Women’s Health Initiative (WHI) trials overturned findings from many earlier non-randomized studies. Those studies had shown that women who had taken estrogen and progestin had less heart disease than those who had not. But the WHI, a large research program that enrolled more than 161,000 women, found otherwise: women randomized to take the drugs had slightly more heart disease, as well as an increased risk of breast cancer. These findings were significant enough to prompt the early discontinuation of the trial. The findings of the earlier studies were attributed to a “healthy user” effect: women who choose to take any medication for years, regardless of what it is, are different from those who do not. The comparison between users and non-users of estrogen and progestin was therefore inadvertently comparing conscientious, health-savvy women with women who were less concerned or informed about their health. As an article in the British Medical Journal pointed out, “drug treatment may be a surrogate for overall healthy behavior,” and those who adhere to healthy lifestyles may “also tend to take care of themselves by greater adherence to prescribed treatments.”

The novelty effect refers to the impact of a new treatment or intervention simply because it is new. It is not the treatment itself that brings about a change but rather its novelty: the person receiving it may respond solely because it is new and different.

These phenomena can be useful. But they can also play tricks on patients, practitioners, and researchers alike, complicating the evaluation of therapies and making it difficult to disentangle causes and effects. As seen in the case of hormone treatment for menopause, drugs originally thought to be useful and disease-protective turned out to be extremely harmful.