The placebo has had a long and controversial history in medicine and psychotherapy,
with regard to its definition, the effects that it produces, and the research designs that are
employed to understand it (Shapiro & Shapiro, 1997). The term itself, according to Walach
(2003), originated from the Latin psalm verse, “ Placebo Domino in regione vivorum” (“I
shall please the Lord in the land of the living”) and was sung in the Middle Ages as a
prayer at the deathbed. Because others were often paid to do the singing, the term placebo
became associated with a “nearly fraudulent replacement of the real” (Walach, p. 178).
When it was recognized in the 18th century that most substances given by doctors to
patients were not helpful, the term placebo became associated with any substance that the
physician knew (or believed) was not remedial for the disorder but was given only to
please or satisfy the patient, thus continuing the connotation of fraud or deceit (Shapiro &
Shapiro, 1997). Modern medicine, striving to leave behind a history of healers labeled as
charlatans (e.g., Anton Mesmer) whose cures were affected by hope, expectation, or
remoralization, sought to demonstrate that the effects of medical treatments were not
merely “placebo effects” but were due to the treatment’s active ingredients. That is, the
active ingredients affected the body through direct, not mind-mediated, physiochemical
processes. Nevertheless, modern medicine has accepted the notion that placebos produce
effects, to varying degrees, but contends that medical treatments provide benefits over
and above what a placebo would provide. Indeed, for decades, Beecher’s (1955) estimate
that the administration of a placebo leads to significant improvements in approximately
one third of cases for which responses are subjective was accepted as truth. However,
with the exception of a small cadre of researchers intent on identifying and understanding
placebo effects themselves, the predominant thrust of medical and psychotherapy treat-
ment research has been on developing and testing treatments that produce effects beyond
what the denigrated placebo can produce.
Our goal in this article is to present an analysis of the placebo concept, from which
to understand the logic of research designs using placebo treatments, and to estimate the
size of placebo effects in medicine and psychotherapy, taking into account nuances of the
placebo effect and methodological considerations. This discussion will necessarily raise
several thorny issues that demonstrate the ambiguities that saturate the understanding of the
placebo, particularly as the concept of the placebo is transported from medicine to
psychotherapy.
1 Grünbaum (1981) uses the term constituents, whereas psychotherapy researchers are partial to the term ingredient. We use the two terms interchangeably.
Clinical Trials
they are often called in medical studies); the natural course of the disorder is portrayed to
indicate the various effects of this design. The experiment is designed to test if the spe-
cific treatment effect (treatment vis-à-vis placebo) produces a test statistic sufficient to
reject the null hypothesis of no differences. The logic is that if the active treatment is
shown to be superior to the placebo, which was designed to control for all incidental
factors, then the treatment is designated as being effective through the hypothesized
physiochemical pathway. When only two groups are used (i.e., treatment and placebo) as
is typically the case in medicine, it is impossible to estimate the placebo effect when the
natural course of the disorder is unknown, as shown in Figure 1.
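Stated as simple mean contrasts, the logic of the three-arm design can be summarized as below; this is a sketch of the reasoning only, with outcome means and the sign convention (higher scores indicating better outcomes) assumed for illustration:

```latex
% T = treatment arm, P = placebo arm, N = no-treatment (natural course) arm
\text{total treatment effect} \;=\; \bar{Y}_T - \bar{Y}_N, \qquad
\text{placebo effect} \;=\; \bar{Y}_P - \bar{Y}_N, \qquad
\text{specific treatment effect} \;=\; \bar{Y}_T - \bar{Y}_P .
% With only a treatment arm and a placebo arm, \bar{Y}_N is unobserved, so the
% placebo effect cannot be separated from the natural course of the disorder.
```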
There are several assumptions underlying the double-blind randomized placebo con-
trol design that defend the design against threats to validity. Of course, there are the usual
considerations related to randomization, attrition, and so forth. Critical, however, to the
validity of the design is that patients’ expectations, hope, and attributions of meaning,
which are established through the intertwined concepts of blinding and the ability to
distinguish the treatment, be comparable across the treatment and control groups. The
treatment and the placebo are indistinguishable if and only if there are no differences in
the two treatments that are apparent to the patient or the administrator of the treatment. In
a hypothetical experiment, treatment and placebo are indistinguishable if a person receives
both and cannot reliably discriminate between the two (in say, a Fisher’s lady tasting tea
paradigm; see Salsburg, 2001 for an interesting account). Here, indistinguishability is
necessary, but not sufficient, for the experiment to be blinded (e.g., the two treatments
could be indistinguishable, but the provider of treatment could be knowledgeable of the
treatment being provided).
Threats to validity posed by distinguishability vary, depending on the nature of the
treatment and design of the experiment. In any experimental situation in which patients
are informed that they will be randomly assigned to conditions, the patients will attempt
to determine to which group they have been assigned. If the conditions are truly indis-
tinguishable, then no cues will be provided to the patients and all guesses are random,
thereby not affecting the results at the aggregate level. However, a host of factors pro-
vides information to patients that may influence their response to treatment. In pharmaco-
logical studies, side effects created by the active treatment provide cues that augment the
response to the treatment (i.e., subjects recognize by way of side effects that they have
been assigned to the treatment group and are receiving the “real” treatment; Greenberg &
Fisher, 1997; Kirsch & Sapirstein, 1998), so that although the appearance of the active
medication and the placebo are indistinguishable, they are not truly indistinguishable
because of their differential side effects. Physical procedures, such as chiropractics, involve
certain expectations of physical sensations; the absence of such sensations could threaten
validity because patients may use such cues to determine their assignment to groups and
the cues may attenuate their expectations for benefits. For example, Sanders, Reinert,
Tepe, and Maloney (1990) used a placebo condition that involved only light physical
contact, whereas the treatment group received adjustive manipulation at a specific lum-
bar region. Therefore, it is recommended that in future clinical trials, placebos that pro-
duce side effects similar to those of the active medication should be used to counteract the
potential bias due to “unblinding” (Greenberg & Fisher, 1997; Moncrieff, Wessely, &
Hardy, 2004).
The ability to distinguish treatment exacerbates threats to validity when the nature
of the treatment and placebo are readily apparent to the provider. If the provider is
aware that a placebo is being administered, his or her belief in the intervention
will likely produce cues consistent with the attenuated faith in the treatment, whether
blatant or subtle, which unavoidably will be communicated to the patient. For example,
administered or not, given the myriad of procedures that occur during childbirth; conse-
quently, a placebo effect was unlikely (and of little concern to the researchers).
Despite the limitations of randomized clinical trials for detecting placebo effects,
such designs provide useful information for estimating placebo effects and comparing
those estimates to treatment effects, providing in some cases estimates of specific treat-
ment effects that are useful for understanding the nature of treatments. In the following
section, we will present meta-analytic research on clinical trials in medicine and
psychotherapy.
In 1955, Beecher, based on scant evidence, concluded that the administration of a placebo
leads to significant improvement in approximately one third of cases for which responses
are subjective; he titled his report "The Powerful Placebo." Because the focus of medicine at that time was on detecting specific treatment effects, in some sense the size of the placebo effect was irrelevant, creating little interest in estimating placebo effects, particularly in medical clinical trials. However, in 2001, Hróbjartsson and Gøtzsche con-
ducted a meta-analysis of clinical trials published prior to 1999 that compared an active
treatment to a placebo treatment and to a no-treatment condition (i.e., all three arms
depicted in Figure 1 were included) to estimate the size of the placebo effect. They
included all trials of psychological as well as physical disorders and used the primary
authors’ designation that a placebo was used. Based on these very broad inclusionary
criteria, they made the following conclusion:
We found little evidence in general that placebos had powerful clinical effects. Although
placebos had no significant effects on objective or binary outcomes, they had possible small
benefits in studies with continuous subjective outcomes and for the treatment of pain. (p. 1594)
However, as we have seen, the clinical trial methodology is not well designed to
detect placebo effects and thus, as critics noted, there were many issues with Hróbjarts-
son and Gøtzsche’s study (e.g., Kirsch, 2002b; Kirsch & Scoboria, 2001; Moerman &
Jones, 2002; Papakostas & Daras, 2001). First, the Hróbjartsson and Gøtzsche meta-
analysis did not consider that placebos are not expected to work uniformly across dis-
eases or disorders (Kirsch & Scoboria, 2001; Shapiro & Shapiro, 1997). “Generally, the
presence of anxiety and pain, the involvement of the autonomic nervous system, and the
immunobiochemical processes are believed to respond favorably to placebo, whereas
hyperacute illnesses (i.e., heart attack), chronic degenerative diseases, or hereditary dis-
eases are expected to resist” (Papakostas & Daras, 2001, pp. 1620–1621). Clearly, there
are disorders for which the placebo effect should be large and there are also disorders for
which the placebo effect should be nonexistent or small. Aggregating without regard to
consideration of heterogeneity of disorders and their amenability to placebo action does
not allow for detection of a placebo effect should it exist. Thus, any analysis of the effects
of a placebo should differentiate between disorders that are amenable and those that are
not amenable to placebo treatment.
Another problematic aspect of the Hróbjartsson and Gøtzsche meta-analysis relates
to the lack of attention paid to the mechanism of the placebo. The placebo is a symbol of
the healing context and encompasses all aspects of the treatment that have significance
for the patient, including the patient’s and the physician’s beliefs and expectations. Spe-
cifically, the patient’s expectations regarding the efficacy of interventions are influenced
by multiple cues and are sensitive to the subtle aspects of the healing context, the prac-
titioner, and aspects of the clinical trial. As discussed above, if the patient is unaware that
a treatment (or the placebo) has been administered, or if the placebo is distinguishable
from the treatment—particularly if it is demonstrably inferior to the treatment—then it
would be expected that the placebo effect would be attenuated.
Reanalysis Method. We reanalyzed the studies used in the Hróbjartsson and Gøtzsche
study to account for the issues that are raised by estimating placebo effects in clinical
trials. Specifically: (a) conditions treated were classified based on their amenability to a
placebo treatment; (b) research designs were examined to determine whether the design
disadvantaged the placebo treatment or not; (c) the size of the placebo effect was com-
pared to the size of the treatment effect; and (d) placebo effects for subjective and objec-
tive measures were compared.
Effect sizes for each study were calculated in the following way. Except as noted
below, only the primary measure for each study was analyzed; Hróbjartsson and Gøtzsche
provided us with the designated measure and hence, the same measures were used in the
original meta-analysis as well as in the present meta-analysis. For each study, three effect
sizes were calculated: (a) treatment effect (treatment vs. no treatment), (b) placebo effect
(placebo vs. no treatment), and (c) specific treatment effect (treatment vs. placebo). Fol-
lowing the strategy of Hróbjartsson and Gøtzsche, we analyzed studies with continuous
outcomes separately from those with dichotomous outcomes.
For each study, two further categorizations were made. First, the degree to which it
was expected that a placebo would affect the disorder was determined. Five independent
raters (doctoral students in counseling psychology), blind to the results of the study, rated
the degree to which the disorder treated in each study could be affected by placebo
treatments by classifying each disorder as (a) definitely amenable to psychological fac-
tors (e.g., insomnia, chronic pain, depression), (b) possibly amenable to psychological
factors (e.g., acute pain, chemotherapy induced nausea, asthma), and (c) not amenable to
psychological factors (e.g., anemia, bacterial infection). In all cases, four of the five
raters agreed on the classification. It should be noted that amenability to placebo action
was operationalized based on the disorder treated, not on the objectivity of the outcome
measure. Despite the research that indicates the existence of demonstrable physiological
effects attributable to placebos (Leuchter et al., 2002; Mayberg et al., 2002; Olfson et al.,
2002), there persists the notion that placebos primarily affect patient self-reports of symp-
toms (typically labeled subjective reports—see Hróbjartsson and Gøtzsche), an assump-
tion that was tested in the present meta-analysis.
Second, whether the design of the study was adequate to estimate the placebo effect
or whether research operations attenuated the placebo effect, was evaluated. A study was
classified by the five raters as being adequate if all of the following conditions were met:
(a) the study was double-blinded, (b) the study participants were aware that they could
receive a placebo and were aware when it was administered (i.e., administration was not
surreptitious), and (c) the treatment and the placebo were indistinguishable (despite the
problem regarding detection via side effects, we considered pill placebos indistinguish-
able from active pills). The design components were rated separately and the design was
determined to be adequate only if each of the components was adequate; the final determination of adequacy was based on the agreement of four of the five raters. In summary, within the groups of continuous and dichotomous outcomes,
studies were classified into six categories by crossing amenability to psychological fac-
tors (three levels) by adequacy of research design (two levels). Psychotherapy studies,
which will be examined more fully in the next section, were classified as amenable to
placebo, but because such studies are not double-blinded, their study designs were clas-
sified as not adequate to estimate the placebo effect.
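A minimal sketch of the two-way classification described above (amenability to psychological factors crossed with adequacy of design); the Study fields and the example values are illustrative stand-ins, not the authors' actual coding instrument:

```python
from dataclasses import dataclass

AMENABILITY = ("definitely", "possibly", "not")  # amenability to psychological factors

@dataclass
class Study:
    amenability: str          # one of AMENABILITY (rating agreed on by the raters)
    double_blinded: bool
    placebo_disclosed: bool   # participants knew a placebo might be given; not surreptitious
    indistinguishable: bool   # treatment and placebo indistinguishable (pill placebos counted as such)

def design_adequate(s: Study) -> bool:
    """A design counts as adequate only if every component is adequate."""
    return s.double_blinded and s.placebo_disclosed and s.indistinguishable

def cell(s: Study) -> tuple[str, str]:
    """Assign a study to one of the six cells (3 amenability levels x 2 design levels)."""
    return s.amenability, "adequate" if design_adequate(s) else "not adequate"

# Example: a psychotherapy trial is amenable to placebo action but cannot be
# double-blinded, so its design is classified as not adequate for estimating
# the placebo effect.
print(cell(Study("definitely", double_blinded=False,
                 placebo_disclosed=True, indistinguishable=False)))
```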
For the studies with continuous outcomes, standard meta-analytic procedures (Hedges & Olkin, 1985) were used to make two calculations for each of the three effects (viz., treatment, placebo, and relative treatment/placebo effects): (a) an estimate of the effect size d_i for each comparison, and (b) an estimate of the variance of this estimate, s²(d_i). For each designated effect, the difference between the means of the two groups was calculated and then divided by the pooled standard deviation of the two groups, and then adjusted to yield an unbiased estimate of the population effect size. To aggregate the effect, we weighted each study's d_i by the inverse of its variance and combined these weighted effect sizes to yield the aggregated effect size estimate d+ for each group of studies (Hedges & Olkin, 1985). In addition, the standard error of this estimate, s(d+), was calculated and used to calculate the 95% confidence interval.
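The computation follows standard Hedges and Olkin (1985) fixed-effect procedures; the sketch below shows one common form of those formulas (bias-corrected d, large-sample variance, inverse-variance weights). It illustrates the method, is not the authors' code, and the exact variance expression they used may differ in detail:

```python
import numpy as np

def hedges_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Bias-corrected standardized mean difference and its large-sample variance."""
    sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    d *= 1 - 3 / (4 * (n_t + n_c) - 9)            # small-sample bias correction
    var = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, var

def aggregate(ds, variances):
    """Inverse-variance weighted aggregate d+ with a 95% confidence interval."""
    w = 1.0 / np.asarray(variances)
    d_plus = float(np.sum(w * np.asarray(ds)) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return d_plus, (d_plus - 1.96 * se, d_plus + 1.96 * se)
```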
A similar strategy was used for the dichotomous variables. First, an odds ratio o_i for the two groups being compared was calculated and then transformed to an approximately normal distribution by taking the natural logarithm (i.e., ln o_i). The variance of the transformed score was calculated and used to construct the 95% confidence intervals. The scores as well as the endpoints of the confidence intervals were then returned to odds ratios by applying the inverse of the natural logarithm (Fleiss, 1994).
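A corresponding sketch for the dichotomous outcomes, using the usual log odds ratio with its large-sample variance as described in Fleiss (1994); the cell counts and pooling shown here are again illustrative of the method rather than a reproduction of the authors' analysis:

```python
import numpy as np

def log_odds_ratio(a, b, c, d):
    """Log odds ratio for a 2x2 table (a, b = events and non-events in one arm;
    c, d = events and non-events in the other) and its large-sample variance."""
    return np.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pooled_odds_ratio(tables):
    """Inverse-variance pooled odds ratio o+ with a 95% CI, computed on the
    log scale and back-transformed by exponentiation."""
    logs, variances = zip(*(log_odds_ratio(*t) for t in tables))
    w = 1.0 / np.asarray(variances)
    pooled = float(np.sum(w * np.asarray(logs)) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return np.exp(pooled), (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
```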
For both continuous and dichotomous data, the following hypotheses were made:
1. The placebo effect would be detected (a) when the disorder was amenable to the
psychological aspects of the placebo, and (b) when the quality of the research
design was adequate.
2. The placebo effect and the treatment effect would not be statistically different
(a) when the disorder was amenable to the psychological aspects of the placebo,
and (b) when the quality of the research design was adequate.
3. For studies with adequate designs, as the amenability to placebo decreased, the
placebo effect detected would also decrease such that the active treatment would
become more effective relative to the placebo.
4. There would be no difference in effect sizes of placebos for subjective (patient
reported) and objective (data obtained by physiological tests or objective records,
but not subjective ratings of evaluators) measures, when both types of measures
were used within the same study.
Reanalysis Results. The results for the continuous measures and dichotomous mea-
sures are presented in Tables 1 and 2, respectively. Generally, the results were consistent
with the hypotheses and thus demonstrated the existence of the placebo effect.
The first hypothesis was that a placebo effect would be detected when the disorder
was amenable to placebo action and when the quality of the research design was ade-
quate. For such studies with continuous outcome measures, the placebo effect was sta-
tistically larger than zero (viz., d⫹ ⫽ .29, p ⬍ .05). For such studies with dichotomous
outcomes, the aggregated odds ratio was not statistically different from 1.00 (viz., o⫹ ⫽
.99, ns). However, it should be noted that for these studies with dichotomous outcomes,
the data did not support the efficacy of the active treatments either ~o⫹ ⫽ .89, ns)
The second hypothesis was that for this same set of studies (i.e., disorders amenable
to treatment and adequate design), the placebo effect and the treatment effect would not
be statistically different (i.e., there would be no specific treatment effect). For both types
of outcome measures, as hypothesized, specific treatment effects were not statistically
significant (viz., for continuous variables, d+ = −.05, ns, and for dichotomous variables, o+ = .89, ns), indicating that placebos produced effects comparable to the treatments.

Table 1
Effect Sizes for Studies With Continuous Outcome Variables

                                Treatment effect       Placebo effect         Specific treatment effect
                                (treatment vs.         (placebo vs.           (treatment vs. placebo)
Amenability          No. of     no treatment)          no treatment)
and design           studies a  d+    [95% CI]         d+    [95% CI]         d+     [95% CI]
Definitely
  Adequate               5      .24   [.00+, .47]      .29   [.06, .52]       −.05   [−.29, .18]
  Not adequate          29      .83   [.69, .97]       .23   [.08, .37]        .58   [.44, .71]
Possibly
  Adequate               6      .29   [.11, .47]       .17   [−.01, .36]       .19   [.01, .37]
  Not adequate          30      .61   [.51, .71]       .33   [.22, .43]        .27   [.17, .38]
Not amenable
  Adequate               7      .84   [.72, .96]      −.03   [−.14, .08]       .65   [.53, .77]
  Not adequate           2     1.12   [.64, 1.60]     −.11   [−.55, .33]      1.25   [.81, 1.70]

Note. CI = confidence interval. Effect sizes are statistically significant if the confidence interval does not include 0.
a The total number of studies with continuous variables (79) does not equal that of the original meta-analysis (82) because data could not be obtained from two studies and one study did not have an active treatment group.
The third hypothesis was that as the amenability to placebo treatment decreased,
the placebo effect detected would also decrease; in addition, the active treatment would
become more effective relative to the placebo (i.e., there would be an increasing specific
treatment effect). For the studies with continuous measures with adequate research designs, the size of the placebo effect was related to whether the disorder was amenable to psychological factors (viz., placebo effects for definitely, possibly, or not amenable were d+ = .29, .17, and −.03, respectively), as predicted. Moreover, when the disorder was not amenable to placebo action, the treatment was demonstrably superior to the placebo (i.e., the specific treatment effect was d+ = .65). For the studies with dichotomous outcomes with adequate designs, analysis of the trend of the placebo effect was not informative because there was no placebo effect (or treatment effect, for that matter) for any level of amenability to placebo action.

Table 2
Effect Sizes for Studies With Dichotomous Outcome Variables

                                Treatment effect       Placebo effect         Specific treatment effect
                                (treatment vs.         (placebo vs.           (treatment vs. placebo)
Amenability          No. of     no treatment)          no treatment)
and design           studies a  o+    [95% CI]         o+    [95% CI]         o+     [95% CI]
Definitely
  Adequate               6      .89   [.72, 1.09]      .99   [.81, 1.23]       .89   [.73, 1.10]
  Not adequate           6      .67   [.43, 1.04]      .97   [.63, 1.50]       .73   [.47, 1.11]
Possibly
  Adequate               4      .84   [.45, 1.57]      .96   [.52, 1.74]       .83   [.44, 1.58]
  Not adequate           7      .66   [.54, .82]       .95   [.78, 1.17]       .71   [.57, .88]
Not amenable
  Adequate               4      .85   [.57, 1.27]      .93   [.62, 1.39]       .91   [.61, 1.37]
  Not adequate           1      .08   [.04, .18]       .77   [.52, 1.14]       .11   [.05, .22]

Note. CI = confidence interval. Effect sizes (odds ratios) are statistically significant if the confidence interval does not include 1.
a The total number of studies with dichotomous variables (28) does not equal that of the original meta-analysis (32) because data could not be obtained from one study and three studies did not have active treatment groups.
The fourth hypothesis was that within studies that contained the subjective and objec-
tive measures, there would be no difference in the size of the placebo effect. In all, 19 stud-
ies with continuous data contained both types of measures (there were insufficient
dichotomous studies with both types of measures to conduct an analysis). The differences
in effect sizes were calculated by subtracting the effect size for the objective measure from
the effect size for the subjective measure. The appropriate variance for the difference was
then calculated, using the methods for stochastically dependent effect sizes (Gleser & Olkin,
1994, § 4.1, using a correlation between measures of .5; see Wampold et al., 1997). If
subjective measures are more sensitive to placebos than objective measures, the effect
size for this difference should be statistically greater than zero. In this study, the effect size
for this difference was small and not significant (d = .11).
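The variance of a within-study difference between two correlated effect sizes takes the general form shown below; this is the familiar expression for dependent estimates with an assumed correlation ρ = .5, offered as a sketch of the logic rather than the exact Gleser and Olkin (1994, §4.1) derivation:

```latex
\Delta d = d_{\mathrm{subj}} - d_{\mathrm{obj}}, \qquad
\operatorname{Var}(\Delta d) \;\approx\;
  \operatorname{Var}(d_{\mathrm{subj}}) + \operatorname{Var}(d_{\mathrm{obj}})
  \;-\; 2\rho\sqrt{\operatorname{Var}(d_{\mathrm{subj}})\,\operatorname{Var}(d_{\mathrm{obj}})},
\quad \rho = .5 .
```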
Conclusions. When studies used in the Hróbjartsson and Gøtzsche (2001) meta-
analysis were disaggregated based on the adequacy of the design and the degree to which
the disorder was amenable to psychological factors, evidence for a placebo effect was
indeed found. Specifically, for the adequately controlled studies, effects produced by
placebo treatments of disorders amenable to psychological factors approached the size of
effects produced by treatments.
Hróbjartsson and Gøtzsche concluded that placebos were ineffective except for small
but significant effects in studies with continuous subjective outcomes and for the treat-
ment of pain. Contrary to their findings, we found that relatively large effects were pro-
duced by adequately conducted studies of disorders amenable to placebo effects when
continuous outcomes were used. Effects for objective measures were found to be com-
parable to those for subjective measures. For the studies with dichotomous outcomes, the
ineffectiveness of the active treatments precluded the finding of placebo effects. Thus, it
can be seen that in disorders amenable to placebos (i.e., where it is plausible that they
could provide an effect via the expectations they create), placebo effects were compara-
ble to treatment effects, thereby establishing the existence of placebo effects.
The conclusions about the power of the placebo vis-à-vis no treatment are limited
because there were very few studies (n ⫽ 11) that employed adequate research designs,
involved disorders that theoretically would be amenable to treatment by placebos, and
contained both active treatment and no-treatment conditions. Moreover, many studies in this data set contained treatments that were not effective or were only marginally effective; because the placebo effect theoretically should not exceed the treatment effect, the size of the
placebo effect in these studies was thereby restricted. It is clear that the clinical trials
examined by Hróbjartsson and Gøtzsche were not designed to detect placebo effects, yet
when re-aggregated by considering amenability to placebo and adequacy of design, pla-
cebo effects are present and their size approaches the size of treatment effects. The result
that objective measures produce placebo effects comparable to placebo effects obtained
by subjective measures of patients’ self-report is an important finding that indicates that
the placebo effect is not a superficial phenomenon.
its own perspective and not from the other (Critelli & Neumann, 1984). When treatments
intended to be therapeutic are compared, there is persuasive evidence that differences
among treatments are small or nonexistent (Lambert, 2004; Wampold, 2001a; Wampold
et al., 1997).2 Thus, when conceptualized in this manner, the placebo appears to be as
effective as the treatment.

2 Of course, when two seemingly different treatments are equally effective, then the proponents of one approach can claim that the other approach implicitly uses their ingredients, as is the case of those who claim that EMDR works because it essentially is an exposure-based treatment (see Lohr, DeMaio, & McGlynn, 2003).
There is one issue regarding using another therapy as a placebo that is critical. The
use of this strategy to construct the placebo places theory at the level of the psychother-
apeutic approach. Accordingly, psychodynamic treatment would be a placebo for cognitive-
behavior therapy (CBT) provided the theory involves cognitive explanations for the
disorder. In medicine, the construction of placebos places the theory at the physiological/
anatomical level and does not place explanatory systems at the disorder level. For exam-
ple, diuretics for the treatment of hypertension would not be used as placebo for beta-
blockers because theory is at the level of physiochemical understanding of hypertension
and not at a particular explanation for hypertension (i.e., the goal is to control for non-
physiochemical causes, viz., psychological causes). Analogously, psychodynamic treat-
ment could be considered an inappropriate placebo for CBT because both are psychological
explanations and neither psychodynamic nor cognitive explanations are incidental, using
Grünbaum’s terminology, at the psychological explanatory level.
The second strategy for constructing a placebo is to remove one or more of the
characteristic ingredients without adding anything to the treatment, yielding the disman-
tling design (Borkovec, 1990), which in many ways meets the criteria of experimental
design better than the alternatives. The comparison treatment thus has the ingredients of
the treatment save one or a few critical ones, providing a test of whether the removed
ingredient or ingredients are necessary to produce the benefits provided by the full treat-
ment. Although the two treatments are distinguishable, they are more similar than any
other comparisons found in psychotherapy clinical trials.
A well-known dismantling study was conducted by Jacobson and his colleagues
(Jacobson et al., 1996) who compared cognitive therapy, cognitive therapy without iden-
tification and modification of core schema, and cognitive therapy without identification
and modification of schema and without identification and modification of automatic
thoughts. Essentially, they removed one or two critical cognitive components of cogni-
tive therapy, yielding at the end a treatment based primarily on behavioral activation,
which, at the level of cognitive therapy, is incidental to the theory. Jacobson et al. found
that all three groups produced comparable outcomes, suggesting that if the dismantled
groups are considered placebos in the sense that they do not have one or more of the
characteristic ingredients, then the placebos were as effective as the treatment. Ahn and
Wampold (2001) conducted a meta-analysis of dismantling studies and additive studies
(where a critical ingredient is added rather than removed) and found that adding or remov-
ing ingredients that are theoretically purported to be critical did not affect the outcomes
produced. Therefore, when the second strategy for constructing placebos for psychother-
apy is used, the placebo treatments are not demonstrably deficient compared to the
treatment.
The dismantling design as a means to construct a placebo according to Grünbaum’s
definition illustrates some issues with placebos in psychotherapy as well as with Grün-
baum’s conceptualization. Jacobson et al. (1996) found that removing the cognitive com-
ponents of cognitive therapy did not attenuate the efficacy of the treatment. But it could
be argued that the dismantled treatment is not a placebo, but rather is a behavioral treat-
ment. However, according to Grünbaum, from the standpoint of cognitive therapy, the
behavioral component is incidental and thus can be contained in the placebo. But on the
face of it, this is ludicrous because cognitive theorists would give credence to the behav-
ioral components (i.e., they would claim that the behavioral components are not inciden-
tal). But the slope is slippery, because these same theorists would recognize that the
therapeutic relationship could not be classified as incidental either. Perhaps a better way
is to use Waltz et al.’s (1993) classification of such elements as essential but not unique
rather than use the dichotomy of characteristic and incidental—the relationship is essen-
tial to conducting cognitive therapy but it is not unique (and indeed, is common). How-
ever, Waltz et al.’s system leads to its own issues when designing psychotherapy placebos,
as will be discussed below.
The third, and typical, strategy for constructing placebos involves using treatments
that control for the common factors and go by names such as alternative treatments,
supportive counseling, non-directive therapy, credible attention placebo, and common
factor control. These control treatments do not have a cogent theoretical rationale (i.e.,
they are not treatments intended to be therapeutic) nor do they have therapeutic actions
consistent with coherent change principles, but typically involve, to varying degrees, a
relationship with a trained provider, support, empathic responding, and purported expec-
tation that the treatment will be effective. It is these types of controls that produce ben-
efits that are considerably smaller than the effects produced by active psychotherapies.
The problems with the third strategy are many (see Basham, 1986; Borkovec & Nau,
1972; Brody, 1980; Horvath, 1988; Lambert & Ogles, 2004; O’Leary & Borkovec, 1978;
Shepherd, 1993; Wampold, 1997; see Wampold, 2001a and 2001b for a more thorough
discussion). The first problem is related to the order of the theory. Placebos are purported
to operate through the hope, expectation, remoralization, therapeutic relationship, and
other psychological processes. Yet, if the order of theory is at the level of “psychology”
then these factors are not incidental to the theory; both specific ingredients (e.g., chal-
lenging maladaptive thoughts) and common factors (e.g., hope, remoralization) involve
psychological processes. Furthermore, even if aspects of the placebo are considered to be
incidental, they are incidental in a way that is different from the way in which lactose is
incidental in a pill placebo—in psychotherapy, some of the incidental factors are neces-
sary for the delivery of the treatment and therefore become a perspicuous aspect of the
treatment (i.e., they are essential but not unique). The most obvious example is the ther-
apeutic relationship, which is necessary for the delivery of psychotherapy and the quality
of which is related to the effectiveness of the active ingredients. Nearly all treatment
manuals prominently discuss the relationship with the patient—this is very different from
the lactose in pill placebos that is not considered to have any effect on delivery of the
active ingredient. Therefore, using Waltz et al.’s (1993) designation of essential but not
unique elements raises the question: Should these elements be contained in the placebo or
not? If they are contained, how can one ensure that they are given comparably? A rela-
tionship in the context of a bona-fide therapy, where there is agreement on goals and tasks
(for example, in CBT) is very different from a relationship in the context of an “alterna-
tive” treatment where typically a rationale, goals, tasks, and other activities common to
therapies intended to be therapeutic do not exist. Indeed, a lack of equivalence in the
degree to which common factors are delivered or the quality with which they are deliv-
ered can be invoked as an alternative hypothesis to explain various outcomes in psycho-
therapy research (e.g., see Jacobson, 1991).
A second set of problems is created by the contention that placebos control for the com-
mon factors in psychotherapy. It should be noted that “common factors” is an ambiguous
term. The relationship is usually core to any discussion of common factors, but relationship
differs by treatment method—a relationship with Ellis within the context of rational emo-
tive therapy would have been very different from a relationship with Rogers in the context
of client-centered therapy. Furthermore, common factors include much more than a rela-
tionship with a caring professional, and some of these aspects cannot be controlled for by
psychotherapy placebos. As discussed by Frank and Frank (1991; Wampold, 2001b), com-
mon factors include therapeutic rituals (i.e., a set of procedures) consistent with a convinc-
ing rationale based in the healing context and delivered by a healer who believes in the
treatment. Thus, common factor placebos do not contain all factors that are expected to be
included in treatments minus the specific ingredients because having a set of specific ingre-
dients consistent with the explanation provided either explicitly or implicitly to the client
is a common factor (see Wampold, 2001b). In this regard, it should be recognized that these
alternative treatments, stripped of active ingredients, are not simply reductions that result
in an experiential or client-centered therapy; experiential or humanistic practitioners or theo-
rists would reject the notion that these alternative treatments, used to control common fac-
tors, are consistent with how they would administer bona-fide humanistic or experiential
therapies.
The final set of problems with the common factor placebo is centered on the fact that
such placebos are clearly distinguishable from the purportedly active treatment. This
ability to distinguish is troublesome because the therapists in studies using these controls
are aware that they are delivering a treatment not intended to be therapeutic. Allegiance has been shown to be related to outcomes (see Luborsky et al., 1999; Wampold, 2001b), and most placebo treatments are administered by advocates of the active treatment or by therapists trained by such advocates; these factors, together with the difficulty of faithfully and enthusiastically administering a treatment known to the provider to be bogus, most likely attenuate the effectiveness of the placebo treatment.
Essentially, it is difficult to adequately develop a common factor-type control in
psychotherapy research. Over the years, attempts have ranged from those that provide
convincing, but bogus, rationales (see for example, Borkovec and Costello, 1993, for an
excellent attempt) and equal doses of treatment to those without rationales, proscription
of therapist actions generally acknowledged to be therapeutic, and decreased doses of
treatment. For an example of the latter, consider the comparison of supportive psycho-
therapy, designed as a placebo control for common factors, to interpersonal psychother-
apy for the treatment of depressed HIV patients (Markowitz et al., 1995):
Supportive psychotherapy, defined as noninterpersonal psychotherapy and noncognitive-
behavioral therapy, resembles the client-centered therapy of Rogers,3 with added psychoedu-
cation about depression and HIV. Unlike interpersonal psychotherapists, supportive
psychotherapists offered patients no explicit explanatory mechanism for treatment effect and
did not focus treatment on specific themes. Although supportive psychotherapy may have
been hampered by the proscription of interpersonal and cognitive techniques, it was by no
means a lack of treatment, particularly as delivered by empathic, skillful, experienced, and
dedicated therapists. Sixteen 50-minute sessions of interpersonal therapy were scheduled within
a 17-week period. The supportive psychotherapy condition had between eight and 16 ses-
sions, determined by patient need, of 30–50 minutes' duration (p. 1505).

3 It is unclear that the supportive therapy used in this study would resemble the therapy of Rogers, as client-centered therapy involves more than minimal empathic response.
This brief discussion of the problems with common factor controls in psychotherapy
reveals that such controls have aspects that attenuate their potency vis-à-vis the active
treatment to which it is being compared. These issues include therapists who know they
are delivering a treatment not intended to be therapeutic, no rationales or less convincing
rationales, the lack of specific therapeutic actions consistent with a rationale, proscrip-
tions against various therapeutic actions, and smaller doses of treatment for the placebo
treatment (Baskin et al., 2003; Wampold, 2001b). Although pill placebos are not without
problems (e.g., side effects that provide cues to patients about assignment), they offer
better controls than do the psychotherapy placebos that attempt to control for common
factors. Consequently, it is not surprising to find that pill placebos produce outcomes that
approach those of active medications for depression, while psychotherapy placebos pro-
duce outcomes that fall short of active psychotherapies.
Recently, Baskin, Tierney, Minami, and Wampold (2003) attempted to estimate the
effectiveness of common factor-type placebos vis-à-vis generally accepted treatments by
discriminating between those placebos that were structurally equivalent to the active
treatment and those that were not. Structurally equivalent placebos had the same number
and length of sessions as the active treatment, used the same format (e.g., group, family,
individual) as the treatment, used therapists with training comparable to that of therapists
of the active treatment, involved treatments that were individualized to the patient, allowed
patients to discuss topics logical to the treatment, and did not constrain the conversation
to neutral topics. If the placebo did not contain all of these elements, it was classified as
not equivalent. Structurally equivalent placebos and placebos that were not equivalent produced effects of different sizes. Results indicated that comparisons between active treatments and placebos that were not structurally equivalent produced larger effects than comparisons between active treatments and structurally equivalent placebos; moreover, the latter comparison produced a negligible effect (d = .15), indicating that active treatments were not demonstrably superior to well-designed placebos.
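The all-or-nothing classification rule described above (Baskin et al., 2003) can be sketched as follows; the criterion names are paraphrases for illustration, not the authors' coding sheet:

```python
# Each placebo condition is coded True/False on every criterion summarized above.
CRITERIA = (
    "same_number_and_length_of_sessions",
    "same_format",                         # e.g., group, family, or individual
    "comparably_trained_therapists",
    "treatment_individualized_to_patient",
    "topics_logical_to_the_treatment",
    "conversation_not_restricted_to_neutral_topics",
)

def structurally_equivalent(placebo_coding: dict) -> bool:
    """Structurally equivalent only if the placebo meets every criterion;
    lacking any one element classifies it as not equivalent."""
    return all(placebo_coding.get(criterion, False) for criterion in CRITERIA)
```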
Designing a placebo treatment in psychotherapy is difficult. Nevertheless, using var-
ious means to construct placebos, it appears that placebo treatments are nearly as effec-
tive as active treatments provided the design of the placebo is adequate; although this
result may not apply across all disorders, as studies using placebo-type controls are not
uniformly distributed across disorders.
Conclusions
Two major conclusions follow from our analysis of placebo effects in medicine and
psychotherapy. The first conclusion is that the placebo effect is robust. With regard to
placebo effects in medicine, when disorders were amenable to placebo treatments and the
design of the study was sufficient to detect a placebo effect, the placebo effect was indeed
present and approached the size of treatment effects. Moreover, the placebo effect was as
strong when it was objectively measured as it was when it was subjectively measured. In
psychotherapy, it has been claimed that treatments produce effects that are roughly twice
as large as placebo effects (Lambert & Ogles, 2004; Wampold, 2001b). However, when
psychotherapy placebos are well designed, the placebo effect approaches the treatment
effect, a result consistent with pharmacological treatments of psychological disorders.
The second conclusion is that the notion of placebo in psychotherapy is logically
complex. Grünbaum’s (1981) definition of characteristic and incidental ingredients has
added rigor to conceptualizing a placebo; nevertheless, as discussed here, problematic
issues remain. There are aspects of psychotherapy that do not fit neatly into the charac-
teristic or incidental categories. Moreover, determining whether an ingredient is inciden-
tal requires specification of a theory and the order of the theory; that characteristic and
incidental aspects in psychotherapy belong to the same class when the order of the theory
is “psychological explanation” further complicates the problem. Apart from these issues,
in psychotherapy research additional (although related) problems exist because it is not
possible to (a) design a control that is indistinguishable from the active treatment, or (b)
blind the study from the perspective of the therapist. The search for specificity in psy-
chotherapy requires elaboration of Grünbaum’s logic (e.g., Lohr et al., 2003); a healthy
debate exists about whether to continue this search or to abandon a medical model of psycho-
therapy and accept an alternative explanation for the benefits of psychotherapy (see
Wampold, 2001b).
Despite the disagreements related to specificity in psychotherapy, the results of clin-
ical trials in psychotherapy and medicine indicate that the placebo is indeed powerful in
situations where it would be expected to operate. It is clear that the beneficial aspects
produced by medicine and psychotherapy involve factors that are not central to respec-
tive modal models or the received view of these endeavors.
References
Ader, R. (1997). The role of conditioning in pharmacotherapy. In A. Harrington (Ed.), The placebo
effect: An interdisciplinary exploration (pp. 138–165). Cambridge, MA: Harvard University
Press.
Adriaanse, A.H., Kollee, L.A.A., Muytjens, H.L., Nijhuis, J.G., de Haan, A.F.J., & Eskes, T.K.A.B.
(1995). Randomized study of vaginal chlorhexidine disinfections during labor to prevent ver-
tical transmission of group B streptococci. European Journal of Obstetrics, Gynecology, and
Reproductive Biology, 61, 135–141.
Ahn, H., & Wampold, B.E. (2001). A meta-analysis of component studies: Where is the evidence
for the specificity of psychotherapy? Journal of Counseling Psychology, 48, 251–257.
Basham, R.B. (1986). Scientific and practical advantages of comparative design in psychotherapy
outcome research. Journal of Consulting and Clinical Psychology, 54, 88–94.
Baskin, T.W., Tierney, S.C., Minami, T., & Wampold, B.E. (2003). Establishing specificity in psy-
chotherapy: A meta-analysis of structural equivalence of placebo controls. Journal of Consult-
ing and Clinical Psychology, 71, 973–979.
Beecher, H.K. (1955). The powerful placebo. Journal of the American Medical Association, 159,
1602–1606.
Benedetti, F., Maggi, G., Lopiano, L., Lanotte, M., Rainero, I., Vighetti, S., et al. (2003). Open
versus hidden medical treatments: The patient’s knowledge about therapy affects the therapy
outcome. Prevention & Treatment, 6, Article 1. Retrieved June 1, 2004, from http://
journals.apa.org/prevention/volume6/pre0060001a.html
Borkovec, T.D. (1990). Control groups and comparison groups in psychotherapy outcome research.
Washington, DC: U.S. Department of Health and Human Services.
Borkovec, T.D., & Costello, E. (1993). Efficacy of applied relaxation and cognitive-behavioral
therapy in the treatment of generalized anxiety disorder. Journal of Consulting and Clinical
Psychology, 61, 611–619.
Borkovec, T., & Nau, S.D. (1972). Credibility of analogue therapy rationales. Journal of Behavior
Therapy and Experimental Psychiatry, 3(4), 257–260.
Brody, N. (1980). Placebos and the philosophy of medicine: Clinical, conceptual, and ethical issues.
Chicago: The University of Chicago Press.
Brody, N. (1997). The doctor as therapeutic agent: A placebo effect research agenda. In A. Har-
rington (Ed.), The placebo effect: An interdisciplinary exploration (pp. 77–92). Cambridge,
MA: Harvard University Press.
Critelli, J.W., & Neumann, K.F. (1984). The placebo: Conceptual analysis of a construct in transi-
tion. American Psychologist, 39, 32–39.
Darnton, R. (1968). Mesmerism and the end of the Enlightenment in France. Cambridge, MA:
Harvard University Press.
Elkin, I., Shea, T., Watkins, J.T., Imber, S.D., Sotsky, S.M., Collins, J.F., et al. (1989). National
Institute of Mental Health treatment of depression collaborative research program: General
effectiveness of treatments. Archives of General Psychiatry, 46, 971–982.
Fleiss, J.L. (1994). Measures of effect size for categorical data. In H. Cooper & L.V. Hedges (Eds.),
The handbook of research synthesis (pp. 245–260). New York: Russell Sage Foundation.
Frank, J.D., & Frank, J.B. (1991). Persuasion and healing: A comparative study of psychotherapy
(3rd ed.). Baltimore: Johns Hopkins University Press.
Gehan, E., & Lemak, N.A. (1994). Statistics in medical research: Developments in clinical trials.
New York: Plenum.
Gleser, L.J., & Olkin, I. (1994). Stochastically dependent effect sizes. In H. Cooper & L.V. Hedges
(Eds.), Handbook of research synthesis (pp. 339–355). New York: Russell Sage Foundation.
Greenberg, R.P., & Fisher, S. (1997). Mood-mending medicines: Probing drug, psychotherapy, and
placebo solutions. In S. Fisher & R.P. Greenberg (Eds.), From placebo to panacea: Putting
psychiatric drugs to the test (pp. 115–172). New York: Wiley.
Grünbaum, A. (1981). The placebo concept. Behaviour Research and Therapy, 19, 157–167.
Hedges, L.V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego: Academic Press.
Hollon, S.D., DeRubeis, R.J., Evans, M.D., Wiemer, M.J., Garvey, M.J., Grove, W.M., et al. (1992).
Cognitive therapy and pharmacotherapy for depression: Singly and in combination. Archives
of General Psychiatry, 49, 774–781.
Horvath, P. (1988). Placebos and common factors in two decades of psychotherapy research. Psy-
chological Bulletin, 104, 214–225.
Hróbjartsson, A., & Gøtzsche, P.C. (2001). Is the placebo powerless? An analysis of clinical trials
comparing placebo with no treatment. The New England Journal of Medicine, 344(21),
1594–1602.
Jacobson, N.S. (1991). Behavioral versus insight-oriented marital therapy: Labels can be mislead-
ing. Journal of Consulting and Clinical Psychology, 59, 142–145.
Jacobson, N.S., Dobson, K.S., Truax, P.A., Addis, M.E., Koerner, K., Gollan, J.K., et al. (1996). A
component analysis of cognitive-behavioral treatment for depression. Journal of Consulting
and Clinical Psychology, 64, 295–304.
Kirsch, I. (1997). Specifying nonspecifics: Psychological mechanisms of placebo effects. In A.
Harrington (Ed.), The placebo effect: An interdisciplinary exploration (pp. 166–186). Cam-
bridge, MA: Harvard University Press.
Kirsch, I. (2002a). Are drug and placebo effects in depression additive? Biological Psychiatry, 47,
733–735.
Kirsch, I. (2002b). Yes, there is a placebo effect, but is there a powerful antidepressant drug effect?
Prevention & Treatment, 5, Article 22. Retrieved June 1, 2004, from http://journals.apa.org/
prevention/volume6/pre0060001a.html
Kirsch, I., Moore, T.J., Scoboria, A., & Nicholls, S.S. (2002). The emperor’s new drugs: An analysis
of antidepressant medication data submitted to the U.S. Food and Drug Administration. Pre-
vention & Treatment, 5, Article 23. Retrieved June 1, 2004, from http://journals.apa.org/
prevention/volume6/pre0060001a.html
Kirsch, I., & Sapirstein, G. (1998). Listening to Prozac but hearing placebo: A meta-analysis
of antidepressant medication. Prevention & Treatment, 1, Article 0002a. Retrieved June 1,
2004, from http://journals.apa.org/prevention/volume6/pre0060001a.html
Kirsch, I., & Scoboria, A. (2001). Apples, oranges, and placebos: Heterogeneity in a meta-analysis
of placebo effects. Advances in Mind-Body Medicine, 17(4), 307–309.
Kirsch, I., Scoboria, A., & Moore, T.J. (2002). Antidepressants and placebos: Secrets, revelations,
and unanswered questions. Prevention & Treatment, 5, Article 33. Retrieved June 1, 2004,
from http://journals.apa.org/prevention/volume6/pre0060001a.html
Lambert, M.J., & Ogles, B.M. (2004). The efficacy and effectiveness of psychotherapy. In M.J.
Lambert (Ed.), Handbook of psychotherapy and behavior change (5th ed., pp. 139–193). New
York: Wiley.
Leuchter, A.F., Cook, I.A., Witte, E.A., Morgan, M., & Abrams, M. (2002). Change in brain func-
tion of depressed subjects during treatment with placebo. American Journal of Psychiatry,
159, 122–129.
Lohr, J.M., DeMaio, C., & McGlynn, F.D. (2003). Specific and nonspecific treatment factors in the
experimental analysis of behavioral treatment efficacy. Behavior Modification, 27, 322–368.
Luborsky, L., Diguer, L., Seligman, D.A., Rosenthal, R., Krause, E.D., Johnson, S., et al. (1999).
The researcher’s own therapy allegiances: A “wild card” in comparisons of treatment efficacy.
Clinical Psychology: Science and Practice, 6(1), 95–106.
Markowitz, J.C., Klerman, G.L., Clougherty, K.F., Spielman, L.A., Jacobsberg, L.B., Fishman, B.,
et al. (1995). Individual psychotherapies for depressed HIV-positive patients. American Jour-
nal of Psychiatry, 152, 1504–1509.
Mayberg, H.S., Silva, J.A., Brannan, S.K., Tekell, J.L., Mahurin, R.K., McGinnis, S., et al. (2002).
The functional neuroanatomy of the placebo effect. American Journal of Psychiatry, 159,
728–737.
McNally, R.J. (1999). EMDR and Mesmerism: A comparative historical analysis. Journal of Anx-
iety Disorders, 13, 225–236.
Moerman, D.E. (2002). “The loaves and the fishes”: A comment on the emperor’s new drugs: An
analysis of antidepressant medication data submitted to the U.S. Food and Drug Administra-
tion. Prevention & Treatment, 5, Article 29. Retrieved June 1, 2004, from http://journals.apa.org/
prevention/volume6/pre0060001a.html
Moerman, D.E., & Jones, W.B. (2002). Deconstructing the placebo effect and finding the meaning
response. Annals of Internal Medicine, 136, 471–476.
Moncrieff, J., Wessely, S., & Hardy, R. (2004). Active placebos versus antidepressants for depres-
sion. The Cochrane Database of Systematic Reviews 2004, Issue 1, Art. No.: CD003012.pub2.
Morris, D.B. (1997). Placebo, pain, and belief: A biocultural model. In A. Harrington (Ed.), The
placebo effect: An interdisciplinary exploration (pp. 187–207). Cambridge, MA: Harvard Uni-
versity Press.
O’Leary, K.D., & Borkovec, T.D. (1978). Conceptual, methodological, and ethical problems of
placebo groups in psychotherapy research. American Psychologist, 33(9), 821–830.
Olfson, M., Marcus, S.C., Druss, B., Elinson, L., Tanielian, T., & Pincus, H.A. (2002). National
trends in the outpatient treatment of depression. Journal of the American Medical Association,
287, 203–209.
Papakostas, Y.G., & Daras, M.D. (2001). Placebos, placebo effect, and the response to the healing
situation: The evolution of a concept. Epilepsia, 42, 1614–1625.
Price, D.D., & Fields, H.L. (1997). The contribution of desire and expectation to placebo analgesia:
Implications for new research strategies. In A. Harrington (Ed.), The placebo effect: An inter-
disciplinary exploration (pp. 117–138). Cambridge, MA: Harvard University Press.
Rosenthal, D., & Frank, J.D. (1956). Psychotherapy and the placebo effect. Psychological Bulletin,
53, 294–302.
Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth
century. New York: Henry Holt.
Sanders, G.E., Reinert, O., Tepe, R., & Maloney, P. (1990). Chiropractic adjustive manipulation on
subjects with acute low back pain: Visual analog pain scores and plasma β-endorphin levels.
Journal of Manipulative and Physiological Therapeutics, 13, 391–395.
Shapiro, A.K., & Shapiro, E.S. (1997). The powerful placebo: From ancient priest to modern
medicine. Baltimore: Johns Hopkins University Press.
Shepherd, M. (1993). The placebo: From specificity to the non-specific and back. Psychological
Medicine, 23(3), 569–578.
Stevens, S.E., Hynan, M.T., & Allen, M. (2000). A meta-analysis of common factor and specific
treatment effects across domains of the phase model of psychotherapy. Clinical Psychology:
Science and Practice, 7, 273–290.
Stewart-Williams, S., & Podd, J. (2004). The placebo effect: Dissolving the expectancy versus
conditioning debate. Psychological Bulletin, 130, 324–340.
Walach, H. (2003). Placebo and placebo effects—A concise review. Focus on Alternative and
Complementary Therapies, 8(2), 178–187.
Walach, H., & Maidhof, C. (1999). Is the placebo effect dependent on time? A meta-analysis. In I.
Kirsch (Ed.), How expectancies shape experience (pp. 321–332). Washington, DC: American
Psychological Association.
Waltz, J., Addis, M.E., Koerner, K., & Jacobson, N.S. (1993). Testing the integrity of a psychother-
apy protocol: Assessment of adherence and competence. Journal of Consulting and Clinical
Psychology, 61, 620–630.
Wampold, B.E. (1997). Methodological problems in identifying efficacious psychotherapies. Psy-
chotherapy Research, 7, 21–43.
Wampold, B.E. (2001a). Contextualizing psychotherapy as a healing practice: Culture, history, and
methods. Applied and Preventive Psychology, 10, 69–86.
Wampold, B.E. (2001b). The great psychotherapy debate: Model, methods, and findings. Mahwah,
NJ: Erlbaum.
Wampold, B.E., Mondin, G.W., Moody, M., Stich, F., Benson, K., & Ahn, H. (1997). A meta-
analysis of outcome studies comparing bona fide psychotherapies: Empirically, “all must have
prizes.” Psychological Bulletin, 122, 203–215.