Silverman 2005
Dynamically oriented psychotherapy has only infrequently been tested to demonstrate its
effectiveness when compared with other kinds of therapeutic interventions. Of course, this
does not mean that it lacks effectiveness; rather, there has been little commitment to
empirical testing, probably owing to insufficient support from the therapeutic–analytic
community. This is unfortunate. Given the demonstrated effectiveness of some forms of therapeutic intervention, there should be wide support for empirical testing in our community. Such support might well reverse the reliance on medication rather than psychotherapy as the treatment of choice for a host of problems. However, for the most part, those trained as psychoanalysts have not adopted this orientation. It has been dismissed as unnecessary or, even worse, as scientism. Beutler (2000a) suggested a reason for this lack of flexibility in
our views. He noted,
Certainly psychotherapy—and mental health more broadly—is a field in which the belief in
clinical observation and the creative expression of the theorists has always been valued more
highly than results from scientific method, nomothetic research and statistical analysis. Garb
(1998), for example, observed that when scientific and personal beliefs contradict, clinicians
are highly prone to accept the latter and discount the former. Thus, there has always been and
continues to be significant distrust of those who would question, let alone test, the validity of
the great theories and practices of the masters. . . . In reviewing this history, it is apparent that
among any group of like-minded practitioners, the standard of evidence for validity of a
clinical practice has often been whether it fits the theory, whether the advocates strongly
believe in the truth of their theories, and whether they appear to be sincere in advocating the
value of clinical experience in support of their beliefs. (p. 2)
Can we, as dynamically oriented clinicians, begin to rethink our positions with regard to
our view of what constitutes clinical practice? This is important for our field, not only in
terms of intellectual rigor but also because funding and policy decisions will be responsive to this broadened definition of therapeutic practice: the selection of future candidates for training in psychology; the development of the clinical curriculum; training in assessment, diagnosis, and prediction; accreditation decisions; continuing education sponsorship; provisions for research grants; National Institute of Mental Health awards and fellowships; funding from local and state governments; insurance reimbursements; and the like.
Prior to evidence-based practice, interventions in physical medicine, for example, were based on some experimental work but mainly on limited knowledge, belief systems, and anecdotal views of what worked. Thus, a belief system sustained the use of bloodletting to relieve fever, a practice that killed millions of patients and took centuries to overturn
(Barlow, 2004). Even more dramatic is an instance of experimental work that was ignored at the cost of many lives. The story dates to the 1600s, when a British naval officer, concerned about the high death rate on long voyages, conducted a controlled study using three different ships at sea. On one ship, sailors were given three teaspoons of lemon juice daily in addition to the regular restricted diet provided to sailors at sea.
Halfway through the voyage, 110 out of 278 sailors on their usual diet were dead; in
contrast, none of the sailors who received the lemon juice died. It took 264 years before
regular vitamin C was part of the diet of sailors on sea trips (Barlow, 2004).
Currently we rely on evidence-based medicine, and it has proven its usefulness. In
medicine, the guidelines now suggested are those that reflect “conscientious, explicit and
judicious use of current best evidence in making decisions about the care of individual
patients” (Sackett, Rosenberg, & Gray, 1996). Such an approach has provided procedures
for informed decisions about efficacious treatment for various disease entities. Such
information needs to be assimilated and then integrated with all of the relevant informa-
tion known, as well as an understanding of the potential consequences for the individual
patient. The system is far from perfect, and we now know much more about the power and influence of pharmaceutical companies in shaping clinical decisions. Nonetheless, if used properly, evidence-based medicine can be an effective and valid approach to guiding the course of decision making.
Evidence-based research has also been applied to various areas of the mental health
field (Drake et al., 2001). We now have sufficient scientific evidence to claim that
psychotherapy is safe and effective, with lasting results for a large number of patients,
across a wide range of problems (Luborsky, Singer, & Luborsky, 1975; Smith & Glass,
1977; Wampold et al., 1997). There is research addressing which key variables should be attended to in helping seriously maladapted patients, such as those with schizophrenia. Research has demonstrated the therapeutic effectiveness of psychotherapy, both at termination and at follow-up, when contrasted with medication or alternative treatments, in several areas: incontinence in women, insomnia, delay of institutionalization of patients with
Alzheimer’s disease, depression, and persistent pain, fatigue, and cognitive symptoms
found in veterans of the Gulf War. Chronic depression is most successfully treated with
medication and psychotherapy, and this combination is far more effective than either
drugs alone or a placebo. Just as the clinician's expertise matters in psychotherapy, experience and excellence matter in pharmacotherapy. Knowledge of and familiarity with assessment, dosage, and monitoring, both for effectiveness and for adverse effects, are essential, as is the recognition that drug termination, as compared with psychotherapeutic interventions, often leads to a recurrence of symptoms (Hollon, 1996; Lambert, 2004).
Research has attended to the kind of useful services that might be offered to children with
severe developmental disorders, and it has helped formulate and establish a variety of
important social policy decisions that have emerged on the basis of this orientation. Here
I have captured only a small aspect of this beneficial approach. I would think that few
analysts are aware of the various task forces that have existed to study particular issues
and their resulting documents. Such task force documents are sponsored and produced at
the national (e.g., American Psychological Association), division, and state level. For
example, Division 29 of the American Psychological Association (Norcross, 2001)
produced a summary report of empirically supported therapy relationships. A Hawaii task
force has evaluated mental health services for children and has provided summary reviews
of the best treatments available for various childhood disorders (Chorpita et al., 2002).
Thorough, careful, systematic investigation of outcome research in psychotherapy has
long since laid to rest Eysenck's questions about the effectiveness of psychotherapies
(Orlinsky, Ronnestad, & Willutski, 2004). Meta-analyses have also allowed for the establishment of an effect size, so that the magnitude of the difference between treated groups and control groups across studies can be evaluated. For
example, a number of decades ago, Smith and Glass (1977) reviewed hundreds of
psychotherapy outcome studies and reported that the average person receiving therapy
fared better than 75% of untreated controls, with a demonstrably large effect size.
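As an aside for readers less familiar with effect-size metrics, the percentile figure and the effect size are two expressions of the same result. Assuming roughly normal outcome distributions with a common standard deviation (an assumption of this illustration, not a detail reported by Smith and Glass), a standardized mean difference d places the average treated person at the Phi(d) percentile of the untreated distribution, where Phi is the standard normal cumulative distribution function:

\[
\text{percentile of the treated-group mean in the control distribution} = \Phi(d),
\qquad d = \frac{\mu_{\text{treated}} - \mu_{\text{control}}}{\sigma},
\]
\[
\Phi(d) = .75 \;\Longrightarrow\; d = \Phi^{-1}(.75) \approx 0.67,
\]

that is, an average advantage of roughly two thirds of a standard deviation of the control distribution.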
Eighty-three percent of those treated were better off than controls, especially for the
alleviation of anxiety and the improvement of self-esteem. Such findings are consistently
maintained across a wide range of disorders, excluding such biologically based distur-
bances as bipolar disorder and the schizophrenias, where psychoactive medications are
primary. Recently Lambert (2004), summarizing research on the effectiveness of psycho-
therapy, reported,
For the most part, psychological interventions surpass the effects of medication for psycho-
logical disorders and should be offered prior to medications (except with the most severely
disturbed patients) because they are less dangerous and less intrusive, or at the very least, in
addition to medications because they reduce the likelihood of relapse once medication is
withdrawn (Elkin, 1994; Thase, 1999).
In addition, researchers are addressing the process of therapy and reporting that the
therapist, and especially one who is experienced and flexible in responsiveness, is a central
element of change and that the therapeutic relationship accounts for most of the outcome
variance (Norcross, 2000; Orlinsky, Grawe, & Parks, 1994). This is such a pronounced
finding that nondynamic treatments have incorporated the concept of transference, al-
though not using this language, when developing their more cognitively oriented tech-
niques (Henry, 1998). Even in a purely medical physician–patient relationship, doctors who are involved with their patients, demonstrating rapport and responsiveness, increase the likelihood that their patients will take their medication and will respond
positively to it. Across different kinds of therapies, technique accounts for 12%–15% of
the variance (Lambert, 1992; Norcross, 2000). Follow-up data for patients engaged in the
therapeutic process for both mental and physical ailments demonstrate that fewer people suffer a relapse after psychotherapy than after treatment with medication alone (Hollon,
1996; Hollon, Thase, & Markowitz, 2002). Evidence now shows that psychotherapy is
cost effective. Not only does it improve general well-being, but it also yields better work performance, fewer lost workdays, and less use of medical services. Such information would be helpful to clinicians as well as to
the public at large and to reimbursement agencies (Leuzinger-Bohleber, Stuhr, Rüger, &
Beutel, 2003; Sandell et al., 2000; Yates, 1995).
Researchers have developed manuals for particular mental disorders and, with randomized
controlled trials, have tested them against control groups and have found their clinical
approach helpful in curtailing particular symptoms. They now have become advocates of
this approach, challenging clinicians to abandon therapeutic approaches that have not been
empirically tested. Some, according to Bohart (2003), have gone so far as to suggest that
the nonuse of this approach amounts to unethical practice.
Manualized specific intervention has serious limitations (see Westen, Novotny, &
Thompson-Brenner, 2004, for a thorough review), although its practitioners have labeled
it the “gold standard” for psychotherapy research. Participants are carefully screened so
that they demonstrate one variable, the particular symptom under study. Such constancy,
of course, is a useful research tool in that it permits the establishment of causal inferences
(the use of a particular manualized treatment can be tied to a specific outcome). Bohart (2000), a
critic of this approach, suggests it is too closely based on a medical model, analogous to
the quest for establishing a specific treatment (identified by the purity of the drug) that will
allow for specific outcome effects. A pure ingredient (an interpretation, for example)
cannot be assumed in psychotherapy, because it always exists in the context of the
therapeutic relationship. Wampold (2001) stated, “a cognitive–behavioral advocate is
interested in how cognitive schemas are altered and how this alteration is beneficial and
is relatively uninterested with incidental aspects, such as the therapeutic relationships and
their effects” (p. 16). He has carefully studied the trend toward manualization in psycho-
therapy research, and he contends that behavioral and cognitive–behavioral treatments
and their more limited, specific focus are more easily put into manual form than
psychodynamic treatments. The trials are short, typically 6 to 12 sessions, with no follow-up over time to confirm that remission is maintained. The approach has little to do with the
patients whom clinicians usually see in treatment, that is, those with multiple symptoms,
maladaptive behaviors, and serious character disorders—the messy world of real patients.
What is overlooked in a manualized treatment approach is that, even if the technique or type of therapy is a significant source of change, the approach eliminates from consideration the important fact of variability in the efficacy of individual therapists within the same treatment
condition. Some therapists are effective; others are less so (Crits-Christoph & Mintz,
1991). As the above suggests, experimentalists using this approach rely on the premise
that it is specific treatments rather than common factors such as the relationship or the
therapist that are the key to successful outcome. In addition, the use of a manual for each
specific type of disorder would require the mastery of hundreds of manuals, an untenable
prospect for therapists (Beutler, 2000b).
Although such randomized clinical trials with single-symptom populations may demonstrate efficacy for particular groups of patients (e.g., those with anxiety and panic disorders
[Barlow, 2002], agoraphobia, and smoking and drinking problems [Atkins & Di Guiseppi,
1998]), clinicians and researchers recognize the lack of ecological validity in such trials, and thus advocates of manualized treatment have offered a challenge to the rest of us. It has become
a wakeup call for clinicians. The president of APA, Ronald Levant, has set up a task force
to study this issue and to provide a more appropriate definition for evidence-based
practice. The CEO of APA, Norman Anderson, is both a supporter of and participant in
this task force. This group is composed of psychotherapy researchers and includes a
handful of clinicians as well. I am one of the participants in this group of 20 psychologists.
We have met in Washington, DC, for an extended weekend and will meet in a follow-up
session with the aim of considering the knowledge areas that inform clinical work and
developing recommendations that, with input from governance boards, committees, divisions, and the states, will ultimately be synthesized into a statement that APA can endorse and disseminate.
As part of the thinking of this task force, we take for granted that as psychologists we
employ a scientist–practitioner model and that we use empirical research and scholarship
to inform our clinical work. We have a sizable body of evidence based on different
research methods (e.g., effectiveness studies, process outcome studies, systematic case
studies, and single-participant experimental designs). We value our clinical expertise,
which includes our experience, judgment, relational skills, critical thinking and listening,
assessment, and decision making, as well as our flexibility, our ability to engage in a fluid
dyadic field that is often ambiguous, and our capacity to vary our interventions when
appropriate and to monitor progress. We consider, as well, the unique personality of the
patient, including his or her special psychological problems, social characteristics, and
cultural context, as well as the individual’s goals and values. Our current task force is
using these features of our understanding of clinical practice to refine our definition so that
it reflects our current knowledge and judgment about improving the health and well-being
of the public.
References
Atkins, D., & Di Guiseppi, C. (1998). Broadening the evidence base for evidence-based guidelines:
A research agenda based on the work of the U.S. Preventive Services Task Force. American
Journal of Preventive Medicine, 14, 335–344.
Barlow, D. (2002). Anxiety and its disorders: The nature and treatment of anxiety and panic (2nd
ed.). New York: Guilford Press.
Barlow, D. (2004). Psychological treatments. American Psychologist, 59, 869–878.
Beutler, L. E. (2000a, September 1). Empirically based decision making in clinical practice.
Prevention & Treatment, 3, Article 27.
Beutler, L. E. (2000b, September 1). Empirically based decisions: A comment. Prevention &
Treatment, 3, Article 31.
Bohart, A. C. (2000). Paradigm clash: Empirically supported treatments versus empirically sup-
ported psychotherapy practice. Psychotherapy Research, 10, 488–493.
Bohart, A. C. (2003, August). Evidence-based psychotherapy means evidence-informed, not evi-
dence-driven. Paper presented at the 111th Annual Convention of the American Psychological
Association, Toronto, Ontario, Canada.
Chorpita, B. F., Yim, L. M., Donkervoet, J. C., Arensdorf, A., Amundsen, M. J., McGee, C., et al.
(2002). Toward large-scale implementation of empirically supported treatments for children: A
review and observations by the Hawaii Empirical Basis to Services Task Force. Clinical
Psychology: Science and Practice, 9, 165–190.
Crits-Christoph, P., & Mintz, J. (1991). Implications of therapist effects for the design and analysis of comparative studies of psychotherapies. Journal of Consulting and Clinical Psychology, 59, 20–26.
Henry, W. P. (1998). Science, politics, and the politics of science: The use and misuse of empirically supported treatment research. Psychotherapy Research, 8, 126–140.
Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., & Ahn, H. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, “all must have prizes.” Psychological Bulletin, 122, 203–215.
Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically
supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials.
Psychological Bulletin, 130, 631–663.
Yates, B. T. (1995). Cost-effectiveness analysis, cost-benefit analysis, and beyond: Evolving models
for the scientist–manager–practitioner. Clinical Psychology: Science & Practice, 2, 385–398.