
From The Social Psychology of Good and Evil, Second Edition. Edited by Arthur G. Miller.

Copyright © 2016 The Guilford Press. All rights reserved.

Chapter 10

Why Are the Milgram Obedience Experiments Still So Extraordinarily Famous—and Controversial?

Arthur G. Miller

Social psychology textbook and journal authors, in introducing the subject of obedience, invariably describe the Milgram experiments as famous and controversial (emphasis added in all):

“Understanding when and why people obeyed authority, even if doing so harmed others, became Milgram’s main concern, and his findings became perhaps the best known and controversial in social psychology” (Smith, Mackie, & Claypool, 2015, p. 371).

“Milgram did not realize it at the time . . . but they were about to make history in one of the most famous psychology experiments ever conducted” (Kassin, Fein, & Markus, 2014, p. 279).

“For half a century, the findings from Stanley Milgram’s obedience studies have been among the most intriguing and widely discussed data ever to come out of a psychology lab” (Burger, 2014, p. 489).

“Just over 50 years ago, in July, 1961, Stanley Milgram embarked on what were to become the most famous studies in social psychology, if not the discipline of psychology as a whole” (Reicher, Haslam, & Smith, 2012, p. 315).

Historically, the Milgram experiments are often tied to another group of “classic studies” in social psychology, by Sherif, Asch, Darley and Latané, Rosenhan, and Zimbardo (Smith & Haslam, 2012; Tavris, 2014). All of them, conducted within approximately a decade of one another, were conceptualized in strongly situationist terms, a conceptual orientation that Ross, Lepper, and Ward (2010) have described as “a major feature of our field, and with that perspective, the implicit suggestion that stable personality traits or dispositions matter less than lay observers assume, or at least that they can be outweighed by particular features or manipulations of the immediate situation at hand” (p. 5). As will be seen, however, this conceptualization has not been to everyone’s liking, both within and outside of social psychology.

The enduring fame and controversiality of the Milgram obedience experiments are intriguing phenomena. This longevity is particularly remarkable given their long-acknowledged shortcomings—that is, as being unethical, as subject to demand characteristics preventing the perception of real harm occurring, as lacking a convincing theoretical explanation, and as producing little follow-up research. Burger’s study (2009) was the first significant behavioral experiment on harmful obedience in over 20 years. One might have reasonably expected a gradual decline of preoccupation with the obedience experiments, their being regarded more for historical interest and as lacking contemporary relevance. This did not occur. The prestige of the obedience experiments understandably has long been important for the image of social psychology itself (e.g., Benjamin & Simpson, 2009; Ross, Lepper, & Ward, 2010), but the “fame” (or “infamy”) label can present difficulties in studying them objectively. Griggs and Whitehead (2015), in a content analysis of the most recent social psychology texts, have argued that the strongest criticisms of the Milgram experiments (e.g., Nicholson, 2011; Perry, 2013, as well as earlier criticisms) have had difficulty in receiving a fair hearing, if any at all. This bias reflects the power of what they term the standard “Milgram-friendly” view of the experiments as well as textbook authors’ reluctance to disrupt the classic stature and widely viewed importance of the experiments by including criticisms “that would muddle the story line . . . , resulting in student confusion and disengagement” (p. 319). The authors make a similar argument with respect to Zimbardo’s prison study (Griggs & Whitehead, 2014). Thus the Milgram experiments unquestionably are widely regarded as famous. Whether they should be so regarded is a different issue. A goal of this chapter is to put readers in an informed position to answer, or to seek information and reconsider, this question for themselves.

Three significant developments occurring within the past decade are discussed. First, several new experimental studies on harmful obedience have been published. Second, there have been renewed efforts to develop convincing theoretical explanations for Milgram’s findings. Third, there have been revisions in the generalization of these studies to the Holocaust. There have also appeared new research and criticisms based on documents in the Milgram archives at Yale University. This chapter also discusses a number of core issues not specific to the obedience experiments per se but that play a central role in the manner in which these studies have been embraced by many social scientists and yet rejected by others. These include the ideological tensions between situationist and other perspectives, the question of whether explanations exonerate perpetrators, social psychology’s presumption of the good nature of most people, and the role of free will and the assignment of moral responsibility and punishment to perpetrators. These issues are considered after a discussion of the fame of these studies and a brief synopsis of Milgram’s original findings and interpretation, as well as a number of more recent experiments and theoretical developments on harmful obedience.

The Fame and Controversy of the Obedience Experiments

Their Visibility and Prominence

If one defines academic fame (or infamy) in terms of visibility and prominence, the Milgram experiments are likely unsurpassed in the social sciences. Consider the recent presence of these studies in research and scholarship: (1) in virtually every introductory and social psychology textbook, frequently with numerous pages dealing exclusively with the experiments (e.g., Kassin et al., 2014, pp. 278–288; Smith et al., 2015, pp. 370–382); (2) in books about the experiments (Blass, 2000; Lunt, 2009; Perry, 2013); (3) in a biography of Milgram (Blass, 2004); (4) in book chapters (e.g., Miller, 2004; Reicher & Haslam, 2012); (5) in a special issue of Theoretical and Applied Ethics (Herrera, 2013) devoted to a renewed ethical critique of the obedience experiments; (6) in a special issue of Theory & Psychology (Brannigan, Nicholson, & Cherry, 2015) containing highly critical evaluations of the conduct and meaning of the experiments; and (7) in an issue of the Journal of Social Issues marking the 50th year since Milgram’s first publication of his findings (Haslam, Miller, & Reicher, 2014). Conferences and symposia on the obedience research have also been held. Millard (2014), using archival material, has written an informative, somewhat critical analysis of Milgram’s film Obedience (1965) and its contribution to the continuing fame of the studies. A new film, Experimenter: The Stanley Milgram Story: Illusion Sets the Stage, Deception Reveals the Truth, directed by Michael Almereyda, premiered at the New York Film Festival on October 6, 2015, and was released in U.S. theaters on October 16 (http://www.imdb.com/title/tt3726704/). Illustrative reviews may be found at Dargis (2015) and http://www.ioncinema.com/reviews/experimenter-review.

New experiments on obedience (e.g., Beauvois, Courbet, & Oberle, 2012; Burger, 2009) have been published, in addition to a theoretical model (Reicher & Haslam, 2011; Haslam, Reicher, & Birney, 2016), and numerous commentaries in social psychology and in other disciplines as well—in business ethics (e.g., Werhane, Hartman, Archer, Englehardt, & Pritchard, 2013), political science (e.g., Wolfe, 2011), history (e.g., Overy, 2014; Valentino, 2004), the law (e.g., Hanson & Yosifon, 2003; Perlman, 2007), and philosophy (e.g., Card, 2005). In March 2014, Google Scholar listed Milgram’s first (1963) obedience publication as having more than 3,000 citations, and his 1974 book close to 7,000. The rate of citations is rising (e.g., Reicher, Haslam, & Miller, 2014). Websites that feature rankings of the most famous social psychologists invariably include Milgram (e.g., www.socialpsychology.org/social-figures.htm).

The Ethical Controversy

The ethical criticisms have unquestionably played a vital if ironic role in the fame of the obedience experiments, starting at the very beginning (Baumrind, 1964; Blass, 2004). One could have expected that a study judged as unethical would gradually have lost its credibility, resulting in its being marginalized from the usual academic venues. The opposite, in fact, seems to have occurred. The value of the research, for some, has been so imposing as to relegate the ethical issues to a somewhat lesser role, as they in fact had at the time Milgram conducted the experiments (e.g., Benjamin & Simpson, 2009; Elms, 2009). Others, however, have viewed the deficient ethics of the experiments (and of Milgram himself) as negating their value, past and present. The discovery of relevant documents in the Yale archives has given investigators the opportunity for new ethical criticisms. For example, Nicholson (2011) remarked: “For contemporary social psychologists, Milgram is a poor role model and the obedience research is a dark chapter, instructive not for its intellectual merit, but as a cautionary tale in scientific excess and hubris” (p. 755). Perry (2013) commented: “I have concluded that evidence from Milgram’s unpublished papers and original recordings and transcripts casts doubt on Milgram’s reliability as a narrator of the obedience research and of his role in safeguarding the welfare of his subjects” (p. 90). The precise target of these very recent ethical critiques (see also Baumrind, 2014; Herrera, 2013) is unclear. It cannot be Milgram, who died in 1984, nor the more recent obedience studies, which have received ethical approval from their departmental review boards. These current ethical criticisms may, then, be aimed at the very high regard in which Milgram’s experiments are held by so many others, in academic as well as other contexts. The persistent fame of the experiments is likely to be very disturbing to current critics.

In their analysis of research ethics, Sieber and Tolich (2013) acknowledge that Milgram was studying an issue of great importance: “We defend Milgram scientifically . . . recognizing his great importance to understanding our moral failings as humans” (p. 53). They note that Milgram (in the late 1950s/early 1960s) was operating largely in an ethical vacuum without formal guidelines for protection of participants and that Milgram did in fact use a detailed debriefing (Milgram, 1964), described as “ahead of his time” (p. 56). Although the use of deception is not, in principle, considered unethical by Sieber and Tolich, they are critical of Milgram’s use of deceptions to achieve his goal of pressuring individuals to behave counter to their moral principles. Milgram’s major ethical violation, by today’s guidelines, was his failure to respect his participants’ right to withdraw: “Informed consent is essentially about ensuring the autonomy of persons throughout the research process” (Sieber & Tolich, p. 139). Thus a difficult question faces the researcher: Can destructive obedience be studied experimentally in an ethical manner? The answer was, for several decades, “no.” As will be seen, it now appears to be “yes.”

The Obedience Experiments and the Holocaust

Milgram generalized his findings to the Holocaust in the opening paragraph of his first publication (Milgram, 1963, p. 371), an association that unquestionably would soon contribute to the fame and controversiality of the studies both for Milgram and the field of social psychology itself (e.g., Miller, 2004). The concepts of the pathological “Nazi mind” and authoritarian personality were the prevailing understanding of perpetrators when Milgram initiated the obedience project. Milgram’s findings were strikingly different, that is, that ordinary people, under certain conditions of authorization, would be willing to inflict unexpectedly high levels of intolerable pain on a protesting victim. Social psychologists have widely (but not universally) endorsed the association. Ross et al. (2010) noted, for example, that “fully appreciating the implication of these studies . . . reduces the surprise experienced on learning that most of the low-level perpetrators of the Holocaust . . . were ordinary people who lived unexceptional lives before and after their infamous deeds rather than self-selected psychopaths and sadists” (p. 10). Similarly, Holocaust historian Richard Overy (2014) has observed that “Milgram’s influence has expanded as historians look for scientific corroboration of the argument that ‘normal men’ can perform horrific acts without necessarily imbibing a hate-filled ideology or being driven by a visceral popular racism” (p. 520).

There have, however, always been severe critics of the “normality” view of perpetrators. Lang (2014), for example, focuses on the issue of moral responsibility:

When Milgram invoked the Holocaust as an analogy to his obedience experiments, he inadvertently deprived the Nazi genocide of its historical meaning and relegated perpetrator behavior to a function of hierarchical social structures. The result, which continues to exert considerable influence both inside and outside the discipline of social psychology, is an ahistorical explanation of perpetrator behavior that eviscerates any forceful conception of individual agency, reduces political action to acts of submission, and finally calls into question the very idea of personal responsibility. (p. 665)

Similarly, the political scientist Alan Wolfe (2011) has noted: “No matter how flawed Milgram’s work and no matter how unsupported his conclusions, the notion that ordinary people are quite capable of inflicting terrible harm on innocent others when pressured to do so by respected people in authority has become so widely accepted that nothing is ever likely to shake it. We are, or so we are told, all part of genocide now” (p. 69). A number of social psychologists (e.g., Blass, 2004) have criticized the discipline for glossing over the gratuitous brutality of the Holocaust and, preoccupied with the obedience experiments, slighting the crucial role of the Holocaust’s instigators and high-level planners. The exonerating implications of the Milgram experiments have also been extremely problematic for some Holocaust analysts (e.g., Mandel, 1998). Valentino (2004, p. 50) has also pointed out that Milgram’s participants lacked any prior justification for hating their victim. In fact, from the participants’ perspective, that the teacher and learner roles had been assigned randomly (Milgram, 1974, p. 18) could reasonably be interpreted as humanizing the learner. Milgram, in fact, envisioned a study combining the role of malevolent authority and victim dehumanization (1974, p. 10). This would be extremely important research, but it remains to be conducted. Most of Milgram’s experimental variations yielded considerably less than 100% obedience, thus allowing a clear potential for added punishment directed at a dehumanized learner (e.g., Bandura, 1999; Haslam & Loughnan, Chapter 8, this volume).

Milgram (1974, p. 6) associated his experimental findings with Hannah Arendt’s (1963) provocative description of the high-ranking Nazi official Adolf Eichmann as unexpectedly ordinary and primarily motivated to be a good Nazi bureaucrat—that is, the “banality of evil.” For decades, Eichmann’s photograph has often been included in social psychology textbook coverage of the Milgram experiments (e.g., Gilovich, Keltner, & Nisbett, 2011, p. 11). Relevant to social psychology’s traditional linkage between Eichmann and the Milgram experiments is the recent publication (Kershner, 2016) of a previously unknown plea written by Eichmann to the Israeli Court. Having been convicted and sentenced to death, Eichmann pointed to the condoning aspects of his situation (to which, he argued, the judges were insensitive) and claimed that he was a subordinate following orders, not a leader issuing them:

The judges made a fundamental mistake in their judgment of me, because they are not able to empathize with the time and situation in which I found myself during the war years. . . . there is a need to draw a line between the leaders responsible and the people like me forced to serve as mere instruments in the hands of the leaders. I was not a responsible leader, and as such do not feel myself guilty. (in Kershner, 2016)

A few days after writing this plea, he was hanged on June 1, 1962. Many would, of course, argue that Eichmann was simply fabricating, demonstrating what Mandel (1998) has termed the “obedience alibi,” and are, in this context, strongly against generalizing Milgram’s findings to Eichmann’s role in the Holocaust. Consistent with this argument are three biographies of Eichmann which have cast severe doubt on Arendt’s apparently benign depiction of Eichmann (e.g., Cesarani, 2004; Lipstadt, 2011; Stangneth, 2014). Lipstadt (2011) noted: “In Eichmann’s case her [Arendt’s] analysis seems strangely out of touch with the reality of his historical record” (pp. 169–170), noting his postwar zeal about having caused the deaths of Hungarian Jews in particular, and millions of others. In reviewing a film by Claude Lanzmann, which features an interview with a formerly imprisoned rabbi who worked under Eichmann, Lane (2014) states that “Short shrift is given to Hannah Arendt and her celebrated coining . . . of ‘the banality of evil.’ The man was far from banal, as Murmelstein explains: ‘He was a demon’ ” (p. 84). Why have social psychologists been wrong about Eichmann? Newman (2001) suggested a bias: “Social psychologists interpreted her work in a way that was consistent with their preferred conclusion” (p. 16), that is, that Eichmann was unexpectedly normal and not a sadistic monster. Newman suggests that Arendt’s more complete argument may in fact have been less exonerating toward Eichmann (see Lang, 2014, for a similar conclusion). The banality of evil remains a powerful idea. However, Eichmann now appears to have been “the wrong man” for this role. Also, it is likely that not all of Milgram’s obedient participants were banal. Different motives could have driven the obedient behavior (e.g., Reicher & Haslam, 2011).

The relevance of Milgram’s findings for harm doing in complex organizational structures—the Nazi regime being a case study—has emphasized the ambiguous and, at times, intentionally hidden moral responsibility and habitual obedience at different levels of authority or subordination (e.g., Kelman & Hamilton, 1989; Russell, 2014; Werhane et al., 2013). Fermaglich (2006) noted that “Milgram’s work, with its emphasis on the ordinary evils of the bureaucracy, made a significant impact on Holocaust scholars that lasts to this day” (p. 116). Leadership, of course, is also paramount in “successful” organizations. As Valentino (2004) stated: “The centerpiece of Hitler’s political philosophy was the ‘Fuhrer Principle,’ a doctrine that demanded unquestioning obedience to a single leader” (p. 62).

Overy (2014), emphasizing the power of authority, considers as a major finding Milgram’s evidence (1974, Chapter 3) that his colleagues predicted that very few volunteers would go beyond the lowest shock levels. People similarly underestimated their own obedience. Why does this occur? Epley (2015) has suggested that “you know your conscious aversion to harming another person, but you miss the difficulty you would have in saying no to a clear and reassuring authority figure in the heat of the moment after having lived much of your life following orders from authority figures. You are missing the construction that happens inside your own brain” (p. 29).

Overy (2014), similarly, notes:

In most cases, the power of ordinary people to confront or withdraw from the imperatives to comply, whether these are intellectual or institutional or social or psychological, is evidently much more limited than the black/white division between perpetration and dissent would suggest. If the power of ordinary people to say no were greater, they would perhaps say no more often. (p. 527)

In a recent, particularly thoughtful and compelling analysis of genocide and mass murder, de Swaan (2015) discusses the Milgram experiments in some detail. Although harboring questions and doubts about the experiment’s direct relevance to historical genocide as well as participants’ actual view of the experimental set-up, he acknowledges the value of Milgram’s findings, in a manner very similar to Overy (2014):

The experiment does not warrant either the popular conclusion that in real life most people will comply with a genocidal dictatorship or its contrary, much less popular conclusion that a sizable minority would resist it in reality. It is hard to decide whether the subjects actually believed they were administering severe torture or even a death penalty to their subject or, rather, that they experienced the setup as a very serious game. Moreover, the difference between the compliers and the naysayers was never thoroughly explored by Milgram and his followers. But Milgram did establish that people are much more prone to obey a person of authority ordering them to harm someone else than just about anyone had expected before his experiments. (de Swaan, 2015, p. 254)

This shattering of the illusion of personal autonomy appears to be a particularly important discovery by Milgram, particularly when one remembers that his research took place barely over a decade following the Holocaust.

Fame: The Result of an Enduring Great Idea

The physicist Freeman Dyson (2015) recently noted, in discussing the fame of Albert Einstein:

But there are a few scientists whose lives and thought are of perennial interest, because they permanently changed our way of thinking. To the few belong Galileo and Newton and Darwin, and now Einstein. For the select few, there will be no end to the writing of books [about them]. New books will need to be written and read, because these people had enduring ideas that throw light on new problems as the centuries go by. (p. 14)

Many admirers of Milgram’s experiments would add his name to Dyson’s list. For example, Benjamin and Simpson (2009):

It is critically important to understand why this study so captivates psychologists and psychology students today as well as ordinary citizens. It has all the elements of an experimentum crucis, an experiment designed to answer a question of great importance. It asked a big question, an important question—how far will a person go in inflicting severe pain on a stranger when instructed to do so by an authority figure? It is not just a psychological question. It is a moral question. Indeed it is a societal question. (p. 17)

In a recent essay, Gazda (2015) describes painful distant memories of graduate school dealing with his administering punishment to laboratory rats and other animals. He is

astonished that the daily grind of depriving, shocking, and killing these animals did not move me to leave my job. . . . My rationalization is that I was a student and younger worker in institutions of higher learning, programmed to receive the wisdom of academia. I was studying how the science that supposedly advanced our civilization was done. (p. 8)

He then adds:

Speaking of his infamous experiments in which human subjects followed orders thinking they were giving extremely painful shocks to other humans, the Yale University psychologist Stanley Milgram said, “Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process.” I think that describes me pretty well. (p. 8, emphasis added)

Milgram himself considered it “the most fundamental lesson of our study” (1974, p. 6). Gazda’s reference to Milgram in a context of animal research would likely spark more controversy. Nevertheless, his quotation would define, for many, Milgram’s most enduring idea.

Research on Obedience to Authority

Milgram’s Original Findings and Current Theoretical Explanations

Milgram’s major findings were: (1) an unexpectedly high rate of obedience (65%) in the baseline experiment (1963), indications of extreme stress in many participants, and evidence that observers vastly underestimated the actual obedience when given descriptions of the basic paradigm; (2) sharply varying obedience rates in over two dozen situational variations, including a predominance of disobedience in many of them (1974) (for an informative comparative analysis of the obedience rates in 21 of Milgram’s experimental variations, including several studies from the Yale archives not included in Milgram [1974], see Haslam, Loughnan, and Perry [2014]); and (3) considerable individual differences among participants within the experimental variations. Two chapters (1974) provided a rich, if highly impressionistic, analysis by Milgram of specific individuals’ reactions to being in his study. Personality factors were intended for further research, but only one brief study was conducted (Elms & Milgram, 1966). (For informative discussions of the origins of the experiments, see Blass [2004] and Russell [2011].)

Current explanations largely echo Milgram’s own situational accounts but cite important empirical support (not available to Milgram) for the relevant processes—although in research contexts not dealing with obedience per se. Burger (2014) identified four key situational factors likely to have increased obedience: (1) small increments in administering the shocks, (2) the novelty of the situation and the absence of normative information regarding appropriate behavior, (3) an opportunity to deny or diffuse responsibility, and (4) little opportunity to reflect because of the fast pace of the tasks. Miller (2014) also emphasized the factors of moral disengagement and dehumanization, for example the greater punishment inflicted with increasing physical distance from the victim (e.g., Bandura, 1999), and what Milgram (1974) termed “cognitive tuning,” in which subordinates are often primarily oriented to those individuals with greater power or authority (e.g., Werhane et al., 2013). Note that these constructs or processes are all tied to specific situational arrangements or roles and do not deal with personality factors or individual differences in moral judgment or reasoning among those participating in the studies.

The explanatory status of Milgram’s central concept of the agentic state (the participant’s ceding personal responsibility to the authority) is at best marginal. Reicher and Haslam (2011), for example, argued that this construct cannot readily account for the extremely varied rates of obedience across his experimental variations, as, presumably, the agentic-state orientation, activated by the presence of legitimate authority, would have been operative in all of them. Beauvois et al. (2012) report positive evidence, but most recent studies do not (e.g., Burger, Girgis, & Manning, 2011). Nevertheless, some researchers and writers appear to presume the agentic state’s validity. For example, Moore and Gino (2013) observed that, when debriefed, many participants in the obedience experiments commented, “ ‘if it were up to me, I would not have administered shocks to the learner’ (Milgram, 1974). Of course, their behavior was up to them, but the presence of a legitimate authority figure allowed them to pass off their responsibility to another” (p. 66, emphasis in original). It is not clear, however, how many of Milgram’s participants voiced this particular idea (i.e., “if it were up to me . . .”), and the certainty with which Moore and Gino attribute responsibility to Milgram’s participants is hardly a consensus point of view, as will be shown.

The gradual escalation of punishment has been a particularly widely cited theoretical idea, for which there is an abundant and supportive research literature. One likely underlying process involves a change in self-perception during the course of committing increasingly punitive acts. For example, this concept can explain actors in a “bug-killing” paradigm (e.g., Martens, Kosloff, & Jackson, 2010), or observers of harm doing by others. Hartson and Sherman (2012) demonstrated that the “gradual escalation of behavior can alter how individuals perceive the problematic behavior of others, reducing the severity of moral judgments and leading individuals to hold others less accountable for their actions” (p. 1287). Writing in a legal context, Perlman (2007) endorsed many of Milgram’s theoretical concepts, but regarding the escalation dynamic, he pointed out that “although each step up on the shock generator was only fifteen volts, subjects did not experience each step in precisely the same way” (p. 467). Here, he refers to the sudden emergence of significant disobedience at the 150-volt stage, an observation later emphasized by Burger (2009) and Packer (2008).

The lack of theoretical precision in explaining Milgram’s findings remains problematic, particularly consequential when the experiments are generalized to other settings—for example, to obedience in fraudulent corporate settings (e.g., Werhane et al., 2013) or to atrocities such as the My Lai massacre (Milgram, 1974) or the Holocaust. Applications of this kind rest on the viability of generalizing the theoretical processes underlying Milgram’s findings to these other contexts and to the underlying causes for what happened there (e.g., Mook, 1983). In certain instances, one can indeed find persuasive generalizations of process. For example, the escalation phenomenon, central to Milgram’s basic paradigm, has been cited by historians as an important dynamic of the Holocaust. Stargardt (2015) noted, in reference to the 1941 decree that all Jews (over age 5) had to wear a yellow star:

The yellow star was the most visible nation-wide measure taken since the November 1938 pogrom (Kristallnacht) and it came into effect across the whole Reich on the same day, 19 September 1941. Its mandatory character could not be in doubt and it was immediately recognized as an important escalation, conditioning how the burghers of Minden digested the news a few weeks later of the deportation and first mass shootings of Jews from their town. (p. 236, emphasis added)

Without a convincing theory and a conceptual understanding of the target domains, however, many generalizations are largely guesswork, and hence fertile ground for controversies. There is, however, a recent promising development of a theoretical model, with empirical support, to explain Milgram’s findings (e.g., Reicher et al., 2012), which may significantly facilitate generalizations. This research is discussed later in this chapter.

Experiments on Obedience to Authority


Using Variations on Milgram’s Paradigm
The first major obedience study in several decades was reported by Burger
(2009). He introduced key procedural changes in order to receive ethi-
cal approval from his university’s institutional review board (IRB): (1) a
detailed informed-consent form, specifying the participants’ freedom to end
their involvement at any time; (2) preexperimental screenings to rule out
anyone who would experience undesired stress (or who had been familiar
with the obedience research)—the experimenter himself was a clinical psy-
chologist; and (3) the decision to terminate the experiment after the 150-volt
level (a unique point in the shock sequence at which the learner, on tape,
first protests loudly and demands to be released). Burger based the 150-volt
maximum on Milgram’s Experiment 5 (1963), which indicated that a clear
majority of those who exceeded 150 volts went to the 450-volt maximum.
Packer (2008) had analyzed disobedience rates in the data tables for eight of
Milgram’s variations (1974) and demonstrated that the highest rate of dis-
obedience (37%) occurred at 150 volts. The underlying ethical premise here,
of course, was that the experiment would be experienced as far less stress-
ful below 150 volts. Burger also included a “modeled-refusal” condition,
in which another participant (an accomplice) refused to continue past the
90-volt switch; this was analogous to Milgram’s “two peers rebel” variation
(1974, Experiment 17), which had shown a marked reduction in the obedi-
ence rate. Out of a much larger initial group of volunteers, the final sample comprised 70 men and women of varying ages and demographic backgrounds.
The major finding was that 70% of participants were obedient, com-
parable statistically to Milgram’s finding (82.5%). Men and women were
similarly obedient (as had been found by Milgram). However, the modeled-
refusal condition had no effect. In addition, participants who ultimately
refused to continue past 150 volts had received the first prod (to continue)
significantly earlier than those who were willing to exceed 150 volts—sug-
gesting that it is easier to disobey if one starts the hesitation process rela-
tively early. Burger did not mention any expressions of emotional conflict
or strain during the study. These reactions were, of course, one of Mil-
gram’s major findings.
In a follow-up analysis of participants’ (tape-recorded) spontane-
ous postexperimental reports, Burger et al. (2011) observed that very few
obedient but a majority of disobedient participants expressed a sense of
personal responsibility for what happened to the learner. However, these
groups did not differ in the percentage attributing responsibility to some-
one other than themselves, thus not supporting Milgram's agentic-shift hypothesis. In a detailed analysis of the prods, they observed that obedience was highest for the quite reasonable Prod 1 ("Please continue"), but,
surprisingly, no participant who reached what was viewed as a true order—
Prod 4 (“you have no other choice, you must go on”)—obeyed it. Acknowl-
edging the confounding of the wording of the prods with their sequential
position, Burger et al. (2011) suggested that if obeying orders was not in
fact what Milgram was studying (as implied in refusals to obey Prod 4),
then alternative interpretations of his findings should be considered. As
will be shown, an alternative has indeed been advanced (Haslam, Reicher,
& Birney, 2014). Another possibility, however, is that a number of Burg-
er’s participants may have remembered the explicit assurance about being
free to leave given to them at the informed-consent phase, thus explaining why those who received the last (fourth) order-like prod did not obey it. Sieber
and Tolich (2013) found Burger's (2009) study ethically impressive but were critical of his continued use of the four strident prods. They recommend
that “the research assistant would tell the research subject only once, ‘The
experiment must continue.’ The second time the subject protests, his or her
right to withdraw must be respected” (p. 58).
Beauvois et al. (2012) conducted an obedience study on the set of a tele-
vision studio in a game show context. In three similar conditions, involving
electric shock as punishment in a quiz game, they reported obedience rates
ranging from 72 to 81%. Similar to Burger’s modeled-refusal condition,
there was no effect of a defiant model; however, a “host withdrawal” con-
dition (analogous to Milgram’s “experimenter absent” variation) lowered
obedience significantly. As Milgram (but not Burger) had observed, these
researchers noted nervous laughter and cheating in the form of cueing the
contestant with voice tones as to the correct answer (thereby not requiring
electric shocks). Similar to Burger's finding, use of the final prod was ineffective, with only one of the 16 participants who heard it yielding to it.
Zeigler-Hill, Southard, Archer, and Donohoe (2013) used (apparent)
blasts of white noise as punishment. An obedience rate of 94% was obtained,
as only two participants disobeyed orders (on the last trial). Rochat and
Blass (2014) reported an unpublished study by Milgram titled the “bring
a friend condition.” This was apparently his last variation, conducted at
Bridgeport in 1962 and located in the Yale archives. Of 20 male volunteers,
only 3 went to 450 volts, an obedience rate of 15%. Clearly, for the majority
of participants, the motivation to help a friend in the learner role prevailed
over any obligation to the experimenter. Perry (2013) terms this study Mil-
gram’s “secret” experiment and is critical of his not publishing it.

Different Paradigms
Bocchiaro, Zimbardo, and Van Lange (2012) introduced a new obedience
paradigm to investigate whistle-blowing. The basic design involved pre-
senting participants with an unethical request by an authority—to send a
deceptive letter encouraging their (real) acquaintances to participate in a painful study that the letter nonetheless described as pleasant. Participants had three
options—obedience, defiance, or whistle-blowing (reporting, anonymously,
the unethical study to the university research committee). Obedience was
relatively high, 76.5%, with disobedience low at 14.1% and whistle-blowing
at an unexpectedly low 9.4%. To assess estimated behavior in this setting, an
earlier independent sample was simply given a description of the study and
asked to predict their own (and the average student’s) responses. As had been
shown by Milgram (and others), predicted behavior for oneself and for the
average person differed sharply from actual behavior in their study and in a
radically more socially desirable direction (highest for whistle-blowing, low-
est for obedience). This sharp discrepancy between predicted and actually
observed behavior, with Milgram’s being one of the first empirical documen-
tations, has been a reliable phenomenon for decades in social psychology,
one which testifies to the informational value of social-psychological inquiry.
Bocchiaro and Zimbardo (2010) adopted a variation of Milgram’s
paradigm (Meeus & Raaijmakers, 1986), using “personal insults” instead
of physical punishment. They set up procedures specifically to encourage
disobedience (authority and participant in different rooms, participant and
learner in close proximity). Disobedience prevailed, as only 30% of partici-
pants obeyed all insult instructions (to an accomplice working on a task).
Participants indicated considerable amounts of tension, as well as verbal
utterances of dissent, regardless of whether they were ultimately obedient or
resistant. As observed by Burger et al. (2011) and Packer (2008), there was a
critical decision point at the 11th of 19 trials, in which the accomplice's script
required him to try to physically free himself from the apparatus and leave.
At this point 67% of participants immediately disobeyed.
Several studies using a variety of computer-based technologies
and virtual reality simulations have also investigated obedience (e.g.,
Cheetham, Pedroni, Antley, Slater, & Jancke, 2009; Dambrun & Vatine,
2010). The ethical gains of these role-playing studies are important, and
obedience–disobedience rates are generally comparable to those of Mil-
gram (whose findings are always included as the benchmark). Dambrun
and Vatine (2010) used an immersive video environment with a French
sample of participants, with a real actor filmed, recorded, and displayed on
a computer screen. Using a comparable experimental condition (victim hid-
den), they observed an obedience rate similar to Milgram’s baseline result
(1963). Obedience dropped sharply in the visible condition. The inclusion
of a new variable, the victim’s ethnicity (North African), did not result in
significantly higher obedience, though participants were less anxious and
reported a lower level of distress (in comparison with a French-victim con-
dition). Role-playing procedures have, in the past, always generated critics
who doubted their realism, but the new technologies may have the potential
to minimize this self-consciousness.

Individual Differences in Harmful Obedience


Burger (2009) found no significant differences between obedient and defi-
ant participants on measures of empathic concern or desire for control.
Higher empathy (but not desire for control) scores were related to lower
first-prod scores. Begue et al. (2015) brought most of the participants in
the Beauvois et al. (2012) study back 8 months after the experiment and
administered various measures (e.g., the Big Five personality inventory).
There were small but significant correlations between agreeableness and
conscientiousness with highest shock intensity reached, but not with actual
obedience rates. They also observed a correlation between liberal social
attitudes and lower intensity of shocks delivered, reminiscent of a similar
early finding on authoritarianism by Elms and Milgram (1966). Females
(but not males) who scored higher on political rebelliousness administered
lower shocks. Zeigler-Hill et al. (2013) report that lower neuroticism scores
on the Big Five personality inventory were associated with more prods, that
is, with a greater reluctance to obey the experimenter.
The results reviewed here reveal correlations that are either negative
or relatively small, and the measures are typically different across studies.
Meaningful causes for the dramatic evidence of individual differences in
these studies thus have yet to be discovered. Another individual-difference
issue pertains to the consistency of behavior. Criticizing the dominance
of a situational perspective, Funder (2009) noted: “The classic studies of
social psychology placed subjects into evocative situations (e.g., Milgram’s
obedience experiments, the Zimbardo prison study), but almost never, if
ever, placed the same participants into more than one situation so that
consistent patterns of behavior and associated traits could be detected”
(p. 125; emphasis in original). Perhaps variations on some of the recent
ethical obedience paradigms may be capable of this kind of needed inquiry.
Political attitudes, first suggested by Elms and Milgram (1966), may be
the most promising. Research by Graham et al. (2011) has shown that liber-
als (vs. conservatives) give significantly lower priority to issues of authority/
respect, one of five basic moral virtues in intuitive judgments, a finding
consistent with less obedience shown by participants rating themselves to
the political left in the Beauvois et al. (2012) study. Frimer, Gaucher, and
Schaefer (2014) suggest that past research on the conservative–obedience
relationship has been confounded with the assumption that the authorities
are also conservative. They suggest that liberals are more obedient than
conservatives when the authorities are also liberal, that is, that “obedience
itself is not ideologically divisive” (p. 1205). Is Milgram’s experimenter per-
ceived as relatively politically conservative? The evidence tentatively points
in this direction. Finally, at least a small number of participants in these
studies may in fact enjoy the opportunity to harm the learner. Buckels,
Jones, and Paulhus (2013) reported that college students scoring high on
a sadistic impulse inventory were more cruel on the bug-killing task of Martens et al. (2010) and more willing to harm an innocent victim in a
(deceptive) computer game.

Summary
The studies reviewed here have successfully demonstrated that predictably
harmful obedience (or resistance) can be elicited from ordinary individu-
als (except in Burger’s use of screening) in an ethically acceptable manner.
Burger’s (2009) groundbreaking study has, not surprisingly, been contro-
versially received (Benjamin & Simpson, 2009; Blass, 2009), with some viewing
it as not inducing sufficiently intense moral conflict to carry the same sig-
nificance as Milgram’s original (e.g., Elms, 2009; Miller, 2009). Neverthe-
less, there is clear acknowledgment that the new ethical guidelines must
be followed. Although none of the studies have produced major advances
in terms of a theoretical understanding of their (or Milgram’s) findings,
ideally this will occur in future research. Moral conflict can be intro-
duced ethically in obedience research settings without the extreme anguish
reported by Milgram.
All of the recent studies have used Milgram’s findings as a benchmark
with which to compare their results. For example, Burger (2009) concluded:
“I am as confident as a psychology researcher can ever be that my findings
can be legitimately compared with Milgram’s” (p. 10). These newer stud-
ies, however, are so procedurally different from Milgram’s that compari-
sons of outcomes, in terms of specifying underlying causal processes, are
largely uninterpretable (e.g., Miller, 2009). Perhaps it is the fame of the
Milgram studies that is exerting itself in these understandable but concep-
tually inconclusive comparisons.

At the Center of the Milgram Controversy:


The Problem of Moral Responsibility

There is another source of great controversy, one that is related to gen-


eralizations from the experiments to the Holocaust and other settings.
The focus here is on moral responsibility. To what degree are the obedient
participants in the studies considered fully responsible for their behavior?
There are very strong differences of opinion on this issue. The remainder of
this chapter examines several areas of contemporary research and thought
relevant to this question.

Good Persons Doing Evil


The participants in the Milgram experiments have always been viewed as ordi-
nary, presumably good persons (Milgram, 1974, p. 6). Presumptions of
the ordinariness and good nature of people typically studied by social psy-
chologists and related disciplines are explicit in the titles of many books
and articles. For example (emphasis added):

Why ordinary people torture enemy prisoners (Fiske, Harris, & Cuddy, 2004).
Contemporary racial bias: When good people do bad things (Dovidio,
Gaertner, Nier, Kawakami, & Hodson, 2004).
The Lucifer effect: Understanding how good people turn evil (Zimbardo, 2007a).
The righteous mind: Why good people are divided by politics and reli-
gion (Haidt, 2012).
Blind spots: Hidden biases of good people (Banaji & Greenwald,
2013).

These titles forecast to readers that their intuitions—that good people like
themselves don’t do these “bad things”—are going to be shattered with
counterintuitive insights and challenges to commonly held, more comfort-
ing views. Certainly the terms ordinary and good seem to be reasonable
descriptions of the typical research populations studied by social psycholo-
gists, for example, the common reliance on college students.
One important if unintended consequence, however, of the “ordi-
nary or good person” focus may be a perceived condoning of the harmful
actions (or thoughts) about to be explained. That good people engage in
bad actions seems to soften the meaning of the harm doing. For example,
research on ingroup versus outgroup biases suggests that a good person,
that is, a member of one’s ingroup, will be judged significantly more leni-
ently for committing a harmful act than will a bad person (outgroup mem-
ber) engaging in the identical act (e.g., Smith et al., 2015, p. 507). There
is also the implication that if ordinary people engage in these undesirable
behaviors, then they are more common than people think, and, in turn,
somewhat less serious—similar to the “prevalence rule” in judgments of
illness (e.g., Martin, Rothrock, Leventhal, & Leventhal, 2003).

Challenging the "Good/Ordinary Person" Thesis of Situationism
A number of social psychologists, not favoring strong situationism, have
explicitly challenged the “good person–evil deeds” thesis. For example, if
one moves to paid volunteers, as in Milgram (1963), or screened (and paid)
volunteers, as in Zimbardo et al. (2007a) and Burger (2009), there is more
room for questioning the ordinariness of the research sample. Carnahan
and McFarland (2007) hypothesized that there was a selection bias among
Zimbardo’s volunteers. In Carnahan and McFarland’s study, their volun-
teers for a prison study scored significantly higher than a control group on
measures of the abuse-related dispositions of aggressiveness, authoritarianism, Machiavellianism, narcissism, and social dominance and lower on
empathy and altruism. These individuals would not seem quite as ordinary
or good as a random sample of college students. Among their counterargu-
ments, Haney and Zimbardo (2009) highlighted the (pretested) normalcy of
their participants and accused Carnahan and McFarland of being disposi-
tionalists, with an erroneously liberal view of the freedom people have: “In
fact, most people’s choices—even about serious or consequential matters—
are often highly situationally constrained” (p. 813).
In his recent, comprehensive analysis of mass murder, the sociologist de Swaan (2015) acknowledges the ordinariness of many killers, as
well as the power of the immediate situation, but he argues that this view
of “ordinary perpetrators” has been too influential in social science:

But the killers did not just share the collective past of their nation; they
had a biography of their own. They had particular experiences, devel-
oped their individual inclinations and opinions. The critical question
is whether the killers differed from others who lived through the same
times, in the same society, but who did not join the murderers’ ranks.
The search for such differences has been neglected for over half a century
because of this insistence that the genocidaires were all “ordinary men.”
Rather than look more closely at the perpetrators, people were to search
their own soul, since “you and I or anyone under the same circumstances
might have done the same thing.” In fact, no one knows what “anyone”
might do under these circumstances. The question is at once haunting
and meaningless. . . . The idea that in a certain social context, in a given
situation, people will commit acts that they would not dream of other-
wise is quite plausible. Some people are more likely to do so than others,
and some will resist even at considerable personal risk. Others may be
willing and even eager to follow orders. That depends not only on the
situation of the moment but also on their prior experience and personal
history, in a word, a term that with so many words has been declared out
of bounds: on their personal dispositions. And in other words: on their
particular personality. (pp. 263–264)

de Swaan (2015) cogently surmises that had most of us—good and ordinary persons at the outset—been raised as children and young
adults in the earlier phases of the Third Reich, “if this had been the course
of our lives, then, yes, maybe then, some of us might have become geno-
cidal killers. But then, you or I, we would have been someone else. The
impact of the genocidal situation on the behavior of the perpetrators has
rightly been stressed in the past half-century. There is a renewed interest in
the influence of historical-cultural conditions on the formation of a geno-
cidal mentality. What is overdue is an assessment of the importance of per-
sonal biography, or the individual dispositions that may lead to genocidal
actions” (pp. 264–265).
Another issue concerns possible changes that occur in "good" or "ordinary" research participants. They may be good or ordinary before
being assigned to certain conditions of the experiments, but less so after-
ward. Should Milgram’s obedient participants be considered to be as good
as those who defied the experimenter, especially when the former later jus-
tified their harmful actions toward the learner (e.g., Haslam, Reicher, Mil-
lard, & McDonald, 2015b)? Although successful debriefing represents a
kind of reversal of experimentally induced changes “so that they [research
participants] are restored to a state of emotional well-being” (Sieber & Tol-
ich, 2013, p. 151), the harmful, perhaps immoral conduct in these studies
has in fact occurred in the minds of the participants (and researchers) and
cannot simply be deleted for ethical convenience.
A particularly interesting challenge to the good/ordinary thesis can be
seen in a letter to the editor of the APS Observer (Donnellan, Fraley, &
Krueger, 2007). The authors criticize what they regarded as that journal’s
endorsement (Herbert, 2007) of Zimbardo’s situationist explanation of the
Abu Ghraib prison scandal:

Our concern is that Zimbardo has misrepresented scientific evidence in an attempt to offer a purely situational account of the antisocial acts
perpetrated at Abu Ghraib. . . . The scientific consensus, based on exist-
ing data, is that people vary in their propensity for antisocial behavior
and that environments transact with personalities. Some people are more
likely to turn out to be bad apples than others, and this is particularly
evident in certain situations. (p. 5)

Below this letter appear 40 signatures of well-known psychologists, most of them social psychologists with a strong interest in personality (e.g., S.
Alexandra Burt, David M. Buss, Samuel D. Gosling, Robert Hogan, Debo-
rah A. Kashy, David A. Kenny, Laura A. King, Scott O. Lilienfeld, Brent W.
Roberts, William B. Swann, Jr., Jessica L. Tracy, David Watson).
One can readily imagine a virtually identical letter voicing protest
against generalizing Milgram’s studies to the Holocaust. In his rebuttal let-
ter, after some denials of fact (e.g., he agrees that there are “bad apples”),
Zimbardo (2007b) reaffirmed a cornerstone of situationism: “The central
issue is not the relative predictive power of person vs. situation, rather
it is the typical underestimation of situational factors and channel fac-
tors that lead to erroneous predictions and misattributions of behavior
whenever such external factors are operating” (p. 6). Milgram (1979)
had earlier voiced a very similar position: “It is surprising how difficult
it is for people to keep situational forces in mind, as they seek a totally
personalistic interpretation of obedience, divorced from the specific situ-
ational pressures acting on the individual, which . . . we know to be cru-
cial in determining whether the person will submit or rebel” (p. 9). This
underestimation-of-the-situation argument is, however, not persuasive to opponents, who interpret it as giving clear priority to situational factors
over personality (e.g., Funder, 2009). Situationists still have to account for
individual differences in behavior in the same situation. Jetten and Mols
(2014) have argued, with some empirical evidence, that it was Milgram’s
decision to publish first (1963) only one of his many experiments, specifically one showing a very high rate of obedience, that led many readers to draw an incorrect message—that is, that most people
are unexpectedly, blindly obedient. Yet, for decades, millions of college
students have in fact learned about many of Milgram’s specific experi-
ments (showing strong variations in obedience) in generally superb text-
book coverage (e.g., Kassin et al., 2014, Fig. 7.8, p. 282; Smith et al., 2015,
Figs. 10.4 and 10.5, pp. 377, 378). Still, the take-away message may well
be an erroneous dispositional one for many people, informed or not (e.g.,
Reicher et al., 2014).

Rejecting the Accusation that "to Explain Is to Condone"
There is an interesting phenomenon related to the criticisms of situationism
noted above and, in turn, relevant to the Milgram experiments. This is the
widespread concern among social scientists well known for their analyses
of harm doing and evil that, in the process of explaining the perpetrators’
actions, they will convey an exonerating tone or implication. They are ada-
mant in rejecting this charge. For example (emphasis added):

“To offer a psychological explanation for the atrocities committed by perpetrators is not to forgive, justify, or condone their behaviors.
Instead the explanation simply allows us to understand the condi-
tions under which many of us could be transformed into killing
machines” (Waller, 2007, p. xvii).
“Throughout this book, I repeat the mantra that attempting to under-
stand the situational and systemic contributions to any individual’s
behavior does not excuse the person or absolve him or her from
responsibility in engaging in immoral, illegal, or evil deeds” (Zim-
bardo, 2007a, p. xi).
“Make no mistake: Understanding is not condoning . . . I learned the
hard way that the public assumes that the understanding excuses
the doing. In a painful BBC interview, victims of torture accused
us of collusion: by explaining the circumstances that could drive
almost anyone to commit immoral heinous acts, we were letting
them off the hook. . . . Social science is particularly vulnerable to
this understanding = excusing view because we explain how situ-
ations cause behavior, even extreme behavior. Observers prefer to
blame other humans for horrible outcomes, the more horrible, the
more human responsibility must be blamed” (Fiske, 2013, pp. 605–
606).
“Some [historians] fear that explaining the killers’ motives and actions
might seem to lessen their guilt. Auschwitz survivor Primo Levi
warned against trying to understand the murderers, ‘because to
understand is almost to justify.’ Yet every murderer acted with free
will and had no reason to fear punishment if he refused to commit
murder. No explanation can diminish the killers’ terrible guilt”
(McMillan, 2014, p. viii).

The circumstances of the perpetrators are, of course, included in many scientific explanations of harm doing, violence, and genocide—their social-
ization histories, life conditions, specifics of the harming situations, their
roles as members of social organizations and bureaucracies (e.g., Baumeis-
ter, 1997; Russell, 2014; Staub, 2014). In a court setting, these kinds of fac-
tors can, in principle, be used as mitigating evidence in decisions regarding a
convicted defendant’s sentence. Why, then, is there a resolute refusal on the
part of these scholars to acknowledge that their explanations will suggest,
at least to a degree, an exoneration of the perpetrators’ moral responsibility?
One reason is that scholars may anticipate, correctly, that their readers’
sympathies will be far greater with the victims than with their perpetrators
(e.g., Baumeister, 1997). For example, Mandel (1998) commented on the
Milgram–Holocaust linkage: “It is offensive to survivors (and to our mem-
ories of victims) who know all too well that there was much more behind
the way they were viciously brutalized, mocked, and tormented than a mere
obligation to follow orders . . . care should be taken in how explanations
are communicated, especially when they have a clear potential to cause
harm” (p. 91). That scientific explanations should never be constrained by
the acceptability of their conclusions to certain audiences may be a wor-
thy ideal, but clearly one not always comfortably achieved. Another reason
is that, similar to laypersons, many social scientists may feel that a firm
position on moral responsibility, what Tetlock, Self, and Singh (2010) have
termed a “prosecutorial mindset,” is simply the appropriate professional
value to espouse. They may privately acknowledge that exceptions can be
made, but the general rule, not to exonerate, appears much safer and per-
haps more objective, befitting the scientific role.

The Other Side: Explaining Does in Fact Lead to a Degree of Exoneration
A number of social psychologists have, however, suggested that an effort to
understand a perpetrator’s motives does in fact lead to a more exonerating
position. For example, Ross and Shestowsky (2012) stated:

If society’s goal is to have a criminal justice system that is not only effec-
tive but also logically and ethically coherent, additional implications of
a situationist perspective come to the fore. One such implication would
surely be a more “forgiving” response to transgressors who have been
subjected to unusually strong situational pressures, including pressures
whose strength is unlikely to be appreciated by lay observers who have
never faced those pressures. Another implication, we would argue, would
be a greater willingness to mitigate punishment in cases where the situ-
ational forces that weighed on the transgressors were ones to which they
did not choose to expose themselves or ones whose impact they could not
have anticipated in advance. (pp. 7–8)

Baumeister (1997), in discussing the nature of evil, took a similar position: "Unfortunately, there is ample reason to fear that understanding can
promote forgiving. Seeing deeds from the perpetrator’s point of view does
change things in many ways” (p. 386). These emphases on forgiveness and
mitigation capture the essence of the controversy regarding exoneration.
Contrast these expressions with Fiske’s (2013) previously noted disclaimer,
“Make no mistake: Understanding is not condoning.”
From Ross and Shestowsky’s perspective, Milgram’s obedient par-
ticipants should clearly be judged more leniently than might be the case
were their situation to be minimized or ignored. They acknowledge that
wrongdoers are often free to act otherwise (e.g., the defiant participants
in obedience experiments) but insist that a just treatment of perpetrators
demands a fuller, more sophisticated appreciation of the power of situa-
tional forces. In legal decisions, responsibility for harmful conduct is often
partitioned among several parties, thereby in effect exonerating some
perpetrators, perhaps many, from at least total responsibility (e.g., Card,
2005; Kelman & Hamilton, 1989). Were the learner to be (hypothetically)
harmed in Milgram’s study and to pursue legal action, the list of defen-
dants could be quite long: the teacher, the learner himself, Milgram, his
research assistants, Yale University, and the government agency funding
the research (e.g., Hanson & Yosifon, 2003). Presumably we would want
exoneration for a close friend, a loved one, or ourselves were these indi-
viduals the accused perpetrator (e.g., Shalvi et al., 2015). Exoneration thus
appears to be dependent, to a degree at least, upon whose wrongdoing is
at issue and who stands in judgment of it.

Do Situational Explanations Have Exonerating Implications? Research Evidence
Newman and Bakina (2009) presented participants with descriptions of
the results of two hypothetical studies of wrongdoing (cheating and domes-
tic abuse). Participants were randomly assigned to one of three types of
explanations for those results—dispositional, situational/contextual, and interactionist. They then made judgments regarding the similarity of the
researcher’s explanations to their own beliefs and other judgments from
three perspectives—their own, the researcher’s, and the average student’s.
The key finding was that “participants assumed that researchers promoting
what has been referred to as the ‘most central, defining’ message of social
psychology—that is, those arguing for the strong situational causation of
evil behavior—were less likely to believe that evildoers could be blamed
or held responsible for their wrongdoing than those presenting disposi-
tional accounts” (p. 265; emphasis added). Participants were most favor-
able to the interactionist and least to the situational explanation. Blame
and responsibility judgments were relatively high regardless of perspective
or which explanation they read—thus a clear rejection of situationism and
a reluctance of participants to exonerate perpetrators. These findings rep-
licated those observed by Miller, Gordon, & Buddie (1999). (See Tang,
Newman, & Huang, 2014, for confirming evidence.)
Why do lay persons prefer dispositional or interactionist explanations?
In responding to an essay by Lilienfeld (2012a), in which he discussed various
reasons underlying public skepticism about psychology, Newman, Bakina,
and Tang (2012) suggested that “laypeople are suspicious of accounts of
human wrongdoing that feature situational/contextual facts, and they pre-
fer dispositional ones” (p. 805). Lilienfeld (2012b), in response, noted: “To
the extent that social psychologists who communicate with the media at
times underplay the potency of dispositional influences on behavior and
portray situations as generating a virtually ineluctable drive toward anti-
social actions, public skepticism of such proclamations may be warranted”
(p. 809). (Lilienfeld signed the “not so situational” protest letter against
Zimbardo.) Observers of harm doing often respond to perpetrators in terms
of anger and attributions of blame and deserved punishment (e.g., Tetlock et
al., 2010). From this perspective, the partial exoneration of perpetrators—a
direct implication of situationism (e.g., Ross & Shestowsky, 2012)—can
readily be seen as violating norms of retribution and fair justice (e.g., for
interesting further discussions of these issues, see Levy [2015] and Ciurria
[2013]).

Haslam and Reicher’s Social-Identification Theory and Research on Obedience
An important advance in understanding of Milgram’s findings has been the
research of Reicher and Haslam (e.g., 2011), who have developed a social-
identification model and conducted several experiments to test its valid-
ity. According to their account, the participant is a highly engaged, active
decision maker, not a relatively passive respondent to contextual forces,
which, in Reicher and Haslam’s view, has been the prevailing depiction of
situationist accounts for decades. In their model, the participant (teacher)
ultimately must decide with whom to identify from the two competing
voices in Milgram’s paradigm—the increasingly demanding experimenter
or the protesting learner. As the experiment begins, the experimenter’s
role is, of course, more salient, but this can change for many participants
over the course of the study. Ultimate obedience or disobedience is thus
“predicated upon perceptions of shared identity” (2011, p. 168). Reaching
the decision as to “with whom shall I identify?” may be fraught with dif-
ficulty for many participants, accounting for the intense emotion, nervous
laughter, and conflict described graphically by Milgram (1963).
Haslam and Reicher have published four studies testing their model.
In the first, Reicher et al. (2012) asked a group of individuals to read a
synopsis of Milgram’s baseline study, then summaries of 15 of Milgram’s
variations (1974), rating each as to the degree to which each situation would likely have motivated participants in the teacher role (1) to identify with the experimenter as a scientist or (2) to identify with the learner as a member of the general public. The investigators then correlated the two ratings
with the actual rates of obedience reported by Milgram (1974). The results
strongly supported their prediction. Identification with the experimenter
correlated positively with percentage of obedience, whereas identification
with the learner correlated negatively. Reicher et al. (2012) concluded that
rather than the blind obedience frequently ascribed to Milgram’s experi-
ments, what is more accurately revealed is “an act of engaged followership
that flows from social identification with those in positions of leadership”
(p. 322).
In a second study, Haslam, Reicher, and Birney (2014) hypothesized
that Prod 4 was ineffective in the study by Burger et al. (2011) because the
harshly worded “You have no other choice, you must go on” produced reac-
tance, rupturing any bond that participants might have established with
the experimenter. They first asked a group of individuals to read the four
prods (verbatim from Milgram) in random order and to rate each of them
as to whether the prod represented an order, a reference to the scientific
importance to the experimenter, a request, or a justification for continuing.
The results clearly indicated that Prod 4 was viewed uniquely as an order.
These results enabled the investigators to ask their key question in the next
study: What effect would Prod 4 (and the other prods) have on participants
specifically when not confounded by its order of presentation (i.e., always following Prods 1, 2, and 3)? Haslam, Reicher, and Birney (2014) devised an
increasingly unpleasant task of asking participants to choose highly negative trait adjectives to describe increasingly pleasant groups (e.g., ranging from photos of the Ku Klux Klan to a family group on a walk). The results indicated that
participants assigned to Prod 2 continued much further in the trial series
than those exposed to Prod 4 and were much more likely to complete the
study’s postexperimental measures. The authors concluded that their findings do not support Milgram’s theory (emphasizing obedience to orders) but
rather “an engaged followership model . . . that participants’ willingness to
continue with an objectionable task is predicated upon active identification
with the scientific project and those leading it” (p. 473). In their view, con-
tinuing a course of destructive action is not a slippery slope “down which
perpetrators fall helplessly, but rather a mountain that has to be climbed
with, and for, a purpose” (p. 484).
In a third study, on the assumption that Milgram’s participants’ post-
experimental comments could reveal attitudes relevant to the social-identi-
fication model, Haslam, Reicher, Millard, and McDonald (2015b) exam-
ined Box 44 in the Milgram archives. A content analysis was performed on the cards in this box, each containing a respondent’s qualitative comments expanding on the questionnaire participants had filled out at Milgram’s request, as reported in Milgram’s (1964) rebuttal to Baumrind. Major themes derived
in this analysis included support for Milgram’s project (e.g., “felt pride at
being chosen to participate,” p. 71), support for the behavioral investiga-
tion (e.g., “I consider research essential to progress and think everyone
should encourage and aid when possible,” p. 73), and support for Yale (e.g.,
“My only desire is that someday I will be able to afford to go and learn
more about the human race. And I hope I will be able to attend the greatest
school in this country Yale of course,” p. 74). Haslam and colleagues report
that there were relatively few negative comments (e.g., “I have no way of
knowing if the experiment was of value or whether further experiments
are necessary to prove anything,” p. 74; “To me, psychology remains a
pseudoscience. These types of studies are little value in the study of human
behavior since the variables are too many,” p. 74).
The authors concluded that the vast majority of the comments relat-
ing to participants’ experience are indeed positive and have to do either
with the benefits of scientific inquiry or with the worthiness of the institution
that promotes the scientific value of research. Although the content of
the cards included references to the stress and dilemmas faced during
the experiment, there appears to have been a pronounced lack of stress
after the experiment, coupled with the generally positive evaluation and
engagement with the experiment. Haslam, Reicher, Millard, et al. (2015b)
concluded: “A belief in the science does not remove the dilemma that par-
ticipants face. It does not make them unaware that they are inflicting pain
. . . but it does help them resolve the key dilemma they confront, it does
help them live with themselves, and . . . makes them prepared to support
more of the same in the future” (p. 76). Haslam, Reicher, Millard, et al.
(2015b) regard these findings as consistent with their engaged-follower
explanation but not with the agentic-state account. They point, however,
to another ethical issue in suggesting that Milgram, in his debriefing,
succeeded in making participants comfortable at the cost of accepting an
ideology (science) with considerable potential for social destructiveness.
They conclude:

Our close investigation of what Milgram’s participants had to say about their experiences in his studies points to the way in which “science” itself
has the potential to be invoked as a “warrant for abuse.” This in turn
suggests that the horizons of scientific ethics (and the practices of eth-
ics committees) need to extend beyond a concern to ensure that par-
ticipants are content, to also reflect on what they are encouraged to be
content about. Certainly, all the evidence suggests that Milgram’s par-
ticipants were “happy to have been of service.” As scientists, we need to
ask whether this is the kind of service with which we want people to be
quite so happy. (pp. 79–80)

There were likely a variety of motives driving post-experimental “happiness,” for example, the operation of cognitive dissonance reduction
(i.e., justifying or rationalizing their immoral deeds in order to achieve a
relatively stable and positive sense of self [e.g., Cooper, 2007]). Identify-
ing with the experimenter may have been one means of reducing disso-
nance. Perhaps other participants simply enjoyed the experiment, seeing
it as stressful but also interesting and important, as millions of others
have found the experiments over the past 5 decades. In their most recent
study, Haslam, Reicher, and Millard (2015a) used a technique called
immersive digital realism, in which professional actors essentially played
the role of participants in a version of the Milgram paradigm. A variety
of features, including the size and appearance of Milgram’s laboratory,
were replicated. Paid actors were prepared ahead of time by a director to
learn the role of someone who would participate in a social psychology
experiment. The aim was to separate the actor’s (real) self from the role
he or she played, thereby enabling the research to be clearly ethical. The
actors took part in a modified restaging of Milgram’s new baseline condi-
tion and four of the experimental variations in Milgram’s (1974) series of
studies (e.g., two peers rebel; experimenter absent; learner proximal to
participant; no learner feedback [remote]). In this technique, a major goal
was to obtain behaviors that would indicate that the actors had genuinely
entered the role. Hence their behaviors would need to resemble Milgram’s
behavioral findings, not the vastly underestimated (idealistic) estimates
noted earlier.
Major findings included: 100% of the participants in the baseline con-
dition administered shocks exceeding 150 volts, on average reaching 300
volts; only 1 of the 10 participants who received what the investigators regarded as the only true order (Prod 4) obeyed it; and the average shock
level across the variations corresponded closely with Milgram’s original
findings (r = .59, p < .03). Most participants were agitated and stressed
during the study but expressed, during debriefing, being pleased to have
been in it. To test their social-identification model, Haslam, Reicher, and
Millard (2015a) first asked participants to rate their degree of identifica-
tion with the learner and the experimenter. Average ratings were similar
on both measures and higher than the midpoint. Thus the moral conflict,
crucially important in Milgram’s original conception of the studies, seemed
to be present in the participants’ perceptions of the situation they were in.
The primary effect seemed to be lesser punishment correlating with higher
identification with the learner (r(13) = –.59, p < .03) and a nonsignificant though positive correlation between identification with the experimenter
and more punishment (r(13) = .36, p < .21). The authors acknowledge fully
the limitations of this research in terms of small sample size and the extraordinary time and expense involved in the use of trained actors. A
useful next step would incorporate an experimental manipulation of social
identification as an independent variable, as a complement to the correla-
tional results of these studies.
Haslam, Reicher, and Millard (2015a) concluded that their study pro-
vides clear evidence to support claims that obedience is not an ineluctable
proclivity but a choice, seen most vividly when almost all participants
refuse to “obey” the fourth prod. This is not simply less obedience but
active resistance, in their view: “Far from . . . showing that the landscape of
tyranny is bereft of human agency, we see instead that identity-based choice
is what makes tyranny possible—and also what makes tyranny vulnerable
to overthrow.”
Collectively, their four studies provide powerful support for Haslam
and Reicher’s model. A particular virtue of their theory is that it is a general
conceptual approach applicable to all of Milgram’s specific variations (not
just the baseline experiment), as well as to generalizations that have been
made to the Holocaust and other contexts. Social-identification theory
emphasizes the role of leadership and followers who are strongly (or per-
haps weakly) identified with the leader’s mission. For example, in describ-
ing female Nazi guards at the Majdanek concentration camp, Evans (2015)
writes of Elissa Mailander’s work: “She uses memoirs, filmed interviews, and
postwar trial . . . records to show how the experience of almost unlimited
power over the inmates brought them to identify closely with the regime,
whose propaganda and pressure to deliver ‘results’ strengthen the women’s
allegiance to Nazi ideology” (p. 54). One sees elements of both the Milgram
and Zimbardo studies here. This is not to minimize the role of situational
pressures on relatively ordinary, lower-level perpetrators as one causal fac-
tor in the Holocaust (e.g., Browning, 1992), but clearly the emphasis here
on leadership and zealous ideological followers is very different from the
typical social-psychological portrait of the morally conflicted participants
in the Milgram experiments.

Who Is Responsible?
The question “Who is responsible?” is central to the Milgram experiments
and to the generalizations made from those studies. Milgram himself changed
his views on responsibility. In an early response to Baumrind’s (1964) ethical
critique, he noted, “The critic feels that the experimenter made the subject
shock the victim. This conception is alien to my view. . . . I started with the
belief that every person who came to the laboratory was free to accept or
to reject the dictates of authority” (1964, p. 97). Later, however (1974), the
central construct in his model of obedience, the agentic state, portrayed the
obedient person as abandoning a sense of personal responsibility: “The most
far-reaching consequence of the agentic shift is that a man feels responsible
to the authority directing him but feels no responsibility for the content of
the actions that the authority prescribes” (pp. 145–146; emphasis in origi-
nal). One complexity is whether responsibility is defined from the perpe-
trator’s view or that of an observer. Are subordinates being sincere when
claiming that their harmful actions were performed under orders? Military
law requires disobedience to illegal commands, but deciding between the “duty to obey and the duty to disobey” can at times be very difficult (e.g., Kelman & Hamilton, 1989). Milgram seemed clear on the sincerity issue: “The disap-
pearance of a sense of responsibility is the most far-reaching consequence
of submission to authority” (1974, p. 8). Still, “following orders” can at
times be used as an excuse or lie (noted earlier in connection with Eich-
mann’s plea) to evade punishment. (For an imaginative neuroscience-based
study supporting the potential sincerity of the “I was only following orders”
claim, see Caspar, Christensen, Cleeremans, and Haggard, 2016).
Card (2005), in a philosophical analysis of individual responsibil-
ity in organizational contexts, recognizes the power of situational forces
but suspects that Milgram’s obedient participants were still, to a degree,
responsible: “Requiring one to abstain from cruel actions against defense-
less persons is not unreasonably demanding” (p. 399). Milgram noted that
in bureaucratic situations (e.g., the Holocaust), “it is psychologically easy
to ignore responsibility when one is only an intermediate link in a chain
of evil action but is far from the final consequences of the action” (1974,
p. 11). Card, agreeing with Milgram’s legalistic perspective that in complex
organizations there can be a fragmentation of both action and responsibil-
ity, concluded: “both individuals and organizations bear moral responsibil-
ity for actions performed within organization contexts, and the degree of
responsibility is to be determined on a case by case basis” (p. 402).

Haslam and Reicher’s Position on Moral Responsibility


Of particular interest are Haslam and Reicher’s strong conclusions regard-
ing the moral responsibility of those individuals harming others under the
presence of authority: “Followership of this form is not thoughtless. It is the conscious endeavor of committed subjects” (Reicher et al., 2012, p. 323;
emphasis added); “It is time to reject the comforts of the obedience alibi. It
is time, instead, to engage with the uncomfortable truth that, when people
inflict harm to others, they often do so wittingly and willingly” (Haslam,
Reicher, and Millard, 2015a; emphasis added); “To understand tyranny,
we need to transcend the prevailing orthodoxy that this tyranny derives
from something for which humans have a natural inclination, a ‘Lucifer
effect’ to which they succumb thoughtlessly and helplessly (and for which,
therefore, they cannot be held accountable)” (Haslam & Reicher, 2012a;
emphasis added).
In an earlier experiment, Reicher and Haslam (2006) reported on their
BBC prison study, a partial replication (televised) of Zimbardo’s prison
study that largely failed to reproduce his findings. However, this was not
totally unexpected, as their study was designed precisely to test social iden-
tification theory, with different expected outcomes. Their discussion sec-
tion, highly critical of Zimbardo’s study, understandably provoked Zim-
bardo (2006), who in turn rebutted both their study and their criticisms of
him with his own view: “These actions also make evident to me that their
underlying personal objective was to highjack this media opportunity to
advance their evangelical worldview that rational people with free will can
rise above situational forces to live in communal harmony as long as they
can sustain a social identity in accord with prosocial values and norms of
their community” (p. 52). Haslam and Reicher (2006) then responded, in
part, as follows: “The fact that Zimbardo’s analysis of those events [Abu
Ghraib] was invoked in order to deny responsibility for acts of appall-
ing brutality should also serve as a warning to social psychology. For . . .
it points to the way that our theories are used to justify and normalize
oppression, rather than to problematize it and identify ways in which it can
be overcome. In short, representing abuse as ‘natural’ makes us apologists
for the inexcusable” (p. 62, emphasis in original). One can easily imagine a similarly vitriolic exchange between Haslam and Reicher and Milgram, were he still alive. The larger and more significant issue here is
not their personal attack on Zimbardo but rather their highly critical view
of the condoning aspects of situationism itself. In this context, they are
speaking for many of Milgram’s critics, who for decades have argued along
very similar lines. It is this extreme disparity of conceptual views on moral
responsibility, as illustrated by Haslam and Reicher on the one hand and
by Ross and Shestowsky (and, by extension, Zimbardo and Milgram) on
the other, that constitutes the most profound and enduring controversy
surrounding the Milgram experiments.
The issue of recommended punishment becomes an immediately rel-
evant consideration. Relatively severe punishment for obedient perpetrators
would seem to follow logically from Haslam and Reicher’s model: “It is
striking how destructive acts were presented as constructive, particularly in Milgram’s case, where scientific progress was the warrant for abuse”
(Haslam & Reicher, 2012a). That is, in their view, obedient participants
identified with the destructive scientific ideology of the experimenter (i.e.,
to inflict severe shocks for errors) and hence would warrant a harsh penalty were the behavior to occur in a nonlaboratory setting. Conversely, a more lenient sentence,
taking into account the mitigating role of situational factors, would clearly
follow from Ross and Shestowsky’s analysis. There is, fortunately, a new
line of research that tests these ideas.

Beliefs in Free Will, Moral Responsibility, and Punishment
Two studies have examined the effects of beliefs in free will on judgments of
moral responsibility and punishment. Vohs and Schooler (2008) randomly
assigned participants to read an anti-free-will essay or a control essay. Par-
ticipants in the anti-free-will condition cheated more on a subsequent task.
A second experiment included a pro-determinism essay condition, as well as
a neutral control. Those assigned to the determinism condition cheated sig-
nificantly more than in the other conditions. Mediation analysis indicated that lowered belief in free will mediated the increase in cheating.
The authors concluded: “The fact that brief exposure to a message asserting
that there is no such thing as free will can increase both passive and active
cheating raises the concern that advocating a deterministic worldview could
undermine moral behavior” (p. 53). Why does dismissing free will lead to
wrongdoing? Vohs and Schooler suggested, in a parallel to Milgram’s agen-
tic state: “Doubting one’s free will may undermine the sense of self as agent”
(2008, p. 54). From this perspective, participants in the obedience studies
continue to shock the learner because the experiment (e.g., the prods) effec-
tively places them in a “low free will belief” situation, allowing them to
pursue a harmful course of action with a greater sense of self-justification.
This motivation would seem very different from that advocated by Haslam
and Reicher, that is, the obedient participant’s active, freely chosen social
identification with the experimenter’s scientific mission.
In a related study focusing on observers, Shariff et al. (2014) asked
whether a reduced belief in free will would lead people to see wrongful
behavior as less morally reprehensible, resulting in less retributive punish-
ment. Two studies supported this hypothesis. A third study both hypoth-
esized and observed that exposure to a neuroscience article (emphasizing
a deterministic view of human behavior) would reduce the reader’s belief
in free will and thereby reduce retributive punishment. Perceived blame
mediated the effects of conditions on punishment. Thus the neuroscience
article led to exoneration, that is, less blame and, in turn, less punish-
ment. A fourth study showed lower beliefs in free will and lower recommended punishment after, as compared with before, taking a neuroscience course. No differences occurred in a geography control course. Exposure to a strongly
situationist perspective in social psychology can be likened, functionally, to
the neuroscience article or course in the Shariff et al. (2014) studies. Both
send relatively low-free-will messages and are thus seen as exonerating.
Thus we now have important empirical evidence for the exonerating implications that are often attributed to situational explanations. From this perspective,
instructors and researchers discussing or favoring a more purely situation-
ist account of harm doing should address the condoning implications in a
forthright manner, including their controversiality, and not simply (intentionally or not) leave them out of the discussion. Their readers or students
will almost certainly be thinking about these matters on their own, based
on the research reviewed here. The analysis of Ross and Shestowsky (2012)
is a particularly useful resource for this type of discussion, whether or not one endorses it. Thus informed, readers can then draw their
own conclusions and have a more nuanced, clearer rationale for holding
them. (One envisions some spirited discussions here.)

Conclusions

The continuing controversial status of the Milgram experiments is understandable, because deeply held values and attitudes about causality and free
will, as well as moral responsibility and punishment, are driving the polar-
ized opinions. Controversy is sometimes a strength, not a weakness. For
example, the legal justice system is not viewed as flawed simply because of
the continuous adversarial positions of the defense and prosecution. These
biases are, in fact, desired and built into the system itself. As long as situa-
tionists claim conceptual ownership of the obedience studies, there is going
to be considerable opposition. A more pronounced interactionist position
would seem one viable resolution to some of the conflict discussed in this
chapter (e.g., Newman & Bakina, 2009). The social-identification model of
Haslam and Reicher is clearly the most promising conceptual development
at the present time. Their conceptually focused research is particularly rel-
evant because one major reason for the endless controversies concerning
the Milgram experiments has been the absence of a convincing, empirically
based theory. For decades, this significant gap has been filled with specu-
lations and personal views regarding the basis of Milgram’s findings and
their implications. Consider Mandel’s (1998) disparaging phrase, “obedi-
ence alibi,” in reaction to Milgram’s stating, in the Nuremberg context,
that “it would be wrong to think of it [the claim of following orders] as
a thin alibi concocted for the occasion. Rather, it is a fundamental mode
of thinking for a great many people once they are locked into a subordi-
nate position in a structure of authority” (1974, p. 8). As long as these
constructions of what happened are simply left to resonate in thin air, the
controversy will continue. A subordinate’s sincerity could be legitimate or
fabricated, but the only way to resolve this disagreement would be theory-
driven research on underlying processes and a convincing methodology to
ask, and not prematurely answer, the “is it an alibi?” question.
Jetten and Mols (2014) have noted that “a complete account of what
makes people obedient also has to focus on what makes people disobedient”
(p. 591). Researchers should also account for resistance (e.g., Einwohner,
2014; Gibson, 2014; Hollander, 2015). (See also Haslam & Reicher, 2012b,
for an instructive analysis of prison resistance.) As noted, understanding
precisely why different specific individuals in the same experimental condi-
tion decide to obey or withdraw remains a major unanswered question in
the obedience story. This chapter has attempted to explore the important
dimensions of the Milgram controversies. It has not attempted to resolve
these controversies nor to promote a specific conceptual ideology, although
elements of bias may be apparent. Inevitably, and appropriately, readers
will have their own views on these intriguing matters.
A closing thought: What factors draw professional social psychologists to adopt the different, often antagonistic theoretical orientations we have considered here? Information on this dif-
ficult issue could provide greater insight into the continuing controversies
regarding the Milgram experiments. For example, consider their basic posi-
tion on punishment. Evidence indicates that for laypersons, at least, having
strong beliefs in free will or autonomy is caused by an underlying desire to
blame and punish perpetrators (Clark et al., 2014). That is, if one gives pri-
ority to the idea that perpetrators should be punished, one then is motivated
to adopt a specific theory on matters pertaining to free will that will justify
that punishment. Carey and Paulhus (2013) have suggested that holding to a
politically conservative worldview may drive both strong beliefs in free will
and harshly punitive reactions to perpetrators of harm. There are doubtless
a number of reasons why social psychologists select themselves into differ-
ent theoretical camps. Clearly, they are not ordinarily viewed as primarily
driven by thoughts of punishment (or exoneration). These motives could
nevertheless be powerful, even if subtle and unwitting, influences—that is,
causes rather than solely effects of their specific theorizing—thus account-
ing for some of the controversies and heated rhetoric noted in this chapter. It
is hardly the whole story, but it remains an intriguing possibility.

References
Arendt, H. (1963). Eichmann in Jerusalem: A report on the banality of evil. New
York: Penguin.
Banaji, M. R., & Greenwald, A. G. (2013). Blind spots: Hidden biases of good
people. New York: Delacorte Press.
Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3, 193–209.
Baumeister, R. F. (1997). Evil: Inside human violence and cruelty. New York:
Freeman.
Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s
“Behavioral study of obedience.” American Psychologist, 19, 421–423.
Baumrind, D. (2014). Is Milgram’s deceptive research ethically acceptable? Theo-
retical and Applied Ethics, 2, 11–18.
Beauvois, J. L., Courbet, D., & Oberle, D. (2012). The prescriptive power of the
television host: A transposition of Milgram’s obedience paradigm to the
context of a TV game show. European Review of Applied Psychology, 62,
111–119.
Begue, L., Beauvois, J. L., Courbet, D., Oberle, D., Lepage, J., & Duke, A. A.
(2015). Personality predicts obedience in a Milgram paradigm. Journal of
Personality, 83(3), 299–306.
Benjamin, L. T., & Simpson, J. A. (2009). The power of the situation: The impact
of Milgram’s obedience studies on personality and social psychology. Ameri-
can Psychologist, 64, 12–19.
Blass, T. (Ed.). (2000). Obedience to authority: Current perspectives on the Mil-
gram paradigm. Mahwah, NJ: Erlbaum.
Blass, T. (2004). The man who shocked the world: The life and legacy of Stanley
Milgram. New York: Basic Books.
Blass, T. (2009). From New Haven to Santa Clara: A historical perspective on the
Milgram obedience experiments. American Psychologist, 64, 37–45.
Bocchiaro, P., & Zimbardo, P. G. (2010). Defying unjust authority: An exploratory
study. Current Psychology, 29, 155–170.
Bocchiaro, P., Zimbardo, P. G., & Van Lange, P. (2012). To defy or not to defy:
Unpacking disobedience and whistle-blowing as extraordinary forms of dis-
obedience to authority. Social Influence, 7, 35–50.
Brannigan, A., Nicholson, I., & Cherry, F. (Eds.). (2015). Unplugging the Milgram
machine. Theory and Psychology, 25, 551–696.
Browning, C. (1992). Ordinary men: Reserve police battalion 101 and the final
solution in Poland. London: Harper Collins.
Buckels, E. E., Jones, D. N., & Paulhus, D. L. (2013). Behavioral confirmation of
everyday sadism. Psychological Science, 20(10), 1–9.
Burger, J. M. (2009). Replicating Milgram: Would people still obey today? Ameri-
can Psychologist, 64, 1–11.
Burger, J. M. (2014). Situational features in Milgram’s experiment that kept his
participants shocking. Journal of Social Issues, 70(3), 489–500.
Burger, J. M., Girgis, Z. M., & Manning, C. C. (2011). In their own words:
Explaining obedience to authority through an examination of participants’
comments. Social Psychological and Personality Science, 2, 460–466.
Card, R. (2005). Individual responsibility within organizational contexts. Journal
of Business Ethics, 62, 397–405.
Carey, J. M., & Paulhus, D. L. (2013). Worldview implications of believing in free
will and/or determinism: Politics, morality, and punitiveness. Journal of Per-
sonality, 81(2), 130–141.
Carnahan, T., & McFarland, S. (2007). Revisiting the Stanford prison experiment:
Could participant self-selection have led to the cruelty? Personality and Social
Psychology Bulletin, 33(5), 603–614.
Caspar, E. A., Christensen, J. F., Cleeremans, A., & Haggard, P. (2016). Coer-
cion changes the sense of agency in the human brain. Current Biology, 26,
1–8.
Cesarani, D. (2004). Becoming Eichmann: Rethinking the life, crimes, and trial of
a “desk murderer.” Cambridge, MA: Da Capo Press.
Cheetham, M., Pedroni, A. F., Antley, A., Slater, M., & Jancke, L. (2009). Virtual
Milgram: Empathic concern or personal distress? Evidence from functional
MRI and dispositional measures. Frontiers in Human Neuroscience, 3, 29.
Ciurria, M. (2013). Situationism, moral responsibility, and blame. Philosophia,
41, 179–193.
Clark, C. J., Luguri, J. B., Ditto, P. H., Knobe, J., Shariff, A., & Baumeister, R.
F. (2014). Free to punish: A motivated account of free will belief. Journal of
Personality and Social Psychology, 106, 501–513.
Cooper, J. (2007). Cognitive dissonance: 50 years of a classic theory. New York:
Sage.
Dambrun, M., & Vatine, E. (2010). Reopening the study of extreme social behav-
iors: Obedience to authority within an immersive video environment. Euro-
pean Journal of Social Psychology, 40, 760–773.
Dargis, M. (2015, October 15). In “Experimenter,” are they following orders
or instincts? [Review]. New York Times. Retrieved from www.nytimes.
com/2015/10/16/movies/review-in-experimenter-are-they-following-orders-
or-instincts.html?ref=arts&_r=0.
de Swaan, A. (2015). The killing compartments: The mentality of mass murder.
New Haven, CT: Yale University Press.
Donnellan, M. B., Fraley, R. C., & Krueger, R. F. (2007). Not so situational [Letter
to the editor]. APS Observer, 20(6), 5.
Dovidio, J. F., Gaertner, S. L., Nier, J. A., Kawakami, K., & Hodson, G. (2004).
Contemporary racial bias: When good people do bad things. In A. G. Miller
(Ed.), The social psychology of good and evil (pp. 141–167). New York: Guil-
ford Press.
Dyson, F. (2015, May 7). Einstein as a Jew and a philosopher. The New York
Review of Books, pp. 14–17.
Einwohner, R. L. (2014). Authorities and uncertainties: Applying lessons from the
study of Jewish resistance during the Holocaust to the Milgram legacy. Jour-
nal of Social Issues, 70(3), 531–543.
Elms, A. C. (2009). Obedience lite. American Psychologist, 64, 32–36.
Elms, A. C., & Milgram, S. (1966). Personality characteristics associated with
obedience and defiance toward authoritative commands. Journal of Experi-
mental Research in Personality, 1, 282–289.
Epley, N. (2015). Mindwise: Why we misunderstand what others think, believe,
feel, and want. New York: Vintage.
Evans, R. J. (2015, July 9). The anatomy of hell. The New York Review of Books,
pp. 52–54.
Fermaglich, K. (2006). American dreams and Nazi nightmares: Early Holocaust
consciousness and liberal America, 1957–1965. Waltham, MA: Brandeis
University Press.
Fiske, S. T. (2013). A millennial challenge: Extremism in uncertain times. Journal
of Social Issues, 69(3), 605–613.
Fiske, S. T., Harris, L. T., & Cuddy, A. J. C. (2004). Why ordinary people torture
enemy prisoners. Science, 306, 1482–1483.
Frimer, J. A., Gaucher, D., & Schaefer, N. K. (2014). Political conservatives’ affin-
ity for obedience to authority is loyal, not blind. Personality and Social Psy-
chology Bulletin, 40(9), 1205–1214.
Funder, D. C. (2009). Persons, behaviors, and situations: An agenda for personal-
ity psychology in the postwar era. Journal of Research in Personality, 43,
120–126.
Gazda, P. (2015, April 19). I was an animal experimenter. New York Times, p. 8.
Gibson, S. (2014). Discourse, defiance, and rationality: “Knowledge work” in the
“obedience” paradigm. Journal of Social Issues, 70(3), 424–438.
Gilovich, T., Keltner, D., & Nisbett, R. E. (2011). Social psychology (2nd ed.). New
York: Norton.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011).
Mapping the moral domain. Journal of Personality and Social Psychology,
101, 366–385.
Griggs, R. A., & Whitehead, G. I., III. (2014). Coverage of the Stanford prison
experiment in introductory social psychology textbooks. Teaching of Psy-
chology, 41(4), 318–324.
Griggs, R. A., & Whitehead, G. I., III. (2015). Coverage of Milgram’s obedience
experiments in social psychology textbooks: Where have all the criticisms
gone? Teaching of Psychology, 42(4), 315–322.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and
religion. New York: Pantheon.
Haney, C., & Zimbardo, P. G. (2009). Persistent dispositionalism in interactionist
clothing: Fundamental attribution error in explaining prison abuse. Personal-
ity and Social Psychology Bulletin, 35, 807–814.
Hanson, J., & Yosifon, D. (2003). The situation: An introduction to the situational
character, critical realism, power economics, and deep capture. University of
Pennsylvania Law Review, 152, 129–346.
Hartson, K. A., & Sherman, D. K. (2012). Gradual escalation: The role of con-
tinuous commitments in perceptions of guilt. Journal of Experimental Social
Psychology, 48, 1279–1290.
Haslam, N., Loughnan S., & Perry G. (2014). Meta-Milgram: An empirical syn-
thesis of the obedience experiments. PLoS ONE, 9(4), e93927.
Haslam, S. A., Miller, A. G., & Reicher, S. D. (Eds.). (2014). Milgram at 50:
Exploring the enduring relevance of psychology’s most famous studies. Jour-
nal of Social Issues, 70(3), 393–602.
Haslam, S. A., & Reicher, S. D. (2006). Debating the psychology of tyranny: Fun-
damental issues of theory, perspectives, and science. British Journal of Social
Psychology, 45, 55–63.
Haslam, S. A., & Reicher, S. D. (2012a). Contesting the “nature” of conformity: What
Milgram and Zimbardo’s studies really show. PLoS Biology, 10(11), e1001426.
Haslam, S. A., & Reicher, S. D. (2012b). When prisoners take over the prison: A
social psychology of resistance. Personality and Social Psychology Review,
16(2), 154–179.
Haslam, S. A., Reicher, S. D., & Birney, M. E. (2014). Nothing by mere authority:
Evidence that in an experimental analogue of the Milgram paradigm, partici-
pants are motivated not by orders but by appeals to science. Journal of Social
Issues, 70(3), 473–488.
Haslam, S. A., Reicher, S. D., & Birney, M. E. (2016). Questioning authority: New
perspectives on Milgram’s “obedience” research and its implications for inter-
group relations. Current Opinion in Psychology, 11, 6–9.
Haslam, S. A., Reicher, S. D., & Millard, K. (2015a). Shock treatment: Using
immersive digital realism to restage and re-examine Milgram’s “obedience to
authority” research. PLoS ONE, 10(3), e109015.
Haslam, S. A., Reicher, S. D., Millard, K., & McDonald, R. (2015b). “Happy to
have been of service”: The Yale archive as a window into the engaged follow-
ership of participants in Milgram’s “obedience” experiments. British Journal
of Social Psychology, 54, 55–83.
Herbert, W. (2007). The banality of evil. APS Observer, 20(4), 11–12.
Herrera, C. (Ed.). (2013). Stanley Milgram and the ethics of social science research.
Theoretical and Applied Ethics, 2(2), 1–145.
Hollander, M. M. (2015). The repertoire of resistance: Non-compliance with direc-
tives in Milgram’s “obedience” experiments. British Journal of Social Psy-
chology, 54, 425–444.
Jetten, J., & Mols, F. (2014). 50:50 hindsight: Appreciating anew the contribu-
tions of Milgram’s obedience experiments. Journal of Social Issues, 70(3),
587–602.
Kassin, S., Fein, S., & Markus, H. R. (2014). Social psychology (9th ed.). Belmont,
CA: Wadsworth.
Kelman, H. C., & Hamilton, V. L. (1989). Crimes of obedience: Toward a social
psychology of authority and responsibility. New Haven, CT: Yale University
Press.
Kershner, I. (2016, January 28). Letter reveals plea for mercy by Eichmann. The
New York Times, pp. A1, A8.
Lane, A. (2014, February 10). The cost of survival: Review of “The last of the
unjust.” The New Yorker, pp. 84–85.
Lang, J. (2014). Against obedience: Hannah Arendt’s overlooked challenge to
social-psychological explanations of mass atrocity. Theory & Psychology,
24(5), 649–667.
Levy, K. (2015). Does situationism excuse? The implications of situationism for
moral responsibility and criminal responsibility. Arkansas Law Review, 68,
775–831.
Lilienfeld, S. O. (2012a). Public skepticism of psychology: Why many people per-
ceive the study of human behavior as unscientific. American Psychologist, 67,
111–129.
Lilienfeld, S. O. (2012b). Further sources of our field’s embattled public reputation.
American Psychologist, 67(9), 808–809.
Lipstadt, D. (2011). The Eichmann trial. New York: Shocken.
Lunt, P. (2009). Stanley Milgram: Understanding obedience and its implications.
New York: Palgrave Macmillan.
Mandel, D. (1998). The obedience alibi: Milgram’s account of the Holocaust recon-
sidered. Analyse und Kritik, 20, 74–94.
Martens, A., Kosloff, S., & Jackson, L. E. (2010). Evidence that initial obedient
killing fuels subsequent volitional killing beyond effects of practice. Social
Psychology and Personality Science, 1, 268–273.
Martin, R., Rothrock, N., Leventhal, H., & Leventhal, E. (2003). Common-sense
models of illness: Implications for symptom perception and health-related
behavior. In J. Sulls & K. A. Wallston (Eds.), Social psychological founda-
tions of health and illness (pp. 200–225). New York: Wiley.
McMillan, D. (2014). How could this happen: Explaining the Holocaust. New
York: Basic Books.
Meeus, W. H. J., & Raaijmakers, Q. A. W. (1986). Administrative obedience: Car-
rying out orders to use psychological administrative violence. European Jour-
nal of Social Psychology, 16(4), 311–324.
Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and
Social Psychology, 67, 371–378.
Milgram, S. (1964). Issues in the study of obedience: A reply to Baumrind. Ameri-
can Psychologist, 19, 848–852.
Milgram, S. (1974). Obedience to authority: An experimental view. New York:
Harper & Row.
Milgram, S. (1979). Obedience to authority: An experimental view [Preface] (2nd
French ed., p. 9). Paris: Calmann-Levy.
Millard, K. (2014). Revisioning obedience: Exploring the role of Milgram’s skills
as a filmmaker in bringing his shocking narrative to life. Journal of Social
Issues, 70(3), 439–453.
Miller, A. G. (2004). What can the Milgram experiments tell us about the Holo-
caust?: Generalizing from the social psychology laboratory. In A. G. Miller
(Ed.), The social psychology of good and evil (pp. 193–239). New York: Guil-
ford Press.
Miller, A. G. (2009). Reflections on “Replicating Milgram” (Burger, 2009). Amer-
ican Psychologist, 64(1), 20–27.
Miller, A. G. (2014). The explanatory value of Milgram’s obedience experiments:
A contemporary appraisal. Journal of Social Issues, 70(3), 558–573.
Miller, A. G., Gordon, A. K., & Buddie, A. M. (1999). Accounting for evil and
cruelty: Is to explain to condone? Personality and Social Psychology Review,
3, 254–268.
Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38,
379–387.
Moore, C., & Gino, F. (2013). Ethically adrift: How others pull our moral compass
from true North, and how we can fix it. Research in Organizational Behav-
ior, 33, 53–77.
Newman, L. S. (2001). The banality of secondary sources: Why social psycholo-
gists have misinterpreted Arendt’s thesis. Unpublished manuscript, Depart-
ment of Psychology, Syracuse University, NY.
Newman, L. S., & Bakina, D. A. (2009). Do people resist social-psychological per-
spectives on wrongdoing?: Reactions to dispositional, situational, and inter-
actionist explanations. Social Influence, 4(4), 256–273.
Newman, L. S., Bakina, D. A., & Tang, Y. (2012). The role of preferred beliefs in
skepticism about psychology. American Psychologist, 67(9), 805–806.
Nicholson, I. (2011). “Torture at Yale”: Experimental subjects, laboratory torment
and the “rehabilitation” of Milgram’s “Obedience to authority.” Theory &
Psychology, 21(6), 737–761.
Overy, R. (2014). “Ordinary men,” extraordinary circumstances: Historians, social
psychology, and the Holocaust. Journal of Social Issues, 70(3), 515–530.
Packer, D. J. (2008). Identifying systematic disobedience in Milgram’s obedience
experiments: A meta-analytic review. Perspectives on Psychological Science,
3, 301–304.
Perlman, A. M. (2007). Unethical obedience by subordinate attorneys: Lessons
from social psychology. Hofstra Law Review, 36(2), 451–477.
Perry, G. (2013). Behind the shock machine: The untold story of the notorious
Milgram psychology experiments. New York: New Press.
Reicher, S. D., & Haslam, S. A. (2006). Rethinking the psychology of tyranny: The
BBC prison study. British Journal of Social Psychology, 45, 1–40.
Reicher, S. D., & Haslam, S. A. (2011). After shock?: Towards a social identity
explanation of the Milgram “obedience” studies. British Journal of Social
Psychology, 50, 163–169.
Reicher, S. D., & Haslam, S. A. (2012). Obedience: Revisiting Milgram’s shock
experiments. In J. R. Smith & S. A. Haslam (Eds.), Social psychology: Revis-
iting the classic studies (pp. 106–125). Thousand Oaks, CA: Sage.
Reicher, S. D., Haslam, S. A., & Miller, A. G. (2014). What makes a person a per-
petrator?: The intellectual, moral, and methodological arguments for revisit-
ing Milgram’s research on the influence of authority. Journal of Social Issues,
70(3), 393–408.
Reicher, S. D., Haslam, S. A., & Smith, J. R. (2012). Working toward the experi-
menter: Reconceptualizing obedience within the Milgram paradigm as iden-
tification-based followership. Perspectives on Psychological Science, 7(4),
315–324.
Rochat, F., & Blass, T. (2014). Milgram’s unpublished obedience variation and its
historical relevance. Journal of Social Issues, 70(3), 456–472.
Ross, L., Lepper, M., & Ward, A. (2010). History of social psychology: Insights,
challenges, and contributions to theory and application. In S. T. Fiske, D. T.
Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., Vol. 2,
pp. 3–50). New York: Wiley.
Ross, L., & Shestowsky, D. (2012). Two social psychologists’ reflections on situ-
ationism and the criminal justice system. In J. Hanson (Ed.), Ideology, psy-
chology, and the law (pp. 612–649). New York: Oxford University Press.
Russell, N. J. (2011). Milgram’s obedience to authority experiments: Origins and
early evolution. British Journal of Social Psychology, 50, 140–162.
Russell, N. J. (2014). The emergence of Milgram’s bureaucratic machine. Journal
of Social Issues, 70(3), 409–423.
Shalvi, S., Gino, F., Barkan, R., & Ayal, S. (2015). Self-serving justifications: Doing
wrong and feeling moral. Current Directions in Psychological Science, 24(2),
125–130.
Shariff, A. F., Greene, J. D., Karremans, J. C., Luguri, J. B., Clark, C. J., Schooler,
J. W., et al. (2014). Free will and punishment: A mechanistic view of human
nature reduces retribution. Psychological Science, 25(8), 1563–1570.
Sieber, J. E., & Tolich, M. B. (2013). Planning ethically responsible research. Los
Angeles: Sage.
Smith, E. R., Mackie, D. M., & Claypool, H. M. (2015). Social psychology (4th
ed.). New York: Psychology Press.
Smith, J. R., & Haslam, S. A. (2012). Social psychology: Revisiting the classic
studies. Los Angeles: Sage.
Stangneth, B. (2014). Eichmann before Jerusalem: The unexamined life of a mass
murderer. New York: Knopf.
Stargardt, N. (2015). The German war: A nation under arms, 1939–1945: Citi-
zens and soldiers. New York: Basic Books.
Staub, E. (2014). Obeying, joining, following, resisting, and other processes in the
Milgram studies, and in the Holocaust and other genocides: Situations, per-
sonality, and bystanders. Journal of Social Issues, 70(3), 501–514.
Tang, Y., Newman, L. S., & Huang, L. (2014). How people react to social-psycho-
logical accounts of wrongdoing: The moderating effects of culture. Journal of
Cross-Cultural Psychology, 45(5), 752–763.
Tavris, C. (2014). Teaching contentious classics. APS Observer, 27(8), 12–16.
Tetlock, P. E., Self, W. T., & Singh, R. (2010). The punitive paradox: When is
external pressure exculpatory and when a signal just to spread blame? Journal
of Experimental Social Psychology, 46, 388–395.
Valentino, B. A. (2004). Final solutions: Mass killing and genocide in the 20th
century. Ithaca, NY: Cornell University Press.
Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will: Encour-
aging a belief in determinism increases cheating. Psychological Science, 19(1),
49–54.
Waller, J. (2007). Becoming evil: How ordinary people commit genocide and mass
killing (2nd ed.). New York: Oxford University Press.
Werhane, P. H., Hartman, L. P., Archer, C., Englehardt, E. E., & Pritchard, M. S.
(2013). Obstacles to ethical decision-making: Mental models, Milgram, and
the problem of obedience. New York: Cambridge University Press.
Wolfe, A. (2011). Political evil: What it is and how to combat it. New York: Knopf.
Zeigler-Hill, V., Southard, A. C., Archer, L. M., & Donohoe, P. L. (2013). Neu-
roticism and negative affect influence the reluctance to engage in destructive
obedience in the Milgram paradigm. Journal of Social Psychology, 153(2),
161–174.
Zimbardo, P. G. (2006). On rethinking the psychology of tyranny: The BBC prison
study. British Journal of Social Psychology, 45, 47–53.
Zimbardo, P. G. (2007a). The Lucifer effect: Understanding how good people turn
evil. New York: Random House.
Zimbardo, P. G. (2007b). Person × situation × system dynamics [Letter to the edi-
tor]. APS Observer, 20(8), 6.
