National Intelligence
and Science
Beyond the Great Divide in Analysis and Policy
W I L H E L M AGR E L L
and
G R E G O R Y F. T R E V E R T O N
Oxford University Press is a department of the University of
Oxford. It furthers the University’s objective of excellence in research,
scholarship, and education by publishing worldwide.
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Printed in the United States of America
on acid-free paper
CONTENTS
Figures vii
Tables ix
Preface xi
Bibliography 199
Index 213
Introduction
The Odd Twins of Uncertainty
On December 7, 2012, seven of the Nobel laureates of the year gathered in the
Bernadotte library at the Royal Castle in Stockholm for the recording of the
annual TV show Nobel Minds.1 Zeinab Badawi, a British journalist, started
the discussion by asking the distinguished scientists for their views about the
role of science in contemporary societies. Did science matter only if it deliv-
ered practical applications? Most of the scientists, while acknowledging the
tremendous role of science in social and economic progress, answered no.
Science, as they saw it, was mainly curiosity driven, and if the outcome hap-
pened to be useful, so much the better. Commercial interests and prospects for
commercialization, while increasingly powerful drivers in science policy and
research financing, were nevertheless not on the scientists’ minds. Rather, the principal force for them was the prospect of investigating interesting problems. There was an inevitable gap, though, between the long-term perspective of science and the short-term perspective of politics. That long-term perspective is imperative for science, but it also matters for politicians, who have to think about the next election; they would be better off with more scientific competence and a greater ability to comprehend scientific thinking.
But what, then, about public expectations that science deliver definite answers? Medicine laureate Sir John B. Gurdon compared science to weather forecasting, the latter often criticized for being in error. Even so, we are better off with sometimes inaccurate forecasts than with no forecasts at all; “people should not blame them [weather forecasters] for not getting it exactly right but be grateful for what they do.”2 Zeinab Badawi then raised the
1. The program is available at http://www.nobelprize.org/. The seven participating laureates were Sir John Gurdon (medicine), Serge Haroche (physics), Brian Kobilka (chemistry), Robert Lefkowitz (chemistry), Alvin Roth (economics), David Wineland (physics), and Shinya Yamanaka (medicine).
2. Nobel Minds, SVT, Dec. 7, 2012.
Intelligence Minds?
There is no known case of a corresponding round table event involving the
world’s leading intelligence analysts, and it is hard to imagine one within the
foreseeable future, given the reluctance of the vast majority of intelligence
organizations around the world to discuss intelligence matters publicly. But for
a moment let’s imagine Ms. Badawi chairing a similar round table—not Nobel
Minds but Intelligence Minds: What themes could be discussed there? Where
would the participants be likely to agree or disagree? Probably the participants
would all agree not only on a continued need but also on an increased need for
intelligence in an age of globalized threats and growing vulnerabilities. They
would also be likely to agree on increasing uncertainty, partly due to the com-
plex and fragmented nature of the threats, but also due to a limited and possi-
bly decreasing ability to produce actionable forecasts. Some of the participants
would perhaps speak in favor of increased openness on intelligence matters,
not in operational details, but to enhance public understanding of and hence
[Figure 1.1 is a circular diagram with “Mission” at the center, linking direction of the collection effort, collection of information, processing of information, and use of intelligence.]
Figure 1.1 Early Depiction of Intelligence Cycle. Source: U.S. Army, 1948.
4. M. Herman, Intelligence Power in Peace and War (Cambridge: Cambridge University Press, 1996), p. 283.
5. Gregory F. Treverton, Reshaping National Intelligence for an Age of Information (New York: Cambridge University Press, 2003); Gregory F. Treverton, Intelligence for an Age of Terror (New York: Cambridge University Press, 2009); Gregory F. Treverton and Wilhelm Agrell, National Intelligence Systems (New York: Cambridge University Press, 2009).
6. The term was originally used by John Prados in The Soviet Estimate: U.S. Intelligence Analysis & Russian Military Strength (New York: Dial Press, 1982). Practically all Western intelligence services struggled with their own variants of this estimate throughout the Cold War.
worlds, divided not only by legal frameworks and professional traditions, but
also by values and mutual mistrust. As shown during the Cold War, bridging
this divide was far more difficult than further widening it. And yet, university
researchers and intelligence analysts tend to work on similar problems, some-
times also with similar materials and methods for data collection.
Since the 1990s, the traditional concept of empirically based, distinctly in-house intelligence production has been increasingly challenged. The technically based collection systems were not designed for, and thus cannot adequately cope with, a wider spectrum of societal risks and emerging threats that do not necessarily resemble the classical military challenges of the bipolar security system. Furthermore, the focus on data left intelligence with limited capability to cope with complex risk and threat-level assessments. The inability of
post–Cold War intelligence to deliver accurate, timely, and verifiable assess-
ments has been displayed time and again. Why intelligence fails to estimate
risks and frame uncertainties, to produce the kind of actionable knowledge
demanded by policymakers, state institutions, the international community
and not least the public, has become a key issue in the literature on the chal-
lenges facing intelligence.7 While mostly elaborated from an Anglo-Saxon perspective, the phenomenon as such appears to be universal.8 Across the Middle East, the Arab Spring of 2011 obviously took most regimes and their respective intelligence and internal security apparatuses by surprise, as was the case with
international partners and observers.9
The point of departure for this book is the observation of two simultane-
ous, and possibly converging, trends with a common denominator in the rise
of complex societal risks with a high degree of uncertainty. While the Soviet
estimate appeared to become less uncertain with increased resolution, the opposite seems to be the case with complex risks: the more data that becomes available, the more the web of interlinked uncertainties over possible connections and interpretations tends to grow.
7. For some of the key scholarly works on this theme, see Richard K. Betts, Enemies of Intelligence: Knowledge and Power in American National Security (New York: Columbia University Press, 2007); Phillip H. J. Davies, “Intelligence Culture and Intelligence Failure in Britain and the United States,” Cambridge Review of International Affairs 17, no. 3 (2004): 495–520; Robert Jervis, Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War (Ithaca, NY: Cornell University Press, 2010).
8. Several international examples of factors limiting the performance of intelligence organizations can be found in Phillip H. J. Davies and Kristian C. Gustafson, Intelligence Elsewhere: Spies and Espionage Outside the Anglosphere (Washington, DC: Georgetown University Press, 2013).
9. For the Israeli inability to predict the upheaval, see Eyal Pascovich, “Intelligence Assessment Regarding Social Developments: The Israeli Experience,” International Journal of Intelligence and CounterIntelligence 26, no. 1 (2013): 84–114.
The first trend is an increasing demand for analytic skill, lessons learned,
and the long-awaited transformation of intelligence into a more “scientific”
form.10 The need was already recognized early in the Cold War, but at that
time the thought was that intelligence analysis should develop into a tradi-
tional positivist social science discipline.11 However, half a century later, the
epistemological basis for intelligence assessments tends to consist of a rather
unsophisticated mixture of common sense, brainstorming, and established
practice within a closed profession—and as such not comprehensible to
outsiders.12
Faced with an increasing complexity in the targets and a surge in demand
for detailed, timely, and accurate assessments of a wide range of external and/
or internal threats to societies, intelligence analysis is under enormous pressure to transform itself out of its proto-scientific state in order to deliver. The potentially devastating social consequences of performance failures underscore the demand for methods to handle uncertainty and to validate assessments. To achieve this, intelligence structures are forced to move away from their inherited intelligence culture and to cooperate both among and within themselves, a kind of inter-intelligence and trans-intelligence similar to
the inter-disciplinary and trans-disciplinary approaches of the scientific com-
munity, which is experiencing many of the same challenges.
The second trend comprises the consequences of challenges, performance pressure, and public expectations of policy-relevant science, which together are rapidly transforming the focus of both scientific research and academic institutions: from disciplinary research for the sake of knowledge production to an emphasis on multi-disciplinary approaches. Scientific knowledge production in the twentieth century was characterized by disciplinary specialization and fragmentation, in much the same way as intelligence under the
Cold War paradigm. With increasing and more complex external demands,
emerging research problems with high relevance for society had to be
met with a new structure, one that not only affected how research was orga-
nized but also the mode of knowledge production. In fields like urban stud-
ies, health, environment, natural resources, and climate change, the dividing
line between natural and social sciences has to be crossed, and researchers
are forced to draw conclusions and supply scientific advice under increasing
10. See Peter Gill, Stephen Marrin, and Mark Phythian, Intelligence Theory: Key Questions and Debates (London: Routledge, 2009); Stephen Marrin, Improving Intelligence Analysis: Bridging the Gap between Scholarship and Practice (London: Routledge, 2011).
11. Sherman Kent, “The Need for an Intelligence Literature,” Studies in Intelligence 1, no. 1 (1955).
12. Rob Johnston, “Analytic Culture in the US Intelligence Community: An Ethnographic Study” (Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency, 2005).
13. For a discussion of “preventive science,” see Mark Phythian, “Policing Uncertainty: Intelligence, Security and Risk,” Intelligence and National Security 27, no. 2 (2012): 187–205.
14. Among the studied cases are the radioactive fallout from the Chernobyl disaster in 1986 and the BSE disease in Britain in the 1980s and ’90s. See Angela Liberatore, The Management of Uncertainty: Learning from Chernobyl (Amsterdam: Gordon & Breach, 1999), and Tomas Hellström and Merle Jacob, Policy Uncertainty and Risk: Conceptual Developments and Approaches (Dordrecht: Kluwer, 2001).
15. For an overview, see Gudrun Persson, Fusion Centres—Lessons Learned: A Study of Coordination Functions for Intelligence and Security Services (Stockholm: Center for Asymmetric Threat Studies, Swedish National Defence College, 2013).
1. Nils Petter Gleditsch and Owen Wilkes, Intelligence Installations in Norway: Their Number, Location, Function, and Legality (Oslo: Peace Research Institute of Oslo, 1979). A slightly revised version in Norwegian was published as chapter 1 of Owen Wilkes and Nils Petter Gleditsch, Onkel Sams kaniner. Teknisk etterretning i Norge (Uncle Sam’s Rabbits. Technical Intelligence in Norway) (Oslo: PAX, 1981).
2. Nils Petter Gleditsch et al., Norge i atomstrategien. Atompolitikk, alliansepolitikk, basepolitikk (Norway in the Nuclear Strategy. Nuclear Policy, Alliance Policy, Base Policy) (Oslo: PAX, 1978);
of interest in the 1970s, responding to a wider public concern over the risk of nuclear war.3 The peace research community shared a sense of urgency with a
growing peace movement and saw the role of the researcher not as constrained to the academic system and its disciplinary orientation, but as turned outward, offering a kind of counter-expertise on security issues against what it perceived as a monopoly held by the political, military, and arms-industry establishments. While
not explicitly referring to this context in their report, the authors in follow-up
publications listed the major policy issues for which they saw their findings
as relevant—for instance, whether the intelligence installations added to or
detracted from Norwegian security.4
Given this context, the report aroused considerable media attention and
soon became the subject of a heated public debate, not so much over the findings as such as over the methods employed by the two researchers. By providing
detailed descriptions of what were supposed to be secret defense installations,
they had, according to the critics, revealed sensitive information and in prac-
tice done the job of foreign intelligence organizations by conducting clandes-
tine intelligence collection under the disguise of academic research.
The authors, however, maintained that academic research was precisely
what they had done. In the report they had, in accordance with normal scientific standards, described their sources and methods of data collection. Nothing,
they maintained, had been acquired by illegal or even ethically doubtful
methods. All they had done was to apply basic methods of social science and
use open sources and openly available technical descriptions.5 One of their
main sources was in fact the Norwegian telephone directory, where all defense
2 (cont.). On the Nordic and Western European peace movements, see European Peace Movements and the Future of the Western Alliance, ed. Walter Laqueur and R. E. Hunter (New Brunswick: Transaction Books, 1985).
3. Beginning in the mid-1970s, the Stockholm International Peace Research Institute (SIPRI) published a large number of books and research reports on the nuclear arms race. See Frank Barnaby and Ronald Huisken, Arms Uncontrolled (Cambridge, MA: Harvard University Press, 1975). A more activist approach was taken by other researchers moving closer to the popular movement, as illustrated by a reader that was widely circulated and commented on, Protest and Survive, ed. Dan Smith and Edward Palmer Thompson (Harmondsworth: Penguin, 1980).
4. Owen Wilkes and Nils Petter Gleditsch, “Research on Intelligence or Intelligence as Research,” in Egbert Jahn and Yoshikazu Sakamoto, eds., Elements of World Instability: Armaments, Communication, Food, International Division of Labor, Proceedings of the Eighth International Peace Research Association General Conference (Frankfurt/New York: Campus, 1981). Norwegian version: Forskning om etterretning eller etterretning som forskning (Oslo: PRIO, 1979; expanded version of Wilkes and Gleditsch as ch. 2).
5. Gleditsch and Wilkes, Forskning om etterretning eller etterretning som forskning; Gleditsch and Wilkes, “Research on Intelligence or Intelligence as Research.” Incidentally, the background of the original report was another ongoing court case regarding alleged revelation of defense secrets. One purpose of the Gleditsch/Wilkes report was to display just how much could be deduced from open sources.
Framing the Divide
6. Gleditsch and Wilkes, Intelligence Installations in Norway: Their Number, Location, Function, and Legality, p. 11.
7. Gleditsch and Wilkes, Forskning om etterretning eller etterretning som forskning, pp. 1–2. The section on the parallels between intelligence and research appears only in the Norwegian version.
8. See Forskning eller spionasje. Rapport om straffesaken i Oslo Byrett i mai 1981 (Research or Espionage. Report on the Criminal Trial in Oslo Town Court in May 1981) (Oslo: PRIO, 1981) for the verdict and a complete record of the proceedings. A shorter version in English appears as The Oslo Rabbit Trial. A Record of the “National Security Trial” against Owen Wilkes and Nils Petter Gleditsch in the Oslo Town Court, May 1981 (Oslo: PRIO, 1981). See also Round Two: The Norwegian Supreme Court vs. Gleditsch & Wilkes (Oslo: PRIO, 1982).
of espionage but by the nature of the aggregated output of the data collection.
The verdict broadened the whole issue from the subject of the original report
to a conflict between the principles of academic freedom and the protection of
national security. Leading academics, even those who did not necessarily sympathize with the researchers, nevertheless came out strongly against the verdict and against the very idea that the employment of open sources could constitute a crime.9 The two domains seemed, not only in the isolated Norwegian case but throughout the Western world, very far apart, with incompatible values, professional ethos, and perceptions of their roles in society. Yet the relationship had not started that way. Indeed the two worlds had, under external pressure, found each other and developed not only coexistence but also far-reaching cooperation. Bridges had been built, only to be burned to the ground later.
9. “Dommen over Gleditsch og Wilkes. Fire kritiske innlegg” (The Verdict on Gleditsch and Wilkes. Four Critical Contributions) (Oslo: Peace Research Institute of Oslo, 1981). For a discussion of the wider implications of the Norwegian case, see Nils Petter Gleditsch and Einar Høgetveit, “Freedom of Information and National Security: A Comparative Study of Norway and the United States,” Journal of Peace Research 21, no. 1 (1984): 17–45, and Nils Petter Gleditsch, “Freedom of Expression, Freedom of Information, and National Security: The Case of Norway,” in Sandra Coliver et al., eds., Secrecy and Liberty: National Security, Freedom of Expression and Access to Information (The Hague: Nijhoff, 1999), pp. 361–388.
10. Max Ronge, Kriegs- und Industrie-Spionage: Zwölf Jahre Kundschaftsdienst (Wien: Amalthea-Verl., 1930).
11. On the role of Haber, see Dietrich Stoltzenberg, Fritz Haber: Chemiker, Nobelpreisträger, Deutscher Jude (Weinheim: VCH, 1994).
12. Jonathan Shimshoni, “Technology, Military Advantage, and World War I: A Case for Military Entrepreneurship,” International Security 15, no. 3 (1990): 187–215.
13. Guy Hartcup, The War of Invention: Scientific Developments, 1914–18 (London: Brassey’s Defence Publishers, 1988).
14. Jean-Jacques Salomon, Science and Politics (Cambridge, MA: MIT Press, 1973), p. 31.
15. For the role of sociology, see Peter Buck, “Adjusting to Military Life: The Social Sciences Go to War, 1941–1950,” in Military Enterprise and Technological Change: Perspectives on the American Experience (Cambridge, MA: MIT Press, 1985).
16. The process of military bureaucratization and the friction it created is a lead theme in the literature on the Manhattan Project. See Robert Jungk, Brighter Than a Thousand Suns: A Personal History of the Atomic Scientists (Harmondsworth: Penguin Books, 1982); Richard Rhodes, The Making of the Atomic Bomb (New York: Simon and Schuster, 1986); Silvan S. Schweber, In the Shadow of the Bomb: Oppenheimer, Bethe, and the Moral Responsibility of the Scientist (Princeton, NJ: Princeton University Press, 2000). On the parallel German case, see David C. Cassidy, Uncertainty: The Life and Science of Werner Heisenberg (New York: W. H. Freeman, 1992); Paul Lawrence Rose, Heisenberg and the Nazi Atomic Bomb Project: A Study in German Culture (Berkeley: University of California Press, 1998).
17. The wide employment of radar led to new scientific demands. The performance of radar stations was calculated according to normal conditions, while actual performance could vary with atmospheric conditions. In 1943, British and American scientists set up a joint committee to pool their knowledge of the phenomenon called propagation and to conduct further theoretical and experimental research to provide the armed forces with actionable propagation forecasts. See C. G. Suits, George R. Harrison, and Louis Jordan, Science in World War II: Applied Physics, Electronics, Optics, Metallurgy (Boston: Little, Brown, 1948).
18. Francis H. Hinsley and Alan Stripp, Codebreakers: The Inside Story of Bletchley Park (Oxford: Oxford University Press, 2001).
19. On the breaking of the Geheimschreiber machine-crypto, see C. G. McKay and Bengt Beckman, Swedish Signal Intelligence: 1900–1945 (London: Frank Cass, 2003).
methods were employed to solve an intelligence task. The prime mover was,
as with all wartime research efforts, necessity; code breaking could not be
accomplished on a wider scale or against high-grade systems without this influx, mainly from the academic world, an influx that also included the nonconformity described at Bletchley Park—incidentally, the same phenomenon described in numerous accounts of the Manhattan Project. Given this
exclusive expertise, signals intelligence could not easily be subjected to the
conformism of military bureaucracy and could retain an element of auton-
omy, as independent organizations or as a semi-autonomous profession.
Yet the mobilization of intellectual and institutional resources for military
research and development also in itself constituted an emerging intelligence
requirement. Academics were needed to monitor the work of colleagues on
the other side. Shortly before the war, R. V. Jones, a young Briton with a doctorate in physics from Oxford, was approached by a staff member of Sir Henry Tizard’s Committee for the Scientific Survey of Air Defence. Britain
was, in this period, ahead of most other nations in integrating the academic
scientific community and defense research and development, as illustrated
by the lead in radar technology accomplished by the Royal Air Force (RAF)
over the main adversary, Germany. However, the Committee had, as war drew
closer, experienced a problem regarding intelligence, or rather lack of intel-
ligence: the British intelligence services simply did not provide material that
gave any insights into German efforts to apply science in aerial warfare.20 Jones
was offered the task of closing this gap and came to head one of the first forays
in scientific intelligence. In the end, he found that task less challenging than
scientific research: after all, the only thing he had to do was to figure out things that others had already discovered and that were thus, by definition, achievable.21 To
some extent, as Jones discovered, the growing uses of science in warfare and in intelligence were simply two sides of the same coin: scientific development of the means of warfare created the need for countermeasures, impossible without intelligence coverage of developments on the other side of the hill, an interaction that would continue and suffuse much of Cold War intelligence.
Scientific intelligence and technological intelligence were only two of a wide
range of tasks that were rapidly expanding and transforming intelligence orga-
nizations. Foreign intelligence, cryptography, special operations, deception,
and psychological warfare were tasks that not only demanded new agencies but
also human resources with new competences. However, recruitment from the
universities was more than adding specific competences to fields or problems
20. R. V. Jones, Most Secret War: British Scientific Intelligence 1939–45 (London: Coronet Books, 1979), p. 1.
21. Ibid., p. 662.
that could not be handled without them. The new recruits joined intelligence
not only as specialists but also as intelligence officers, serving along with offi-
cers with a more traditional background in the military. One obvious reason for this kind of recruitment was the need for language skills across a range of intelligence tasks, from espionage, or human intelligence (HUMINT)
collection, to the study of open sources and the conduct of intelligence liaison
and covert operations. The wartime U.S. Office of Strategic Services (OSS), as
well as its British counterpart, the Secret Intelligence Service (SIS), or MI-6,
needed people who spoke German but who also understood German society,
culture, and mindset. The same requirement arose concerning Japan, Italy,
and the Axis-allied countries as well as countries under occupation.
This was not only a concern for the major powers and their intelligence
efforts. In Sweden, a secret intelligence service for HUMINT collection and
special operations was established in the autumn of 1939. It was headed by a military officer, but the majority of the small staff came from the universities. One
of the recruits was Thede Palm, a doctor of theology and fluent in German,
who was assigned the task of interrogating travelers arriving on the regular
ferry lines from Germany, screening them for any observation with intelli-
gence significance. Dr. Palm, as he was referred to, would take over as director
after the war and run secret intelligence for another twenty years.
Languages, however, like physics and mathematics, were only supporting competences for the conduct of intelligence. But university graduates, whether in economics, classics, or theology, also brought along something else—
a way to think, to deal with intellectual problems, and to structure information.
While not as immediately useful as the ability to read and speak German, this
more general intellectual competence had a significance that was soon obvious.
In its field operations, the OSS experienced the value of academic training, and
in his final report, Calvin B. Hoover, head of the OSS North Central European Division, covering Germany, the Soviet Union, and Scandinavia, noted from his experience that intelligence officers who lacked a university or college background did not perform well on mission, since these operators were often left to solve or even formulate tasks on their own, without detailed guidance from remote headquarters and over unreliable and slow communications. These field officers needed the perspective gained by a theoretical education to grasp the complexity of the conditions under which they had to operate
and the kind of information they needed, not to mention assessing the crucial
distinction between gossip and hearsay, on the one hand, and information that
could be verified and documented on the other.22
22. Calvin B. Hoover, “Final Report” (no date), RG 226 (OSS), Entry 210, Box 436 (National Archives, College Park, MD). As an example of inadequate educational background, Hoover refers to a report received from one of the OSS agents in Switzerland, who in great excitement reported that he had learned the secret by which the Germans produced gasoline from coal. But instead of the complicated industrial process, he simply described the principle of adding so many atoms of hydrogen to so many atoms of carbon, well known to any chemical engineer. This particular report was actually disseminated and, as Hoover remarks, “aroused considerable amusement from some of the agencies which received it.” The Hoover example could be straight out of Graham Greene’s Our Man in Havana (London: Heinemann, 1958), all the more so because Greene’s protagonist, James Wormold, sold vacuum cleaners in Cuba. To earn additional money, he agreed to run spies for Britain and created an entirely fictitious network. At one point, he sent pictures of vacuum cleaner parts to London, calling them sketches of a secret military installation in the mountains.
23. For this report by world-leading social scientists, see Raffaele Laudani, ed., Secret Reports on Nazi Germany: The Frankfurt School Contribution to the War Effort. Franz Neumann, Herbert Marcuse, Otto Kirchheimer (Princeton, NJ: Princeton University Press, 2013).
24. Petra Marquardt-Bigman, “The Research and Analysis Branch of the Office of Strategic Services in the Debate of US Policies toward Germany, 1943–46,” Intelligence and National Security 12, no. 2 (1997): 91–100; Betty Abrahamsen Dessants, “Ambivalent Allies: OSS’ USSR Division, the State Department, and the Bureaucracy of Intelligence Analysis, 1941–1945,” Intelligence and National Security 11, no. 4 (1996): 722–753.
25. Barry Katz, Foreign Intelligence: Research and Analysis in the Office of Strategic Services, 1942–1945 (Cambridge, MA: Harvard University Press, 1989).
26. Allen Dulles, The Craft of Intelligence (New York: Harper & Row, 1963), p. 154.
27. Thede Palm, Några studier till T-kontorets historia, vol. 21, Kungl. Samfundet for
28. Jones, Most Secret War: British Scientific Intelligence 1939–45, and R. V. Jones, Reflections on Intelligence (London: Mandarin, 1990). Intelligence veteran Michael Herman strongly disagreed with Jones, arguing that Jones failed to take into account the realities of the Cold War intelligence machinery, based as it was on efficient production lines, especially in the SIGINT domain: Herman, Intelligence Power in Peace and War (Cambridge: Cambridge University Press, 1996). Also see R. V. Jones, Instruments and Experiences: Papers on Measurement and Instrument Design (Hoboken, NJ: Wiley, 1988).
29. See Jeffrey T. Richelson, The Wizards of Langley: Inside the CIA’s Directorate of Science and Technology (Boulder, CO: Westview, 2002).
30. Sherman Kent, Strategic Intelligence for American World Policy (Princeton, NJ: Princeton University Press, 1949).
31. Sherman Kent, “The Need for an Intelligence Literature,” Studies in Intelligence 1, no. 1 (1955): 1–8, available at https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/sherman-kent-and-the-board-of-national-estimates-collected-essays/2need.html.
32. The Swedish historian Stig Ekman was assigned in 1974 by a parliamentary commission to write a report on the performance of Swedish military intelligence in five crises of the 1950s and ’60s. Ekman, who had been one of the senior historians in charge of a large-scale research project on Sweden during the Second World War, wrote an extensive report only to have it classified Top Secret; he was unable to retrieve his own manuscript for more than 20 years, and then only in a heavily sanitized version, which he eventually published: Stig Ekman, Den militära underrättelsetjänsten. Fem kriser under det kalla kriget (The Military Intelligence Service. Five Crises during the Cold War) (Stockholm: Carlsson, 2000).
33
Th is semi-academic character of the analytic profession in the US intelligence community
is well reflected in the ethnographer Rob Johnston’s report: Rob Johnston, "Analytic Culture in
the US Intelligence Community: An Ethnographic Study" (Washington, DC: Center for Study
of Intelligence,Central Intelligence Agency, 2005). A number of non-US examples of semi- or
non-academic intelligence cultures are given in Phillip H. J. Davies, and Kristian C. Gustafson,
Intelligence Elsewhere: Spies and Espionage Outside the Anglosphere (Washington, DC: Georgetown
University Press, 2013).
34. For further comments on perceptions of intelligence as a profession, see Wilhelm Agrell, "When Everything Is Intelligence, Nothing Is Intelligence," Kent Center Occasional Papers (Washington, DC: Central Intelligence Agency, 2003).
35
For a discussion of the scientific nature of medicine and the implications for intelligence,
see Walter Laqueur, A World of Secrets: The Uses and Limits of Intelligence (London: Weidenfeld
and Nicolson, 1985), p. 302. Also see Stephen Marrin and Jonathan D. Clemente, "Modeling
an Intelligence Analysis Profession on Medicine 1," International Journal of Intelligence and
CounterIntelligence 19, no. 4 (2006): 642–665.
The first and most important of the missing incentives is perhaps the self-image of the intelligence profession. The craft or mystery conception is not only a product of the absence of alternatives, of possible paths toward scientifically based analytic methods, but also a powerful instrument that intelligence analysts and officials use to draw a sharp dividing line between insiders and outsiders: those in the know and those not, the latter by definition unable to add anything of substance. The impact of this on self-esteem, professional identity, and policymaker access should not be underestimated.
The notion of a secret tradecraft is a powerful instrument for warding off external critics, a method observable not only in intelligence but also in many academic controversies, where interference by representatives of other disciplines is often far from welcome and those who interfere are regarded, almost by definition, as ignorant and irrelevant. The transformation of intelligence analysis toward overt, systematically employed, and verifiable methods would not completely remove but would inevitably weaken the protective wall surrounding craft and mystery. One of the most important aspects of the critical public debate since 2001 over the performance of intelligence has been the penetration of this wall, and thus possibly the weakening of this major negative incentive.
However, the unprecedented openness about analytic products and pro-
cesses around the Iraqi WMD case has also affected a second negative incen-
tive: the impact of secrecy. Secrecy, as a phenomenon in intelligence, is both
functional and dysfunctional—both an obvious necessity to protect sensitive
sources, methods, and valuable intellectual property and an element in the
intelligence mythology employed to shield organizations and activities and
to amplify the assumed significance of trivial information and flimsy assess-
ments. The extensive employment of secrecy has, intentionally or not, blocked
the intellectual development of intelligence analysis by drastically limiting
the empirical basis for any such process. True, there is a rapidly expanding
scholarly literature on the history of intelligence, based on a growing range
of documentary sources, but there is a vast time lag between the periods cov-
ered by the historians and the contemporary conduct of intelligence analysis.
Furthermore, the documentary material available to historians is often incomplete and in some cases misleading due to prevailing secrecy.36
Intelligence analysis cannot abandon secrecy for the sake of methodology. But secrecy can be employed in a more selective and “intelligent” way if the empirical studies and methodological self-reflection are regarded not as an external concern but as being in the interest of intelligence itself.
36. See Richard J. Aldrich, "Policing the Past: Official History, Secrecy and British Intelligence
37. Rob Johnston, "Analytic Culture in the US Intelligence Community."
38. Ulrich Beck, Risk Society: Towards a New Modernity (London: Sage, 1992). For a wider discussion of science and risks, see Maurie J. Cohen, Risk in the Modern Age: Social Theory, Science and Environmental Decision-Making (Basingstoke: Palgrave, 2000).
39. Michael Herman identifies four characteristics of people belonging to an intelligence culture: a sense of being different, of having a distinct mission, the multiplying effect of secrecy, and, finally, that of mystery. Herman, Intelligence Power in Peace and War, pp. 327–329.
40. Vannevar Bush, "Science: The Endless Frontier," Transactions of the Kansas Academy of Science (1903–) 48, no. 3 (1945).
41. For the development of postwar research policy in the OECD countries, see Salomon, Science and Politics.
42. Michael Gibbons et al., The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies (London: Sage, 1994).
while a new form of research is carried out in the context of applications. While Mode 1 is characterized by epistemological homogeneity, Mode 2 is heterogeneous and trans-disciplinary, employing a different type of quality control.
In a sense, Gibbons’s often-quoted model simply reflected the innovation in
research structure in American, British, and German wartime research. The
difference was that Mode 2 now had expanded beyond the realm of national
security and reflected the wider role of science heralded by the utopian endless
frontier and the dystopian risk society. In Mode 2 problems are defined in a
different way, one more inclusive of practitioners and demanding a multidisciplinary approach.43 Fields such as environment, climate, or migration
are simply too broad and too complex for a single academic discipline. The
emerging process toward multi-disciplinary research, starting internally in the
academy in the 1960s, thus met an external demand-pull from the 1990s and
onward. The implication of Mode 2 is a transformation of research in terms of problem definitions, ways of operating, and, not least, links with external interests, the stakeholders—a transformation that in many respects tends to make research less different from intelligence. This convergence can be observed
especially where scientific expertise and research efforts are directly utilized
in addressing major risks in the society or are mobilized in crisis manage-
ment.44 We pursue these commonalities between intelligence and what might
be called “policy analysis 2.0” in more detail in Chapter 6.
43. Ibid., pp. 3–8.
44. One of the very few discussions of the intelligence aspects of risk science working on the basis of a preventive paradigm is Mark Phythian, "Policing Uncertainty: Intelligence, Security and Risk," Intelligence and National Security 27, no. 2 (2012): 187–205.
45. Graham T. Allison, Essence of Decision: Explaining the Cuban Missile Crisis (Boston: Little, Brown, 1971).
46. Graham T. Allison and P. Zelikow, Essence of Decision: Explaining the Cuban Missile Crisis (New York: Longman, 1999); Irving L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes (Boston: Houghton Mifflin, 1972).
47. The Military Buildup in Cuba, SNIE 85-3-62, September 19, 1962, reproduced in part in Mary S. McAuliffe, CIA Documents on the Cuban Missile Crisis, 1962 (Washington, DC: Central Intelligence Agency, 1992).
48. The final twist to this misconception was supplied by Sherman Kent, the most senior analyst responsible for the drafting of the SNIE, who after the event argued that it was not the CIA analysts who had been wrong but Nikita Khrushchev, since the analysts had foreseen disaster for the Russians if they tried such a threatening move against the United States. See Raymond L. Garthoff, "U.S. Intelligence in the Cuban Missile Crisis," in James G. Blight and David A. Welch, eds., Intelligence and the Cuban Missile Crisis (London: Frank Cass, 1998).
49. Charles A. Duelfer, Comprehensive Report of the Special Advisor to the DCI on Iraq's WMD (Washington, DC, September 30, 2004), available at https://www.cia.gov/library/reports/general-reports-1/iraq_wmd_2004/.
50. Ephraim Kam, Surprise Attack: The Victim's Perspective (Cambridge, MA: Harvard
much more like that single U-2 over-flight, guided by specific intelligence rel-
evance but also by the level of reliability needed to supply actionable answers.
From a research perspective the issues at stake for intelligence might appear uninteresting or irrelevant, and the methods employed dubious. There is, however, a complication not only in the methods and the way studies are designed; the major controversy is one of purpose, the use or possible misuse of research.51 If research on intelligence can be provocative, then, conversely, so is intelligence as research. To what extent do researchers want their methods and
findings to be employed for intelligence purposes? And how far can academic
research be pursued before colliding with secrecy and the interests of national
security? Here the issue at stake is not primarily one of differences in interpre-
tations, but one of research and intelligence ethics.
Intelligence analysis and research can thus, as those drafting or studying intelligence assessments have observed over and over again, share methods and be confronted with similar or even identical problems. But are they two sides of
the same coin? Or are they the same side of two different coins? To be more precise, is the divide about two different social contexts or two basically different principles for constructing knowledge, though sometimes with similar methods?
This highlights a fundamental underlying issue: whether intelligence analysis
and scientific research can be defined as variants of a single epistemology, or
whether they represent two separate epistemologies, not only in terms of orga-
nization and culture but also in terms of how knowledge is perceived and cre-
ated. Looking back on the divide that in many respects was a creation of the Cold War security and intelligence universe, the answer would be the latter. However, looking at more recent experiences and the outlook for the future, the answer would be a different one. What we are observing is something that the sociology of science calls "epistemological drift," whereby the two coins are merging, not completely, but in this fundamental respect.
51. For a classic study of the issue, see Irving Louis Horowitz, ed., The Rise and Fall of Project Camelot: Studies in the Relationship between Social Science and Practical Politics (Cambridge, MA: MIT Press, 1967).
3
Given Cold War history and the sheer scale of the US intelligence enterprise,
the United States has cast a long shadow over the practices of its friends and
allies. This chapter explores what intelligence analysis is, and why, especially in
American practice, it has turned out as it has. Yet analysis is hardly singular; there
are many forms for many customers with many different needs. Moreover, trying to understand analysis on its own is like listening to one hand clapping: it cannot
be understood apart from what are still—and unhelpfully—called consumers
(or worse, customers). Nor can analysis be separated from collection. More than
a half century after publication, the writings of Sherman Kent and his critics are
vivid about the basics of US analysis and equally vivid about what might have
been. Intellectually, if perhaps not politically, intelligence analysis did not have
to distinguish sharply between “foreign” and “domestic,” with the trail of foreign
assessment stopping at the water’s edge. Nor did it have to give pride of place to
collection, with the first question asked about uncertain judgments being: can
we collect more? Nor did it have to separate intelligence more sharply than did
most countries from policy and politics lest intelligence become “politicized.”
What Is Analysis?
Start with the basics, the nature of the question or issue, and what are called—
not very helpfully as will be seen later—consumers. Table 3.1 lays out three
categories of questions, from puzzles, to mysteries, to complexities.1 When the
Soviet Union would collapse was a mystery, not a puzzle. No one could know
1. On the distinction between puzzles and mysteries, see Gregory F. Treverton, "Estimating beyond the Cold War," Defense Intelligence Journal 3, no. 2 (1994), and Joseph S. Nye Jr., "Peering into the Future," Foreign Affairs (July/August 1994): 82–93. For a popular version, see Gregory F. Treverton, "Risks and Riddles: The Soviet Union Was a Puzzle. Al Qaeda Is a Mystery. Why We Need to Know the Difference," Smithsonian (June 2007).
the answer. It depended. It was contingent. Puzzles are a very different kind of
intelligence problem. They have an answer, but we may not know it. Many of the
intelligence successes of the Cold War were puzzle-solving about a very secre-
tive foe: were there Soviet missiles in Cuba? How many warheads did the Soviet
SS-18 missile carry?
Puzzles are not necessarily easier than mysteries: consider the decade required
to finally solve the puzzle of Osama bin Laden’s whereabouts. But they do come
with different expectations attached. Intelligence puzzles are not like jigsaw puz-
zles in that we almost certainly won’t have all the pieces and so will be unsure we
have the right answer. The US raid on Osama bin Laden in 2011 was launched,
according to participants in the decision, with odds no better than six in ten that
bin Laden actually was in the compound. But the fact that there is in principle an
answer provides some concreteness to what is expected of intelligence. By con-
trast, mysteries are those questions for which there is no certain answer. They are
iffy and contingent; the answer depends not least on the intervention, be it policy
or medical practice. Often, the experts—whether intelligence analysts, doctors,
or policy analysts—find themselves in the position of trying to frame and con-
vey essentially subjective judgments based on their expertise.
“Complexities,” by contrast, are mysteries-plus.2 Large numbers of relatively
small actors respond to a shifting set of situational factors. Thus, they do not
2. The term is from David Snowden, "Complex Acts of Knowing: Paradox and Descriptive Self-Awareness," Journal of Knowledge Management 6, no. 2 (2002). His "known problems" are like puzzles and his "knowable problems" akin to mysteries.
necessarily repeat in any established pattern and are not amenable to predictive
analysis in the same way as mysteries. Those characteristics describe many trans-
national targets, like terrorists—small groups forming and reforming, seeking to
find vulnerabilities, thus adapting constantly, and interacting in ways that may
be new. Complexities are sometimes called “wicked problems,” and one defini-
tion of those problems suggests the challenges for intelligence, and in particular
the “connectedness” of the threat with our own actions and vulnerabilities:
“Wicked problems are ill-defined, ambiguous and associated with strong
moral, political and professional issues. Since they are strongly stakeholder
dependent, there is often little consensus about what the problem is, let alone
how to resolve it. Furthermore, wicked problems won’t keep still: they are sets
of complex, interacting issues evolving in a dynamic social context. Often, new
forms of wicked problems emerge as a result of trying to understand and solve
one of them.”3
The second thing to notice about intelligence analysis is that it is plural. The
“analysis” done in translating a video image from a drone into target coordinates
that appear in a pilot’s cockpit—what the Pentagon calls DIMPIs (designated
mean points of impact, pronounced “dimpy”)—may be totally a processing
operation and all done automatically, without human hands (or brains) in the
process once the system is designed. At the other end of the spectrum, deep
mysteries, like charting Egypt’s future after the “Arab spring” in 2011, require
several kinds of human expertise and will be enriched by employing a variety
of analytic techniques. In between, there are in principle a multitude of needs
that consumers have for analysis. Table 3.2 identifies a dozen kinds of needs to
make that multitude manageable.
For each need, the table identifies whether the question at issue is a puzzle,
a mystery, or a complexity. It also hazards guesses for each, about how much
demand there will be from policy officials and how much time will be required
of those officials in framing and conveying the intelligence. For instance,
self-validating tactical information, those DIMPIs again, is a puzzle in high demand from policy officials or operators and doesn't require much time from
those officials. They are self-validating in the sense that the target either is or
isn’t where the coordinates say it should be, and exactly how the coordinates
were put together is of scant interest to the consumer.
By contrast, if the task asked of analysis was to assess the implications of vari-
ous policy choices, that process would be mystery-framing, not puzzle-solving.
It would take both time and candor by policy officials who would have to tell
analysts what alternatives were under consideration in a way that seldom happens. And it is perhaps not too cynical to observe that policy officials are more likely to want this sort of analysis if they think it will support their preferred position. Trying to make sense of complexities probably requires even more policymaker time, for it should be done jointly, perhaps in a table-top exercise where all hypotheses and all questions are in order given the shapelessness of the issue.
3. Tom Ritchey, Structuring Social Messes with Morphological Analysis (Stockholm: Swedish
Conveying Uncertainty
The complexities example drives home the point that uncertainty cannot be
eliminated, only assessed and then perhaps managed. That is more and more
obvious when the analytic task moves away from warning, especially very
tactical warning, toward dealing with more strategic and forward-looking mysteries, ones for which the analysis begins where the information ends and uncertainty is inescapable. In framing this task, it is useful to compare Carl von Clausewitz with his lesser-known contemporary strategist, Antoine-Henri, Baron de Jomini.4 Jomini, a true child of the Enlightenment,
saw strategy as a series of problems with definite solutions. He believed that
mathematical logic could derive “fundamental principles” of strategy, which
if followed should mean for the sovereign that “nothing very unexpected can
befall him and cause his ruin.”5 By contrast, Clausewitz believed that unpre-
dictable events were inevitable in war, and that combat involved some irre-
ducible uncertainty (or “friction”). He characterized war as involving “an
interplay of possibilities, probabilities, good luck and bad,” and argued that
“in the whole range of human activities, war most closely resembles a game
of cards.”6
Intelligence, perhaps especially in the United States, talks in Clausewitzian
terms, arguing that uncertainty, hence risk, can only be managed, not elimi-
nated. Yet the shadow of Jomini is a long one over both war and intelligence.
In fact, intelligence is still non-Clausewitzian in implying that uncertainty
can be reduced, perhaps eliminated. That theme runs back to Roberta
Wohlstetter’s classic book about Pearl Harbor, which paints a picture of
“systemic malfunctions.”7 There were plenty of indications of an impending
attack, but a combination of secrecy procedures and separated organizations
kept them from being put together into a clear warning. If the dots had been
connected, to use a recently much-overused phrase, the attack could have
been predicted. So, too, the US report on the 9/11 terrorist attacks imposes a kind of Wohlstetter template, searching for signals that were
4. This discussion owes much to Jeffrey A. Friedman and Richard Zeckhauser, "Assessing Uncertainty in Intelligence," Intelligence and National Security 27, no. 6 (2012): 824–847.
5. Antoine-Henri, Baron de Jomini, The Art of War, trans. G. H. Mendell and W. P. Craighill (Mineola, NY: Dover, 2007), p. 250.
6. C. von Clausewitz, On War, trans. Michael Howard and P. Paret (Princeton, NJ: Princeton University Press, 1976). For a nice comparison of Clausewitz and Jomini, see Mark T. Calhoun, "Clausewitz and Jomini: Contrasting Intellectual Frameworks in Military Theory," Army History 80 (2011): 22–37.
7. Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Palo Alto: Stanford University Press, 1962).
present but not put together.8 The perception of linearity is captured by its
formulation “the system is blinking red.” Table 3.3 summarizes the differ-
ences between the Jominian and Clausewitzian approaches:
The Jominian approach pervades how analysis is done and how it is taught.
Most assessments, like American National Intelligence Estimates (NIEs),
provide a “best” estimate or “key judgments.” They may then set out alternatives or excursions, but the process tends to privilege probability over consequences, when in fact it is the combination of the two together that matters
to policy. This emphasis on “best bets” also runs through familiar analytic
techniques, like analysis of competing hypotheses (ACH). But “competition
for what?” The usual answer is likelihood. Indeed, the original description of
ACH, in the now-classic book by Richards Heuer, explains its goal as being to
determine “which of several possible explanations is the correct one? Which of
several possible outcomes is the most likely one?”9
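Heuer's description of ACH as a competition among explanations can be made concrete with a small matrix exercise. The sketch below is illustrative only: the hypotheses, evidence items, and consistency scores are invented, and it implements just the core ranking step of preferring the hypothesis with the least inconsistent evidence, not the full method with its attention to diagnosticity and source credibility.

```python
# Minimal sketch of the ranking step in Analysis of Competing Hypotheses
# (ACH): score each piece of evidence against each hypothesis, then rank
# hypotheses by how little evidence is inconsistent with them.
# All hypotheses, evidence items, and scores here are invented.

evidence_scores = {
    # evidence item: {hypothesis: -1 inconsistent, 0 neutral, +1 consistent}
    "troop movements observed":    {"H1: exercise": +1, "H2: attack": +1},
    "reserves not mobilized":      {"H1: exercise": +1, "H2: attack": -1},
    "diplomatic talks continuing": {"H1: exercise":  0, "H2: attack": -1},
}

def rank_hypotheses(scores):
    """Rank hypotheses by ascending count of inconsistent evidence."""
    hypotheses = {h for row in scores.values() for h in row}
    inconsistency = {
        h: sum(1 for row in scores.values() if row.get(h, 0) < 0)
        for h in hypotheses
    }
    return sorted(inconsistency.items(), key=lambda kv: kv[1])

for hypothesis, count in rank_hypotheses(evidence_scores):
    print(f"{hypothesis}: {count} piece(s) of inconsistent evidence")
```

Note that the competition here, as in the text's critique, is over likelihood alone; the consequences of each hypothesis being true enter nowhere in the ranking.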
A true Clausewitzian approach would rest, instead, on three principles:
8. National Commission on Terrorist Attacks upon the United States, The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States (Washington, DC, 2004), available at http://www.9-11commission.gov/.
9. Richards J. Heuer, Psychology of Intelligence Analysis (Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency, 1999), p. 95.
could be thought of as puzzles: how many centrifuges with what capacity and
so on? Yet the critical questions were mysteries: what did Iran intend with its
enrichment program? What were the critical determinants of decisions about it?
And critically, how would Iran respond to various sticks and carrots offered by
the international community? With regard to weaponization, the NIE inferred
a conclusion about that last mystery from the puzzle it had solved: Iran’s leaders
had stopped its weaponization program at least partly in response to interna-
tional pressure.
Turning mysteries into puzzles is a temptation in analytic tradecraft. That
was a conspicuous feature of the October 2002 NIE on Iraq. The question had
narrowed to a puzzle: does Saddam have weapons of mass destruction? There
was not much “Iraq” in the NIE; even the dissents turned on technical mat-
ters, like aluminum tubes. A Clausewitzian approach can hardly eradicate that
temptation, but it might help lay out more clearly the puzzle and mystery por-
tions of the issue being assessed, and serve as a check on neglecting important
mysteries simply because we don’t know much about them.
11. Arthur Hulnick, "What's Wrong with the Intelligence Cycle," in Strategic Intelligence, ed. Loch Johnson (Westport, CT: Greenwood, 2007), p. 1.
12. Stewart's line was "I know it when I see it." F. P. Miller, A. F. Vandome, and M. B. John, Jacobellis v. Ohio (Saarbrücken, Germany: VDM Publishing, 2011).
[Figure: the intelligence cycle, from raw data screening and HUMINT screening through data processing, exploitation, and distribution to all-source/fusion analysis, integrated analysis, policy analysis, and dissemination.]
The emphasis on collection in the cycle has at least three negative consequences. First, it probably leads to too much collection, which is expensive, at the expense of too little analysis, which is cheap.14 Reviews of finished products
almost always include some analysis of “gaps”—where was intelligence missing
and thus where might different or better collection have helped “fill the gap”?
That is exacerbated by the requirements process, which provides a virtually bot-
tomless vessel to fill with information. This emphasis on quantity is explicit in
the work of nineteenth-century German historiographer Ernst Bernheim, to
whose work Kent often referred.15 For Bernheim, it was exactly the quantity of
data that could make the Auffassung—comprehension of the true meaning of
the evidence—objective. “For true certainty . . . the available data must be so
abundant and dense that it only allows room for a single connection.” When data
are insufficient, then “several combinations will be probable in the same degree,”
allowing the historian to make only “a hypothesis”—a condition which the his-
torian should remedy by making every effort to get more data.16 Yet review after
review suggests that intelligence collects more than it can absorb. In the 1970s,
for instance, James Schlesinger, at the time head of the Office of Management
and Budget (OMB), criticized the US intelligence community for the “strong
presumption . . . that additional data collection rather than improved analysis
will provide the answer to particular intelligence problems.” 17
The second problem with emphasizing collection is that doing so also
emphasizes what can be collected. Intelligence has often been accused of com-
mitting errors akin to the famous “looking for the keys under the lamppost.”
Thirty years ago, one of us (Agrell) wrote of technical intelligence that it made
it possible for the most advanced countries to get “an almost complete picture
of the strength, deployment and activity of foreign military forces.” The neg-
ative was an over-emphasis on what could be counted. Intelligence became
“concentrated on evaluation and comparison of military strength based exclu-
sively on numerical factors.”18 Agrell was emphasizing the intangibles even
14. Anthony Olcott, "Stop Collecting – Start Searching (Learning the Lessons of Competitor Intelligence)," unpublished paper.
15. The book Kent cites, Lehrbuch der historischen Methode und der Geschichtsphilosophie, was first published in 1889 and subsequently republished in five further editions, the last of which appeared in 1908—the one to which Kent refers.
16. Patrick Kelly, "The Methodology of Ernst Bernheim and Legal History," as quoted in Olcott, "Stop Collecting–Start Searching."
17. "A Review of the Intelligence Community" (Washington, DC: Office of Management and Budget, 1975), pp. 10–11.
18. Wilhelm Agrell, "Beyond Cloak and Dagger," in Clio Goes Spying: Eight Essays on the History of Intelligence, ed. W. Agrell and B. Huldt (Lund: Lund Studies in International History, 1983), pp. 184–185.
of military strength, like morale, determination, leadership, and the like. The
broader point is that it is easier to collect data about what is going on than what
a target is thinking or what will happen.
That point is driven home by recent US experiences with intelligence in
support of warfighters. While, tactically, those DIMPIs are welcome, they
are not enough even on the battlefield. One critique emphasized that a requirements-driven process means that intelligence spends most of its time trying to fill holes rather than questioning assumptions or exploring new
hypotheses.19 Intelligence may begin by shaping leaders’ interest, when there
is little information, but quickly turns to feeding that interest, and intelli-
gence becomes more and more bound to initial assumptions. The process
develops old insights and fails to notice the changing environment. There is
more and more focus on executing “our” plan while neglecting information
that casts doubt on the assumptions undergirding that plan. In principle,
long-term assessments should play that role, but they are seldom done given
the urgency of the immediate, usually defined in very operational terms.
Another critique echoed those themes; it was of the war in Afghanistan,
and one of its authors was in charge of intelligence for the coalition. 20 In prin-
ciple, strategic and tactical overlapped; but in practice the preoccupation with
the threat from improvised explosive devices (IEDs) meant that “the tendency
to overemphasize detailed information about the enemy at the expense of
the political, economic and cultural environment [became] even more pro-
nounced at the brigade and Regional Command levels.”21 Those latter levels
had lots of analysts, and so did the Washington-based agencies, but those ana-
lysts were starved for information from the field.
By contrast, there was lots of information on the ground, in the heads of
individual soldiers, Provincial Reconstruction Teams (PRTs), and aid work-
ers. The challenge was getting those data shared upward. The critique recom-
mended that the higher levels of command send analysts to the units on the
ground, noting that the task would not be so hard, for already there were plenty
of helicopters shuttling between PRTs and brigade and battalion headquar-
ters. Moreover, such broader analysis as did get done was very stovepiped,
addressing governance or narcotics or another topic. It was like a sportswriter
discussing all the goalkeepers in a league without talking about the teams. The
authors found only one example of “white” analysis, not just “red”—that is,
19. Steven W. Peterson, US Intelligence Support to Decision Making (Cambridge, MA: Weatherhead Center for International Affairs, 2009).
20. Michael Flynn, Fixing Intel: A Blueprint for Making Intelligence Relevant in Afghanistan (Washington, DC: Center for a New American Security, 2010).
21. Ibid., p. 8.
assessments not just of the bombers but of the circumstances that produced
them—about Kandahar province done by a Canadian agency.22
The critique’s principal recommendation is also provocative for intelli-
gence and its connection to policy. The military’s existing fusion centers in
Afghanistan were fine, but they operated at the level of SCI—sensitive compartmented information. Yet most of the "white" analysis was open source. Thus,
those fusion centers might have been complemented with Stability Operations
Information Centers. Those would have been places for Afghan leaders to
come and share information, along with members of the NATO international
coalition, ISAF (International Security Assistance Force).
The third problem with privileging collection is more subtle but probably also
more powerful. The collect-then-analyze model is driven by requirements (and, in the United States, by the National Intelligence Priorities Framework, or NIPF). Collection then functions like a web scraper, scouring the world for information against those requirements and storing it for possible use, now or later. At first blush, that sounds impressively Google-esque. Yet, as Anthony Olcott notes,
competing search engines produce the same results as little as 3 percent of the
time.23 They usually produce very different results, and even if the same websites
appear, the search engines frequently rank them differently.24 And that leaves
aside the so-called dark web, which does not show up in searches but is perhaps a
thousand times larger than the known web.25 As a result, “collection as Google”
is certain to omit far more relevant information than it gathers.
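The overlap point is easy to make concrete. The sketch below computes the shared fraction of two result lists as the intersection over the union of the distinct results returned; the URLs are invented placeholders, not real search results, and real overlap studies such as Spink et al.'s use large samples of live queries rather than a toy pair of lists.

```python
# Toy illustration of the search-engine overlap point: compare the top
# results returned by two hypothetical engines for the same query.
# The URLs below are invented placeholders.

engine_a = ["site1.example", "site2.example", "site3.example", "site4.example"]
engine_b = ["site3.example", "site5.example", "site6.example", "site7.example"]

def overlap_fraction(results_a, results_b):
    """Fraction of all distinct results that both engines returned."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

print(f"Overlap: {overlap_fraction(engine_a, engine_b):.0%}")
```

The same arithmetic applied across many real queries is what yields the low overlap figures cited in the text.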
22. District Assessment: Kandahar City, Kandahar Province (Ottawa: Canadian Department of Foreign Affairs and International Trade, 2009), cited in ibid., p. 18.
23. Amanda Spink et al., "Overlap among Major Web Search Engines," in Third International Conference on Information Technology: New Generations (Los Alamitos, CA: Institute of Electrical and Electronics Engineers, 2006).
24. Anthony Olcott, “Institutions and Information: The Challenge of the Six Vs” (Washington, DC: Institute for the Study of Diplomacy, Georgetown University, 2010).
25. Chris Sherman and Gary Price, The Invisible Web: Uncovering Information Sources Search Engines Can’t See (Medford, NJ: Information Today, 2001).
44 National Intelligence and Science
26. Gregory F. Treverton and C. Bryan Gabbard, Assessing the Tradecraft of Intelligence Analysis (Santa Monica: RAND Corporation, 2008), pp. 33–34.
27
For a nice discussion of confirmation bias and what might be done about it, see Paul Lehner,
Avra Michelson, and Leonard Adelman, Measuring the Forecast Accuracy of Intelligence Products
(Washington DC: MITRE Corporation, 2010).
28. Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (1998): 175–220.
29. Lehner, Michelson, and Adelman, Measuring the Forecast Accuracy of Intelligence Products, pp. 4–5.
30. Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005), p. 149.
What Is Analysis? 45
if the predicted event didn’t occur, they recalled their confidence as only 50 percent.31
Intelligence analysts may be particularly prone to these failings because
of the verbal imprecision with which their estimates are all too often stated.
It is easy for memory to inflate “a fair chance” into near certainty if the fore-
cast event actually happens. “May occur” may be remembered as accurate no
matter which way the event actually turned out. At least since Sherman Kent,
those who manage intelligence analysis have tried to introduce more precision
into the language of estimates. In his charming account, his effort to quantify what were essentially qualitative judgments, what he called the “mathematician’s approach,” met opposition from colleagues whom he labeled the “poets.”32 Kent regarded them as defeatist; they saw his effort as spurious precision in human communications. In the latest US effort, American National
Intelligence Estimates now come with a paragraph defining terms and a chart
arranging those words in order of probability.33
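Kent’s “mathematician’s approach” amounts to a lookup from estimative words to numeric probability ranges. The sketch below is illustrative only, not an official standard: the ranges approximate the table Kent published in “Words of Estimative Probability,” and the names (`KENT_SCALE`, `estimative_range`) are our own invention.

```python
# Illustrative sketch of Kent's "mathematician's approach": mapping
# estimative words to probability ranges. The ranges approximate the
# table in Kent's "Words of Estimative Probability"; the identifiers
# below are invented for this example, not an official standard.
KENT_SCALE = {
    "certain": (100, 100),
    "almost certain": (87, 99),
    "probable": (63, 87),
    "chances about even": (40, 60),
    "probably not": (20, 40),
    "almost certainly not": (2, 12),
    "impossible": (0, 0),
}

def estimative_range(phrase: str) -> tuple[int, int]:
    """Return the (low, high) percentage range assigned to a phrase."""
    return KENT_SCALE[phrase.lower()]

print(estimative_range("Probable"))  # -> (63, 87)
```

The point of such a chart in an estimate is precisely this explicitness: a reader can check what band the drafters meant by “probable,” rather than inflating it in hindsight.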
A close kin of the confirmation bias is common sense and circular reason-
ing. As Duncan Watts, who moved from biology to sociology, puts it: “com-
mon sense often works just like mythology. By providing ready explanations
for whatever particular circumstances the world throws at us, commonsense
explanations give us the confidence to navigate from day to day and relieve
us of the burden of worrying about whether what we think we know is really
true.”34 Common sense is durable. Perhaps the most famous episode of that
durability is the US September 1962 National Intelligence Estimate that the
Soviet Union would not install nuclear missiles in Cuba, mentioned in the
previous chapter. That was perhaps the most prominent mistake Sherman
Kent’s estimators made. Yet in his postmortem, Kent attributed the mistake
more to Khrushchev than to the estimators. In Kent’s words, “We missed the
Soviet decision to put the missiles into Cuba because we could not believe that
Khrushchev could make a mistake.”35 Such was Kent’s faith in what he called
“the dictates of the mind.”36 What Khrushchev did was not rational.
31. For a psychological analysis of the roots of this hindsight bias, see Neal J. Roese and Kathleen D. Vohs, “Hindsight Bias,” Perspectives on Psychological Science 7, no. 5 (2012): 411–426.
32. See Sherman Kent, “Words of Estimative Probability,” Studies in Intelligence 8, no. 4 (Fall 1964): 49–65.
33. See, for instance, the declassified Key Judgments in National Intelligence Council, Iran: Nuclear Intentions and Capabilities, National Intelligence Estimate (Washington, DC, 2007).
34. Duncan J. Watts, Everything Is Obvious: Once You Know the Answer (New York: Crown, 2011), p. 28.
35. Sherman Kent, “A Crucial Estimate Relived,” Studies in Intelligence 8, no. 2 (1964).
36. Sherman Kent, Writing History (New York: F.S. Crofts, 1941), p. 4.
Common sense is thus close to mirror imaging: what would make sense for us
to do if we were in their shoes? To be fair, even in retrospect, it is hard to compre-
hend Khrushchev’s decision. It simply seems to carry too much risk for too little
potential gain, all the more so from a conservative leader like him. Yet there is a
certain circular reasoning to all these cases. As Olcott puts it, “we explain phenomena by abstracting what we consider to be causal factors from the phenomena, and then attribute their success to the presence of those characteristics, in effect saying ‘X succeeded because it had all the characteristics of being X.’” Even more important, when a “common sense” explanation fails, we tend to generate explanations for that failure much as Kent and his team sought explanations for their miscalculation in the NIE, in effect arguing that “Y failed because it did not have the characteristics of being X. If only the situation had included a, b, and c, then it would have been a success.”37
One source of mistaken assessment lies more on the shoulders of policymakers
than intelligence analysts. That is the “inconvenient” alternative. A vivid example
is the imposition of martial law in Poland by the Polish government in 1981. That
was probably not the “worst” alternative from NATO’s perspective; Soviet inter-
vention might have been worse in many senses. Yet all of US and NATO plan-
ning was based on the assumption that if martial law came to Poland, it would be
imposed by the Soviet Union. For the Poles to do it themselves was inconvenient.
When one of us (Treverton) teaches about the intelligence-policy interaction, one
of his cookbook questions is this: ask yourself not just what is the worst alterna-
tive, but also what is the most inconvenient alternative?
All too often the inconvenient alternative is specifically inconvenient to the
plans of a particular government or leader. On July 1, 1968, President Johnson
announced, at the signing of the Non-Proliferation Treaty, that the Soviet
Union had agreed to begin discussions aimed both at limiting defenses against
ballistic missiles and reducing those missiles. But neither date nor place for
the talks had been announced when, on August 20, the Soviet Union began its
invasion of Czechoslovakia, an event that postponed the talks and denied the
president a foreign policy success. Intelligence presaging the Soviet invasion
of Afghanistan was similarly inconvenient, hence unwelcome, for President
Jimmy Carter in 1979. First indications came when Carter was on the plane
to Vienna to sign the SALT II treaty. Later, when the evidence of the invasion
was clearer, SALT II was awaiting ratification on Capitol Hill. In both cases,
the warning was unwelcome to the president.38
38. For the US assessments on Afghanistan, see D. J. MacEachin, Predicting the Soviet Invasion of Afghanistan: The Intelligence Community’s Record (Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency, 2002).
39. Jervis, Why Intelligence Fails.
including from Jordan’s King Hussein. They also, it turned out, had a spy close
to Sadat’s inner circle. For most of the runup to the war, his detailed reporting
on capabilities tended to play into the concept. He reported in detail on what the
Soviets were—or, rather, weren’t—providing by way of capabilities to Egypt, and
so reinforced the idea that Sadat wouldn’t start a war he couldn’t win.40
40. U. Bar-Joseph, The Watchman Fell Asleep: The Surprise of Yom Kippur and Its Sources (Albany: State University of New York Press, 2005), pp. 21–23. For the employment of and reliance on “defining factors” in intelligence assessments, see Wilhelm Agrell, Essence of Assessment: Methods and Problems of Intelligence Analysis (Stockholm: National Defence College, Center for Asymmetric Threat Studies, 2012), Chapter 7.
41. Respectfully Quoted: A Dictionary of Quotations, ed. Suzy Platt (Washington, DC: Library of Congress, 1989), p. 80.
42. Kent, Strategic Intelligence for American World Policy.
43. Willmoore Kendall, “The Function of Intelligence,” World Politics 1, no. 4 (1949): 542–552.
44. Ibid., p. 545. See Anthony Olcott, “Revisiting the Legacy: Sherman Kent, Willmoore Kendall, and George Pettee—Strategic Intelligence in the Digital Age,” Studies in Intelligence 53, no. 2 (2009): 21–32.
of recruited spies, which can be evaluated immediately, rather than what really
matters—the quality of their information, which may take years to assess.45
In analysis, the first stretch of the road not taken is the separation of “them”
from “us”—the sense that if politics stopped at the water’s edge going outward,
intelligence should stop at the water’s edge coming inward. The separation led
Kent—and still leads intelligence analysis—to draw a bright white line sepa-
rating foreign, other states, and what Kent called “our own domestic scene.”
As a result, in Kendall’s words, “intelligence reports . . . never, never take cog-
nizance of United States policies alternative to the one actually in effect, such
problems being “domestic” matters.”46
In his book, Kent endorsed Walter Lippmann’s view a quarter century ear-
lier that “every democrat feels in his bones that dangerous crises are incom-
patible with democracy, because the inertia of the masses is such that a very
few must act quickly.”47 Thus, in Lippmann’s words, “The only institutional
safeguard is to separate, as absolutely as it is possible to do so, the staff which
executes from the staff which investigates.”48 In those circumstances, the only
way to ensure what Kent called “impartial and objective analysis” was to cre-
ate, in Lippmann’s words, “intelligence officials” who would be “independent
both of the congressional committees dealing with that department and of the
secretary at the head of it” so that “they should not be entangled either in deci-
sion or in action.”49 For Kendall, by contrast, intelligence and policy together
needed to confront what Kendall called “the big job—the carving out of the
United States destiny in the world as a whole.”50
The rub with the Kent view, of course, is that his “impartial and objective
analysis” might also be irrelevant. With such distance between the two, intel-
ligence had little hope of knowing what policy officials knew or needed to
know, on what timetable and in what way. Kent himself recognized the prob-
lem in a 1948 letter to then-Director of Central Intelligence Admiral Roscoe
Hillenkoetter: “Since [ORE, the CIA’s Office of Reports and Estimates] has
no direct policy, planning, or operating consumer to service within its own
organization . . . it is likely to suffer . . . from a want of close, confidential, and
friendly guidance.” His solution was that
45. The exaggerated and in many ways dysfunctional role of clandestine collection is not unique to the United States but was widespread during the Cold War—for instance, in the German BND under the directorship of Reinhard Gehlen; see Jeffery T. Richelson, A Century of Spies: Intelligence in the Twentieth Century (New York: Oxford University Press, 1995).
46. Kendall, “The Function of Intelligence,” p. 549.
47. Walter Lippmann, Public Opinion (New York: Harcourt, Brace, 1922), p. 272.
48. Ibid., pp. 384, 386.
49. Kent, Strategic Intelligence for American World Policy, p. 100. Ibid., p. 61.
50. Kendall, “The Function of Intelligence,” p. 548.
ORE should be brought into closest and most direct contact with
consumers such as the National Security Council . . . having an ORE
officer represent CIA (or participate in CIA’s representation) at NSC
staff discussions would have two great benefits: (a) It would assure
ORE of knowing the precise nature of the consumer’s requirements;
and (b) it would enable ORE to convey to the consumer the precise
dimensions of its capabilities. It is to be noted that these two mat-
ters interlock: when the consumer knows ORE’s capabilities, he
may change the dimensions of this requirement (add to it, lessen
it, or reorient it), and, when ORE knows the precise dimensions of
the requirement, it may deploy its resources in such a fashion as to
enlarge its capabilities. So long as liaison between consumer and
ORE is maintained by someone not possessed of the highest profes-
sional competence in matters of substance and firsthand knowledge
of ORE’s resources, that liaison is almost certain to be inadequate for
the purposes of both ORE and the consumer.51
The idea was hardly new. Several years earlier, Assistant Secretary of State
Donald Russell had tried something very similar, a recommendation made by
the then-Budget Bureau (later OMB, the Office of Management and Budget),
which was a participant in postwar discussions of intelligence. For Russell,
“the principal intelligence operations of the Government should be organized
at the point where decision is made or action taken, i.e., at the departmental,
or lower, level and not within any single central agency.” If not, “the policy
recommendations of a research unit which is not organizationally integrated
with operations are very likely to be theoretical judgments with little basis in
reality.”52 Yet Russell’s initiative died. The creation of the CIA was precisely
that single central agency against which Russell had warned. The logic of the
CIA’s creation was Pearl Harbor and a check on departmental intelligence: a central agency would make sure it had all the information and so could serve as a counterweight to the temptation of departmental intelligence agencies to cut their assessments to suit the cloth of their operators.
The Russell idea was not revived despite a series of blue-ribbon panels,
mostly addressing efficiency but all making the point that much of the inef-
ficiency derived from the lack of feedback from policy officials. In 1966, the
CIA inspector general—in what is generally referred to as the Cunningham
Report—responded to criticisms that the community had failed to “adequately
51. Foreign Relations of the United States, 1945–1950: Emergence of the Intelligence Establishment
The second stretch of Kendall’s road not taken is based on analysis of Kent’s
underlying approach. Kent argued that intelligence was the same in peace as
in war, but for Kendall the two were very different. Wartime intelligence is
53. The Cunningham Report, presented in December 1966, had not been fully declassified. Lines from it are quoted in the Church Committee Report, “Foreign and Military Intelligence: Book 1: Final Report of the Select Committee to Study Governmental Operations with Respect to Intelligence Activities” (Washington: United States Senate, 1976), hereafter cited as Church Committee Report. In addition, the Cunningham Report was quoted and also summarized in “A Historical Review of Studies of the Intelligence Community for the Commission on the Organization of the Government for the Conduct of Foreign Policy” (document TS–206439–74) (1974).
54. “A Review of the Intelligence Community,” pp. 1, 9, prepared by the Office of Management and Budget under the direction of OMB Deputy Director James Schlesinger, March 10, 1971.
55. Church Committee Report, p. 277.
primarily tactical because the enemy is known and the objectives are clear. By
comparison, peace requires more strategic analysis because neither a nation’s
objectives nor those of adversaries can be taken as a given. The “big job” had
to be defined; it could not simply be assumed. Moreover, the emphasis on war-
time also underpins the distinction between us and them. That puts the assessing of “them” “in the hand of a distinct group of officials whose ‘research’ must stop short at the three-mile limit even when the threat they are following runs right across it, and yet which tells itself it is using the scientific method.”56
Perhaps the Cold War was enough like a hot one to make the conflation of
war and peace less dangerous than it is now. As Olcott and Kerbel put it: “The
kind of analytic support that Kent envisioned—analysts standing behind poli-
cymakers ‘with the book opened at the right page, to call their attention to
the stubborn fact they may neglect’ almost inevitably drives analytic support
toward tactical intelligence, rather than the strategic, but it worked well for the
IC’s [intelligence community’s] Cold War glory years, because the nature of
the Soviet Union and the means to face it were such that tactics all but merged
with strategy.”57 Notice that the great successes of Cold War intelligence were
puzzles—do Soviet missiles have multiple warheads, how accurate are they?
They weren’t exactly tactical, but they were technical and about capabilities.
Soviet strategy was more assumed than analyzed.
Kent’s emphasis on professionals, on both the policy and intelligence side,
and on “producers” and “consumers” was, for Kendall, a quintessentially
bureaucratic perspective. It made intelligence analysts “mere research assis-
tants to the George Kennans.” More tellingly, it excluded elected officials and
what Kendall thought most crucial, “communication to the politically respon-
sible laymen of the knowledge which . . . determines the ‘pictures’ they have in
their heads of the world to which their decisions relate.” For Kendall, Kent’s
approach reinforced a “crassly empirical conception of the research process
in the social sciences,” one organized by region and dominated by regional
specialists. In the terms of this chapter, it was more puzzle-solving than fram-
ing mysteries. The task was “somehow keeping one’s head above water in
a tidal wave of documents, whose factual content must be “processed,” and
the goal was prediction, “necessarily understood as a matter of projecting dis-
cernible empirical trends into an indefinite future.” Again, the approach may
have worked well enough during the Cold War. The Soviet Union was linear
and static enough that it could be analyzed tactically. It was also complicated
56. Kendall, “The Function of Intelligence,” p. 549.
57. Josh Kerbel and Anthony Olcott, “Synthesizing with Clients, Not Analyzing for Customers,” Studies in Intelligence 54, no. 4 (2010): 13. The Kent quote is from Strategic Intelligence for American World Policy, p. 182.
60. George Pettee, The Future of American Secret Intelligence (Washington, DC: Infantry Journal Press, 1946), p. 65.
61. Ibid., p. 39.
4
Intelligence Challenges
in Scientific Research
1. The main aspect of scientific under-performance that has resulted in reactions similar to intelligence postmortems has been research fraud, perceived as a major threat to public and collegial credibility. There is considerable research on scientific fraud and misconduct. See, for example, Stephan Lock, Fraud and Misconduct in Medical Research, ed. Frank Wells (London: BMJ Publishing Group, 1993); Marcel C. LaFollette, Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing (Berkeley: University of California Press, 1992); David J. Miller and Michael Hersen, Research Fraud in the Behavioral and Biomedical Sciences (New York: John Wiley, 1992); Sheldon Krimsky, Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? (Lanham: Rowman and Littlefield, 2003); Stephan Lock and Frank Wells, Fraud and Misconduct: In Biomedical Research, ed. Michael Farthing (London: BMJ Books, 2001).
2. J. Keegan, Intelligence in War: Knowledge of the Enemy from Napoleon to Al-Qaeda (Knopf Doubleday, 2003), p. 384.
3
The line “there’s no success like failure” is quote from Bob Dylan, “Love minus zero/no
limit,” on Subterranean Homesick Blues (1965). The verse nevertheless continues: “. . . and failure’s
no success at all.”
4. Several of the postmortems of the estimates on Iraqi WMD prior to the 2003 war were conducted from a political and judicial rather than an intelligence perspective. True enough, intelligence organizations in some cases prefer amateur investigations, which can be more susceptible to defensive deception and to the cover-up of irregularities.
5. Karl Raimund Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (London: Routledge and Kegan Paul, 1969), p. 256.
Controversies in Science
Given these self-correcting mechanisms, scientific controversies are not
uncommon and could, as both Popper and Medvedev underline, be regarded
as a necessity for the advancement of knowledge. Scientific consensus, there-
fore, is not necessarily a positive sign, especially not if consensus implies that
attempts to refute the dominating theories diminish or cease. In Thomas
S. Kuhn’s concept of scientific paradigms, however, the established “normal
science” develops precisely these conservative and self-confirming mecha-
nisms, where research is accepted if in line with the dominating paradigm, and
regarded with suspicion or ignored if not.12 The actual controversies, however,
do not appear in the ideal world of the philosophy of knowledge, but in the real
world, one complete with individuals, institutions, schools of thought, and, not least, an external social and political context.
A scientific controversy can be defined as publicly and persistently maintained
conflicting knowledge claims on issues that, in principle, can be determined by
10. The “Devil’s Advocate” institution can be seen as a substitute for self-correcting mechanisms within a closed and compartmentalized bureaucracy. For the “Devil’s Advocate” method, see in this context Robert Jervis, Perception and Misperception in International Politics (Princeton, NJ: Princeton University Press, 1976), p. 415.
11. Jervis, Why Intelligence Fails, p. 24.
12. Kuhn, The Structure of Scientific Revolutions.
scientific means and where both contenders thus claim scientific authority.13 This
reference to scientific authority helps explain the intensity of many of these contro-
versies. Some scientific controversies are solved, while others are not and remain
endemic in academia, sometimes manifesting themselves in the split of disciplines
or research fields. Other controversies are simply ignored or embedded in academic
diversity; disciplines with distinctly different methods and theoretical foundations
normally do not clash since they see each other as too distant to be relevant. Here,
no attempted refutation takes place; there is neither much by way of incentives nor a
common ground on which a meaningful dialogue could be based.
In other instances, where the contenders are relatively close to each other,
share the same, or at least some, theoretical and empirical foundation, and not
least advance interpretations with overlapping explanatory claims, controver-
sies cannot normally be ignored or embedded. This is especially the case when
these competing claims attract external attention or have more or less immediate
relevance in society. The rift between Scandinavian political science and the
emerging peace and conflict research from the 1960s to the 1990s did not stem from differences in the intellectual foundation but rather from different attitudes to values and from subsequent conflicting claims to explain and deal with identical or
similar problems, such as the arms race and nuclear deterrence. Some scientific
controversies thus resemble conflicts over intelligence puzzles, while others have
dimensions similar to mysteries, where the framing of the issue at stake, the key
variables, and how they might interact is essential.
However, solving scientific controversies is more complex than just determining whose knowledge claims are justified and whose are not. In controversies over more abstract issues in pure science, such resolution has often come as new data became available. One such example is the controversy over the continental drift
theory that lasted for almost half a century but remained an issue mainly for the
earth scientists, without immediate political, economic, or social aspects and thus
was largely confined to the scientific domain.14 When such aspects are present,
and where the scientific dispute is embedded in a technological, environmental,
or ideological context, solving tends to take other forms, including the interven-
tion of nonscientific actors and stakeholders, as in the case of nuclear energy.15
13. Ernan McMullin, “Scientific Controversy and Its Termination,” in Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology, ed. Hugo Tristram Engelhardt and Arthur Leonard Caplan (Cambridge: Cambridge University Press, 1987), p. 51.
14. Henry Frankel, “The Continental Drift Debate,” in Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology, ed. Hugo Tristram Engelhardt and Arthur Leonard Caplan (Cambridge: Cambridge University Press, 1987), p. 203.
15. On the nuclear energy controversy, see Spencer R. Weart, “Nuclear Fear: A History and an Experiment,” in Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology, ed. Hugo Tristram Engelhardt and Arthur Leonard Caplan (Cambridge: Cambridge
• Resolution means that the controversy is resolved in the sense that both
sides accept one of the contesting views, or a modified middle view. This
way to terminate a controversy is in line with the self-image of science as
open, critical, and free from nonscientific prejudice.
• Closure means that the controversy is terminated through the intervention
of an external force, one not necessarily bound by the actual merits of the
case. This closure can take the form of a court ruling, the withdrawal of
funds, or a reorganization of research institutions. Closing the controversy
does not solve it, and the disagreement could still exist under the surface.
• Abandonment appears in situations where the contested issue disap-
pears, either because the contenders lose interest, or grow old and die,
or because the issue becomes less relevant and is bypassed by other
discoveries or theories.
One of the most prolonged and intense scientific disputes, displaying both the devastating effects and the insufficiency of closure as a method of terminating a fundamental scientific controversy, is the rise and fall of T. D. Lysenko and the subsequent destruction and eventual rebuilding of Soviet biological research
subsequent destruction and eventual rebuilding of Soviet biological research
from the 1930s to the 1960s.17 The agronomist Lysenko and his followers
managed to establish hegemony for the biological quasi-science of “lysenko-
ism,” based on the theory that plants and livestock could be improved through
“re-education” and that acquired characters could be inherited. The case was
supported by research results presented by Lysenko and his followers, rhetori-
cal links to Marxist-Leninist dialectics, and not least the promise of deliveries in
the form of greatly improved productivity in Soviet agriculture. The hegemony
was achieved in a fierce and literally deadly scientific controversy, in which
Lysenko from the late 1930s onward denounced his opponents as representa-
tives of a “bourgeois” science, and as such, enemies of the people, an accusation
making them fair game for a witch-hunt by the NKVD (People’s Commissariat
for Internal Affairs), the Soviet internal security service at the time. The flawed theories of Lysenkoism, and the dead hand they laid over Soviet biological research, had a severe impact on Soviet agriculture over several decades.
University Press, 1987), and M. Bauer, Resistance to New Technology: Nuclear Power, Information
Technology and Biotechnology (Cambridge: Cambridge University Press, 1997).
16. McMullin, “Scientific Controversy and Its Termination.”
17. Medvedev, The Rise and Fall of T. D. Lysenko, and David Joravsky, The Lysenko Affair (Cambridge, MA: Harvard University Press, 1970).
Controversies in Intelligence
Controversies in intelligence appear in a setting that is different from science
in both an epistemological and an organizational sense. The scientific culture and the intelligence culture, while sharing some characteristics, are nevertheless, as discussed in the second chapter, distinctly different in a number of ways.
Both strive, on a formal and philosophical level, to uncover, explain, and assess
elements of what is assumed to be observable reality. Both have a common,
though perhaps not universally accepted, ethos of impartiality and honesty in
terms of facts and findings. Both detest fraud as unprofessional and destructive
for the credibility of all knowledge-producing activities. Intelligence, however,
keeps its fingers crossed behind its back when it comes to fraud intended as disinformation and deception. Intelligence, furthermore, uses means and produces outputs that would violate research ethics, even though some such practices are perfectly in line with a different intelligence ethic. The duty of intelligence is to protect competitive advantages in terms of technology, software, or human resources, in the process often producing results that are, to a lesser or greater degree, sensitive or damaging to individuals, states, or the international community.
The main difference is perhaps not the issues as such but rather the ways
these issues are dealt with, the functions of the intelligence machinery, its legal
basis, and not least its cultural setting—the strange and somewhat elusive
intelligence culture described by Michael Herman as a combination of a sense
of being different, of having an exclusive mission, and the multiplying factor
of secrecy.19 Many researchers also share a sense of being different (sometimes
even superior) as well as a sense of mission, not only in terms of scientific prog-
ress per se but also toward society and issues on a global scale, some of which
the researchers perceive as both more overarching and more important than
narrowly defined intelligence requirements. Secrecy and its consequences in
terms of closedness, compartmentalization, the need-to-know principle, protected sources, and the operational link to decision making thus stand out as
the main differences, producing a very different setting for the generation and
handling of intelligence controversies, compared to those within science.
A first observation is that because intelligence is a closed knowledge-producing entity, many or possibly most controversies that do emerge remain out of sight of the public domain and are in many cases contained within organizations and thereby invisible, not only while they still exist but also in retrospect. They
might leave only traces in the archives—for example, the increasing controversy in the US intelligence community over the NIEs on future developments in the Soviet Union from the late 1980s onward. But they might also
stay confined to arguments within intelligence services and remain undocu-
mented in intelligence assessments, only remembered by those drawn into
the dispute. One example is an undercurrent of critique in Swedish military
intelligence in the second half of the 1970s directed toward the dominating
concepts of détente and of the Soviet Union having renounced any aggressive
ambitions toward the West. As a competing hypothesis, this critique was nei-
ther strange nor unique. It merely mirrored a growing and increasingly loud
concern among traditionalists that things were not quite what they seemed to
be after the 1975 Helsinki Final Act, and that the West, while drinking innumerable toasts to peace and cooperation, was slowly but steadily marching into a
trap. This dissenting view, in contrast to the case in the United States, never
appeared in intelligence production; it remained a critical, undocumented
undercurrent confined to casual corridor meetings, coffee room discussions,
and confessions behind closed doors. This unresolved and almost invisible
controversy played an important role in the re-definition of Sweden’s geostra-
tegic position in the early 1980s in the second Cold War and under the influ-
ence of the so-called submarine crisis with the Soviet Union.20
20. For the submarine crisis, see Fredrik Bynander, The Rise and Fall of the Submarine Threat: Threat Politics and Submarine Intrusions in Sweden 1980–2002 (Uppsala: Acta Universitatis Upsaliensis, 2003).
Intelligence Challenges in Scientific Research 63
21. On Zeira’s and the Research Department’s assessment prior to the war, see Bar-Joseph, The Watchman Fell Asleep.
22. Richard K. Betts, Enemies of Intelligence: Knowledge and Power in American National Security (New York: Columbia University Press, 2007).
23. Bar-Joseph, The Watchman Fell Asleep, pp. 90–92.
24. Robert Jervis gives a small example of this in his account of the CIA’s assessments on Iran prior to the fall of the Shah in 1979. Older assessments from 1964, which Jervis suspected could be of relevance, had been removed from the archive and sent to “dead storage,” from which it would have taken weeks to retrieve them. Jervis, Why Intelligence Fails, p. 25.
25. One example of this is the widespread popular disbelief in official assessments of the risks from the fallout from the Chernobyl nuclear disaster in 1986. See Angela Liberatore, The Management of Uncertainty: Learning from Chernobyl (Amsterdam: Gordon and Breach, 1999).
26. Popper, Conjectures and Refutations, p. 257.
27. Ibid.
28. Ibid., pp. 257–258.
where the criteria of refutation often are difficult and sometimes impossible to
fulfill, at least in the time-frames in which intelligence normally operates. One of
the most discussed cases of 20th-century intelligence analysis—the ill-fated US
assessments of Soviet intentions on Cuba in 1962—illustrates this; the case has
been mentioned in Chapter 3.29
In August 1962, the newly appointed director of the Central Intelligence
Agency, John A. McCone, raised the issue that the Soviets might try to place
nuclear missiles on Cuba to offset the US superiority in terms of strategic weap-
ons, and he saw the deployment of Soviet air defense systems on the island as
verification. Because McCone was an amateur in intelligence matters, his hunch was
regarded as belonging below the line of demarcation, not an uncommon
eventuality in the interaction between specialists and non-specialists, whether
in intelligence or in science. However, given McCone’s position, his hunch
could not just be discarded out of hand, and the result was reflected in the
Special National Intelligence Estimate (SNIE)—in many respects a revealing
exercise in which a hypothesis from below the demarcation line was duly dealt
with by the experts convinced they were operating above the line. Based on
observations of previous Soviet behavior and on common-sense logic about
what the Soviets ought to regard as being in their own interest, the hunch
was put back where it belonged, only to reemerge a few weeks later, first as a
disturbing and illogical anomaly in the intelligence flow and then confirmed
beyond doubt as a prima facie termination of an intelligence controversy. 30
The problem in the Cuban case, as observed by many subsequent commen-
tators, was that the analysts mistook their provisional knowledge for actual
knowledge, and filled in the blank spots with assumptions of how things should
be and how the Soviets “must” think. While these statements appeared, and were
certainly perceived, as solid ground, they were in fact metaphysical and just as
much below the line as McCone’s ideas, although, as it turned out, in a more
dangerous mode, since they were ignorance perceived as verified knowledge. The Cuban case also
illustrates the practical implication of Popper’s point about perpetual motion.
The hypothesis that nuclear missiles were deployed or were in the process of
being deployed was in principle testable, although from an intelligence collec-
tion perspective the task would be difficult and time-consuming if not sharply
different from a vast number of other intelligence tasks regarding development,
29. SNIE 85–3–62, The Military Buildup in Cuba, September 19, 1962, in Mary S. McAuliffe, CIA Documents on the Cuban Missile Crisis, 1962 (Washington, DC: Central Intelligence Agency, 1992). For the background of the SNIE, see James G. Blight and David A. Welch, Intelligence and the Cuban Missile Crisis (London: Frank Cass, 1998).
30. On the employment of common sense and pattern analysis in the Cuban case, see Agrell, Essence of Assessment: Methods and Problems of Intelligence Analysis (Stockholm: National Defence College, Center for Asymmetric Threat Studies, 2012).
31. On the findings of the Iraq Survey Group and their implications for the evaluation of previous intelligence assessments, see Betts, Enemies of Intelligence.
32. Thomas F. Gieryn, “Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists,” American Sociological Review (1983): 781–795.
33. Rachel Carson, Silent Spring (Boston: Houghton Mifflin, 1962).
ability to collect data, the knowledge to interpret them, and not least the indis-
putable authority to underpin their conclusions as well as implicit or explicit
policy recommendations. But Silent Spring had one further aspect: that of
allowing readers to visualize an impending disaster. “Boundary work” in this
and other existential issues, like the arms race and nuclear energy, was about
the duty of the scientists to reach out beyond the mere academic limits.
In the 1980s, acid rain became an important environmental issue both in
North America and in Western Europe. 34 The issue was raised and named by
scientists in the 1970s, and the scientists were instrumental in framing it for
the general public and the policymakers. In the United States this resulted in
a national research program designed not only to monitor and assess acid pre-
cipitation but also to assist lawmakers with scientific expertise in hearings. 35 On
the political level, the issue of acid rain and subsequent limitations of emissions
from industry was controversial and deeply polarized. The scientists encoun-
tered not a single audience but two distinct sides in a policy process that was
highly adversarial. It was not just a question of speaking truth to power, but
what truth and to which power? Scientific objectivity was put under strain,
both by the expectations of policymakers and by the ambition of sci-
entists to—at least momentarily—leave the traditional scientific role and give
policy advice.36
The scientist’s role became problematic not only due to the pressure from
expectations and the form of interaction between science and policy in hear-
ings; the complexity of the issue, and uncertainty regarding the conclusions
on causes and effects, blurred the concept of knowledge. If the scientists them-
selves did not know for sure, then just what was the basis for their claim of
knowledge? And who was then the expert, after all? Another, and as it turned
out increasingly important, factor was that of urgency. Acid rain could have
serious environmental impacts, but once limited, these effects were assumed
to gradually decrease. In Central and Northern Europe, however, the acid rain
34. Due to the early discoveries of Svante Odén, professor of soil science, acidification was acknowledged as a major environmental threat in Sweden in the late 1960s, and emissions were drastically reduced through legislation. However, attempts to raise the issue internationally proved unsuccessful, and Swedish and Norwegian efforts came to nothing until the problem reached the policy agendas in the United States and West Germany. See Lars J. Lundgren, Acid Rain on the Agenda: A Picture of a Chain of Events in Sweden, 1966–1968 (Lund: Lund University Press, 1998).
35. Stephen Zehr, “Comparative Boundary Work: US Acid Rain and Global Climate Change Policy Deliberations,” Science and Public Policy 32, no. 6 (2005): 448.
36. Zehr (2005) and Lundgren (1998) point to the risk that scientists, in dealing with acid rain, became too much guided by what should be done, gravitating toward what could be done, with the associated risk of simplification and the resulting tendency toward monocausal thinking. Zehr, “Comparative Boundary Work,” pp. 448–450, and Lundgren, Acid Rain on the Agenda, p. 292.
issue became framed in a different way beginning in the early 1980s. It was
connected to an observed, widespread decline in forest vitality, especially in
Central Europe (Czechoslovakia, East Germany, Poland, and West Germany).
In West Germany alone, a survey conducted in 1983 revealed—or seemed to
reveal—that 34 percent of all forested land was affected. 37 Due to the long life
span of trees, observations could indicate disastrous effects of exposure that
had accumulated over a long time.
The discovery of forest death had a long pre-history, with known intervals
of decline in growth over the centuries. The link with manmade influences,
especially air pollution over remote distances, was not highlighted until the
1970s, when observations from several locations in Europe were connected
and a common cause was sought. The main breakthrough came when the
German forest ecologist Bernhard Ulrich put forward the hypothesis of a com-
bined cumulative effect of increased acidity in precipitation, leading to altered
soil chemistry causing the death of the tree’s fine roots. 38 This hypothesis con-
nected the acid rain debate and a seemingly new ecological (and potentially
economic) threat. The forest death concept was rapidly established in the
German public debate under the designation Waldsterben and came to acti-
vate earlier misgivings about the environmental impact of long-range air pol-
lution in other countries as well. Policy response was swift: in the early 1980s
the German government reversed its previous stance resisting international
reduction of sulfur emissions, and instead actively promoted a multi-national
agreement. 39
Not all scientists shared the alarmist perception of imminent catastrophic
forest damage; that was particularly so for those who had studied forest disease
and growth over time. Empirical forestry research, however, was by definition
a slow process, and publication in scientific journals suffered from a consider-
able time lag. And once forest death had been established in the media, there
was little interest in critical remarks or in data casting increasing uncertainty.40
The rapid, and almost total, impact of the initial scientific warnings on media,
public opinion, and policy placed the scientific community in a difficult situ-
ation in which there was a demand for rapid scientific results to initiate coun-
termeasures. In the Swedish debate, the concept of forest death had an impact
similar to that in Germany even though data on growth and damage was far
37. Jan Remröd, Forest in Danger (Djursholm: Swedish Forestry Association, 1985).
38. B. Ulrich, “Die Wälder in Mitteleuropa: Messergebnisse ihrer Umweltbelastung, Theorie ihrer Gefährdung, Prognose ihrer Entwicklung,” Allgemeine Forstzeitschrift 35, no. 44 (1980): 1198–1202.
39. Nils Roll-Hansen, Ideological Obstacles to Scientific Advice in Politics: The Case of “Forest Death” from “Acid Rain” (Oslo: Makt- og demokratiutredningen 1998–2003, 2002), p. 8.
40. Ibid., p. 9.
less alarming. Research programs were designed to study the complex ecologi-
cal interaction behind forest death, not to question the concept or cast doubt
on the mathematical models for calculating future decline. Scientists were also
subjected to a demand pull for results that could be used by the government
in international negotiations, and to the repelling effect of a polarized public
debate in which critics of the acid rain–forest death interpretation were seen as
acting on behalf of industry and against an alternative “green” policy.41
In the end, the refutation of the acid rain–forest death hypothesis turned
out to be a gradual and slow process. Several large-scale monitoring projects
were initiated in the early 1980s, and after a few years they started to produce
empirical results, published in reports and scientific journals. An assessment
of the state of the art in the mid-1980s, published by the Nordic Council of
Ministers, concluded that the so-called cumulative stress hypothesis could
not be proven and that there were several possible explanations for the decline
observed in the Central European forest. Acid rain was not ruled out as a cul-
prit, but the main recommendation was more research on the dynamics of the
forest ecosystem and the establishment of more accurate survey methods to
avoid subjective assessments of decline.42 Within the European Communities
a forest damage inventory system was set up in 1987, and annual reports were
published. The results did not fit the alarming assessments made in the first
half of the 1980s, based on data from more scattered locations. During the
first three years of the inventory, no clear change in vitality was found for the
majority of species, and no increasing percentage of damaged trees could be
recorded. Furthermore, due to insufficient data, it was not possible to estab-
lish a relationship between air pollution and forest damage in the European
Community. At some more limited sites, where data were available, no such
relationships were found.43
A first policy shift came as early as 1985, when the German government
decided to downgrade the threat and changed the vocabulary from forest death
(Waldsterben) to forest damage (Waldschaden).44 With growing uncertainty
about both the extent and duration of the damage, as well as about the causal
link to acid rain, the issue gradually lost public and policy attention, though
in some countries, like Switzerland and Sweden, the perception was more
long-lived and upheld by persistent scientific consensus. In the Swedish case,
large-scale countermeasures had been initiated to prevent the development of
widespread damage to the forests. The acid rain–forest death hypothesis finally
collapsed in 1992, when comprehensive surveys failed to reveal a continuing epi-
demic and instead could be seen as signaling an increased growth rate. In a
broader perspective, the alarming observations from the late 1970s appeared
as part of a normal fluctuation due to factors like weather and insect pests.45
41. Anna Tunlid, “Ett Konfliktfyllt Fält: Förtroende Och Trovärdighet Inom Miljöforskningen”
In retrospect, the acid rain–forest death alarm in Western Europe stands
out as a major scientific failure, as well as a media-driven over-reaction. It is
hard to regard the rise and fall of the forest death concept as something other
than a flawed scientific warning. True enough, the warning did work, or
worked too well; the trouble was that it turned out to be unfounded, a situation
similar to cases of over-warning in intelligence, associated with a subsequent
cry-wolf syndrome.46 In one sense, though, the self-correcting mechanisms of
the scientific community did work; once more reliable and representative data
were available, the hypothesis was undermined and finally collapsed. Nils Roll-
Hansen compares this with the Lysenko case, where this process took three
decades, while in the forest death case it took only one.47 The self-correction
mechanisms of science were not eliminated as they had been under Stalin but
were nevertheless slow and retrospective, to some extent simply due to the
methods of data collection over time. Moreover, there were other distorting
mechanisms at work. One was scientific consensus and research funding. As
forest death exploded on the public agenda, it became more difficult for dis-
senting voices to be heard, and with large-scale research funding motivated
by the need to counter the threat, dissent was definitely not rewarded—or
rewarding. Scientific criticism was dampened and a kind of bubble effect
emerged, in which scientists could feel that it was almost their duty to prove
the existence of forest death.48
45. Ibid., pp. 4–5.
46. One of the well-known cases in the intelligence literature is the successive US warnings of war with Japan, starting in summer 1940 and by December 1941 resulting in warning fatigue. See Wohlstetter, Pearl Harbor: Warning and Decision. The slow Israeli response to the military buildup prior to the October 1973 war is often explained with reference to the over-reaction in connection with a crisis in Lebanon in May–June, resulting in costly and, as it turned out, unnecessary mobilizations.
47. Roll-Hansen, Ideological Obstacles to Scientific Advice in Politics, p. 36.
48. Ibid., p. 22, quoting a study by the Swiss researcher Wolfgang Zierhofer on the forest death issue in Switzerland.
The rise and collapse of the forest death hypothesis in several respects
resembles a classical intelligence failure. The combination of uncertainty and
potentially alarming consequences compelled the scientific community to
gear up and to communicate preliminary observations at an early stage. Once
this was done, media logic and policy momentum took over. The scientists had
to deliver, and to continue to deliver, in a process that raised the revision threshold
as the price of being wrong became ever higher. Uncertainty and consensus proved
to be an unfortunate combination, as did the suspicion of being biased by
ideology or non-scientific partisan interests. Forest death never reached the
level of a paradigm, and the “normal science” conducted in the crash research
programs was predestined to sooner or later undermine the basic hypoth-
esis. Nevertheless, there were prolonged instances of paradigmatic defense
and rearguard actions by key researchers. One Swedish biologist maintained
that it was because of the warnings and subsequent countermeasures that the
environmental impact had turned out as favorable as it did, thereby probably
unknowingly walking in the footsteps of Sherman Kent dismissing the flawed
Cuban estimate on the grounds that events had proved that the CIA was right
and it was Khrushchev who had been in error! The Iraqi WMD case did not
allow for the same kind of last-ditch paradigmatic stand.
49. Richard S. J. Tol, “Regulating Knowledge Monopolies: The Case of the IPCC,” Climatic
theories were challenged by the hypothesis that climate change might not come
about through a gradual process over thousands of years, but rather through sud-
den shifts over hundreds of years.50 One of the issues raised at an early stage was
the potentially catastrophic impact on the ocean level of a melting, or rather dis-
integration, of the so-called West Antarctic ice shelf.
However, in the 1970s the scientists were still divided as to the direction of
a sudden climate change: would it be a shift toward global warming or a new
ice age? At this stage the issue was still confined to an internal scientific dis-
pute; it concerned highly theoretical matters, and any potential impact was far
into the future. Climate change was in this sense invisible, and as such a public
non-issue. The extraordinary novelty in this case, observes Spencer R. Weart,
was that such a thing became a political question at all.51 This process started
in the mid-1980s. Assisted by a heat wave in the United States in the summer of
1988, scientists and politicians managed to draw maximum public attention to a
congressional hearing, where the leading scientist James Hansen stated “with
99 percent confidence” that there was a long-term warming trend under way,
and that he strongly suspected the increased emission of carbon dioxide—the
greenhouse effect—to be behind this warming. 52
This transfer from the scientific to the public domain, and the politiciza-
tion in the original sense of the word (see Chapter 8), was due to several inter-
acting factors. A fundamental precondition was that the time was ripe, with
debates over nuclear winter and acid rain having paved the way. 53 And with the
nuclear arms race winding down, there was room for other global concerns.
Scientific and policy entrepreneurs also played a critical role in the process by
translating incomprehensible scientific data into visible threat perceptions and
media-friendly soundbites.
Climate research had from the very outset been multi- or rather
trans-disciplinary. The complex mechanisms could be analyzed neither
solely within any existing discipline nor through the combination of meth-
ods and theories from several existing disciplines. A new set of scientific
tools had to be created more or less from scratch, and this, the development
of climate modeling, became a central concern for IPCC, and the basis for
their successive assessments of warming, its implications, and subsequent
50. For the epistemology of the global warming theory, see Spencer R. Weart, The Discovery of Global Warming (Cambridge, MA: Harvard University Press, 2003).
51. Ibid., p. 153.
52. Ibid., pp. 154–157.
53. A similar link was observed between forest death and acid rain, where the former functioned as an eye-opener on the actual impact of acidification on the environment. Forest death was thus “helpful” for the acidification question, providing a strong argument for further measures. Lundgren, Acid Rain on the Agenda, p. 289.
policy recommendations.54 The IPCC itself was not a research institute but
rather has been described as a hybrid organization55 overlapping science
and policy, with both scientists and governmental representatives. It has no
scientific staff and instead relies on a vast network of volunteer researchers,
thereby operating in a fashion similar to a nongovernmental organization,
or NGO. IPCC’s work followed an intelligence-like production cycle, with
Assessment Reports as the major undertaking, published in 1990, 1995,
2001, 2007, and 2014. 56
The Intergovernmental Panel on Climate Change was created jointly
by the United Nations Environmental Program (UNEP) and the World
Meteorological Organisation (WMO), in many ways incorporating the
multi-lateral negotiation culture of the United Nations (UN) system and
other international organizations. IPCC was different mainly in the dominant
role played by the scientist network and the aim of producing assessments in a
joint process with scientists and government representatives engaged in the
review and approval process. 57 Any disputes on interpretations within the sys-
tem are therefore set to be solved mainly by closure through negotiations and
consensus. Failure to reach consensus would, as in the UN system, lead to a
non-result, in this case, the deletion of an assessment. 58
The creation of the IPCC resulted in a gradual termination of the wider sci-
entific controversy by closure. The Intergovernmental Panel was dominated
by scientists who not only came to agree on global warming as an increasingly
verified hypothesis but also on the causal link to human activities, and on a
common sense of duty to explain and communicate this to the policymakers
54. Intergovernmental Panel on Climate Change, Principles Governing IPCC Work (Vienna: IPCC, 1998).
55. Zehr, “Comparative Boundary Work,” pp. 454–455.
56. A preliminary report was published in September 2013: Intergovernmental Panel on Climate Change, Climate Change 2013—The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the IPCC (Preliminary Report) (Cambridge: Cambridge University Press, 2013).
57. There is a vast literature on the IPCC by participants and social scientists. Aspects of “boundary work” between science and policy are dealt with by Zehr (2005); a critical account of the dominating role of the IPCC is given in Tol (2011), while a review of the research is provided in Mike Hulme and Martin Mahony, “Climate Change: What Do We Know about the IPCC?” Progress in Physical Geography 34, no. 5 (2010): 705–718.
58. One example of this is the growing uncertainty over estimates of the stability of the West Antarctic ice sheet prior to the drafting of Assessment Report 2007 (AR4). The inability to reach consensus on how to interpret new data for short- and long-term predictions of the impact on sea level rise resulted in no prediction at all being delivered. See Jessica O’Reilly, Naomi Oreskes, and Michael Oppenheimer, “The Rapid Disintegration of Projections: The West Antarctic Ice Sheet and the Intergovernmental Panel on Climate Change,” Social Studies of Science 42, no. 5 (2012): 709–731.
59. Hulme and Mahony, “Climate Change,” p. 711.
60. Weart, The Discovery of Global Warming, pp. 166–167.
61. Intergovernmental Panel on Climate Change, “Principles Governing IPCC Work.”
62. Ibid.
63. Hulme and Mahony, “Climate Change,” pp. 710–712.
64. For the consensus building, see ibid.
65. InterAcademy Council, Climate Change Assessments: Review of the Processes and Procedures of the IPCC (Amsterdam: InterAcademy Council, 2010). In the Working Group I contribution to the fifth assessment report (September 2013) there was a special section on the treatment of uncertainties, and a new and more precise terminology for describing likelihood was introduced. IPCC (2013), Chapter 1, p. 18.
66. Intergovernmental Panel on Climate Change, Climate Change 2013—The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the IPCC (Preliminary Report) (New York: Cambridge University Press, 2013), pp. 17–18.
67. Ibid., p. 19.
68. For a summary of the sociological discourse on uncertainty and risk, see Albert J. Reiss Jr., “The Institutionalization of Risk,” in James F. Short Jr. and Lee Clarke, eds., Organizations, Uncertainties, and Risk (Boulder, CO: Westview Press, 1992).
69. The relationship between risk and threat in an intelligence and warning perspective is excellently discussed in Mark Phythian, “Policing Uncertainty: Intelligence, Security and Risk,” Intelligence and National Security 27, no. 2 (2012): 187–205.
70. Simon Shackley, “The Intergovernmental Panel on Climate Change: Consensual Knowledge and Global Politics,” Global Environmental Change 7 (1997): 77–79. See also Tol, “Regulating Knowledge Monopolies.”
This chapter scans other domains, mostly but not exclusively in the academy, for
suggestive ideas to better understand intelligence. In many respects, physicians
resemble intelligence analysts: both have limited information and are in the
position of conveying judgments that are ultimately subjective to policy officials
(or patients) who find it hard to think of probabilities, especially low probabili-
ties of grave consequences. Intelligence typically looks to the methods of social
and hard science for rigor, even if it almost never has the opportunity to conduct
experiments. But a wide range of other domains are also suggestive. Archaeology,
for instance, faces the challenge of very limited data, and journalism confronts
the quandary of deciding when an account is validated enough to publish. Even
consumer products are suggestive: should intelligence analyses come with a list of
ingredients (in the form of methods) and perhaps even a “use by” date?
When one of us (Treverton) used the case of the swine flu pandemic in
the United States that never was, in 1976, as the introduction to a week-long
executive program for analysts from the CIA and other US intelligence agen-
cies, those analysts immediately saw the doctors as their counterparts. They
recognized that the physicians, like them, often were in the position of turning
their expertise into essentially subjective judgments about outcomes, some of
them very bad but many of them also extremely improbable. Walter Laqueur
described this dimension of judgment in positive terms:
1. Walter Laqueur, “The Question of Judgment: Intelligence and Medicine,” Journal of Contemporary History 18, no. 4 (1983): 542, 545.
Exploring Other Domains 81
Medicine
In many respects, medicine does seem the closest parallel to intelligence. Not
only is the analyst-policy relationship much like the doctor-patient
one; both doctors and analysts are dealing with matters that can, literally, involve
life or death. Neither is often in the position of being able to run controlled
experiments or even pilot projects. Both are “unscientific” in that neither is much
2. Wilhelm Agrell, “When Everything Is Intelligence, Nothing Is Intelligence,” in Kent Center Occasional Papers (Washington, DC: Central Intelligence Agency, 2003), p. 3.
3. Ibid., p. 5.
4. Matthew Herbert, “The Intelligence Analyst as Epistemologist,” International Journal of Intelligence and CounterIntelligence 19, no. 4 (2006): 769.
5. Agrell, “When Everything Is Intelligence, Nothing Is Intelligence,” p. 6.
undergirded by usable theory. That is true for medicine despite long histories
of work by researchers in relevant disciplines: it may be that the human body
isn’t much simpler than the universe. In any case, notice how many important
drugs were discovered purely empirically, when a drug prescribed for one pur-
pose turned out to have positive effects for quite another malady. As a result, not
surprisingly, while research has looked at various professions in an attempt to
compare their characteristics to the field of intelligence analysis, the most care-
ful work has been done on medicine specifically, and on social science more gener-
ally. This section looks across that work, seeking to capture and summarize the
nature of the similarities among a few of the most commonly cited professions.
Not surprisingly given the overlaps, perhaps the most extensive compara-
tive research is that examining intelligence analysis in light of the field of
medicine. In 1983, Walter Laqueur observed quite simply that “the student
of intelligence will profit more from contemplating the principles of medical
diagnosis than immersing himself in any other field.”6 Stephen Marrin, who
has dedicated considerable effort to exploring other disciplines for insight into
intelligence analysis, perhaps more than any other single researcher, notes that
“intelligence analysis is similar to the medical profession in that it requires a
combination of skills acquired through practical experiences, and specialized
knowledge acquired through academic training.” 7
Moreover, Marrin notes, both fields “use approximations of the scientific
method—observation, hypothesis, experimentation, and conclusion— . . . to
organize and interpret information, and benefit from a variety of technologi-
cal tools to aid in discernment.”8 Each, nevertheless, “requires critical think-
ing and judgment to interpret the evidence that goes beyond what can be
quantified or automated.”9 His Improving Intelligence Analysis synthesizes the
research on the similarities of medicine and intelligence; in a 2005 paper on
the same subject area, he and Jonathon D. Clemente propose, and justify, that
“processes used by the medical profession to ensure diagnostic accuracy may
provide specific models for the Intelligence Community to improve accuracy
of analytic procedures.”10
6. Laqueur, “The Question of Judgment,” p. 543.
7. Stephen Marrin, “Intelligence Analysis: Turning a Craft into a Profession,” International
11
L aqueur, “The Question of Judgment,” p. 535.
12
Stephen Marrin, Improving Intelligence Analysis: Bridging the Gap between Scholarship and
Practice (London: Routledge, 2011), p. 108.
13
Ibid., p. 108.
14
Ibid., p. 113, and Matthew Mihelic, “Generalist Function in Intelligence Analysis”.
Proceeding from the 2005 International Conference on Intelligence Analysis, available at
https://www.e-education.psu.edu/.../Generalist%20...
15
Marrin, Improving Intelligence Analysis. Also, Stephen Marrin, “Best Analytic Practices
from Non-Intelligence Sectors,” ed. Analytics Institute (2011).
84 National Intelligence and Science
For the most part, physicians must fit the signs and symptoms together
into a hypothesis informed by theory . . . but in both cases ambiguous
information and circumstances require critical thinking and judg-
ment in order to come to conclusions regarding the accuracy of the
hypothesis and its implication for either national security interests or
the patient’s well-being.16
Yet Laqueur cautions: “No two individuals react alike and behave alike
under the abnormal conditions which are known as disease . . . this is the
fundamental difficulty in the education of a physician.”17 Likewise, the intelligence
analyst faces as many, if not more, of the same uncertainties. For this
reason, as Marrin suggests, “sometimes intuition is used rather than transparent
structured methods visible to outsiders. Assessment in both fields involves
the application of cognitive heuristics as convenient shortcuts, which helps
achieve accuracy in some cases but can hurt in others.”18
Deception would seem one area in which medicine would differ sharply
from intelligence. Laqueur does note that “the patient usually cooperates
with the medical expert.” Marrin, however, focuses on similarities behind the
apparent differences.
16. Marrin, Improving Intelligence Analysis, p. 109.
17. Laqueur, “The Question of Judgment,” p. 536.
18. Marrin, Improving Intelligence Analysis, p. 111.
19. Ibid., p. 120.
Exploring Other Domains 85
Social Science
Intelligence analysts bear strong similarity to social scientists when they create,
evaluate, and test hypotheses as part of a rigorous, structured approach to analy-
sis.22 Indeed, comparison between intelligence analysis and social science has a
long legacy. It was perhaps Sherman Kent who first noted the similarity, at least
in the American context of a formalized intelligence structure, in 1949: “most
of the subject matter of intelligence falls in the field of the social sciences.”23
Conceptual models used in the deductive process of intelligence analysis derive
from what Roger Hilsman described as “the way that the social sciences adapt
the scientific method to create and test hypotheses . . . to derive meaning from
the accumulated data.”24 Washington Platt echoed these sentiments, noting the
extent to which intelligence analysis could be improved by relying on the social
sciences; he suggested that “In nearly all problems confronting the intelligence
officer, some help, even if not necessarily a complete answer, is available from
those who have already wrestled with similar questions.”25

20. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
21. See David L. Sackett, “Evidence-Based Medicine,” Seminars in Perinatology 21, no. 1 (1997): 3–5.
22. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
23. Kent, Strategic Intelligence for American World Policy (Princeton, NJ: Princeton University Press, 1949), p. 175.
24. Marrin, Improving Intelligence Analysis, p. 24, originally from Roger Hilsman, “Intelligence and Policy-Making in Foreign Affairs,” World Politics 5 (1952): 1–45.
In Improving Intelligence Analysis, Marrin too addresses the trajectory of
social science and its applicability to intelligence analysis and argues that
“intelligence analysis is rooted in the methodologies and epistemologies of
the social sciences.”26 For Marrin, it was Klaus Knorr who best evaluated how
analysts use social science. Knorr’s view held that “the central task of the intel-
ligence officer, historian, and social scientist is to fit facts into meaningful pat-
terns, thus establishing their relevance and bearing to the problem at hand.”27
Moreover, Knorr’s account of the suitability of “social science methods of gath-
ering data, of deducing data from other data, and of establishing the validity of
data that are of particular value . . . in producing appropriate kinds of informa-
tion for intelligence” is apt.28
Through the years, a series of initiatives have sought to impel intelligence
analysis to rely more, and more systematically, on the best of social science
methods. For instance, in 1999, the special assistant for intelligence programs
at the US National Security Council, Mary McCarthy, suggested that aca-
demia develop an arsenal of social scientific methodologies—later termed
“structured analytic techniques”—so that analysts could “understand how
thinking can be done and how methodologies work.”29 Using formal methods
had the potential to enable an analytic audit trail, through which “analysts and
their colleagues can discover the sources of analytic mistakes when they occur
and evaluate new methods or new applications of old methods.”30
As both the study and practice of intelligence analysis incorporate more
explicit reference to social science methodology, greater emphasis is being
placed on structured analytical techniques as a mechanism for embedding
social science methodologies within analytic practices. Optimally, structured
analytic techniques provide analysts with simplified frameworks for doing
analysis, consisting of a checklist derived from best practices in application
of the scientific method to social issues.31 More recently, a major National
Academy of Sciences study in the United States concluded that the best way
to improve intelligence analysis would be to make more, and more systematic
and self-conscious, use of the methods of social science.32

25. W. Platt, Strategic Intelligence Production: Basic Principles (New York: F. A. Praeger, 1957), p. 133.
26. Marrin, Improving Intelligence Analysis, p. 28.
27. K. E. Knorr, Foreign Intelligence and the Social Sciences (Princeton, NJ: Center of International Studies, Woodrow Wilson School of Public and International Affairs, Princeton University, 1964), p. 23.
28. Ibid., p. 11.
29. Marrin, Improving Intelligence Analysis, p. 31.
30. Ibid.
31. Ibid., p. 32.
To be sure, social scientists differ, and so do their methods. When the intel-
ligence task is piecing together fragmentary information to construct pos-
sible explanations under given circumstances, intelligence analysis appears
very similar to the work of historians. 33 Historians describe and explain while
crafting a portrayal of the past. Yet they work from an archive as their source
material, which is synthesized to support a perspective. Historian John Lewis
Gaddis compares history to geology, paleontology, and astronomy because the
scientists are unable to directly evaluate their subject matter, just as is the case
in intelligence analysis. Moreover, in recognizing this, some observers have
asked, “What can we learn from how academic history is done, with respect
to the field of intelligence?” Can evaluating the quality of history (as a social
science discipline) suggest lessons for evaluating the quality of intelligence?34
What most obviously distinguishes intelligence from social science is, as
discussed in Chapter 2, the institutional context. Intelligence analysis as pro-
duced in governments has an active audience of policy officials looking for
help in making decisions. It also has access to information that was collected
secretly, including in ways that were illegal in the jurisdictions where the collection occurred.
And because intelligence is seeking to give one nation a leg up in international
competition, it is subject to deception from the other side in a way social sci-
ence is not. Those from whom social scientists seek information—in a survey
instrument, for example—may be reluctant or too easily swayed by fashion in
what they “ought to” think. They are probably less likely, however, to tell out-
right lies, at least beyond those questions that might reveal behavior that was
embarrassing or illegal—and for those questions the social scientist is fore-
warned of unreliability.
Yet purpose is perhaps an even greater point of difference between intelligence and social
science. Not only does intelligence have an interested audience, in principle at
least, but it also usually is required to be predictive. Explanation may be enough
for social science, and it is fundamental to intelligence analysis as well. But it is
usually not enough for the policy officials who need to act. Stéphane Lefebvre
offers a description of intelligence analysis that could, institutional context
aside, characterize social science as well.
32. Committee on Behavioral and Social Science Research to Improve Intelligence Analysis for National Security, National Research Council, Intelligence Analysis for Tomorrow: Advances from the Behavioral and Social Sciences (National Academies Press, 2011).
33. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
34. Ibid.
To be sure, the differences between intelligence analysis and social science are
considerable. Academic social scientists usually have considerable discretion in
the problems they address. Not so, intelligence analysts who, ideally, are meant
to be in the position of responding to requests from policy officials, though more
often in fact they are addressing issues their office thinks are on the agenda. In
either case, they lack the discretion of academics. So, too, do they usually lack
the leisure of time; often, the answer is desired yesterday. The last difference is
probably also the biggest, and it is one that runs through this book. Academic
social scientists have robust data sets much more often than intelligence analysts;
indeed, given discretion, the social scientists can choose topics on which they
know there are data. By contrast, intelligence analysts live in a world of spotty
data, usually collected opportunistically; it is likely to amount to a biased sample,
though biased in ways that analysts may not know, making it impossible to judge
how much the sample can be applied more generally to a broader population. As
a result, intelligence analysts generally cannot employ the methods of social sci-
ence in a robust way.40
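The problem of the opportunistic, biased sample can be made concrete with a small simulation (the population, the 30 percent rate, and the “accessible” subgroup are all hypothetical, purely for illustration): a probability sample recovers the population rate, while a sample gathered opportunistically from an unrepresentative subgroup does not, and nothing in the sample itself reveals the bias.

```python
import random

random.seed(0)

# Hypothetical population of 10,000, of whom 30% hold some attitude of interest.
population = [1] * 3000 + [0] * 7000
random.shuffle(population)

# Probability sample: every member of the population is equally likely to be drawn.
prob_sample = random.sample(population, 200)

# Opportunistic sample: drawn only from an accessible subgroup in which
# the attitude happens to be over-represented (75% rather than 30%).
accessible = [1] * 1500 + [0] * 500
opp_sample = random.sample(accessible, 200)

print(sum(prob_sample) / 200)  # close to the true rate of 0.30
print(sum(opp_sample) / 200)   # far above it, and nothing in the data flags the bias
```

The analyst holding only the second sample has no internal evidence of how unrepresentative it is, which is the predicament the paragraph above describes.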
In “The Intelligence Analyst as Social Scientist: A Comparison of Research
Methods,” Henry W. Prunckun Jr. offers an intriguing account of the differences
39. John D. Steinbruner, Paul C. Stern, and Jo L. Husbands, Climate and Social Stress: Implications for Security Analysis (National Academies Press, 2013).
40. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
46. Laqueur, “The Question of Judgment: Intelligence and Medicine,” pp. 542, 45.
47. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
48. For an assessment of the method in Iraq and Afghanistan, see Montgomery McFate and Steve Fondacaro, “Reflections on the Human Terrain System During the First 4 Years,” Prism 2, no. 4 (2011).
49. As reported in David Rohde, “Army Enlists Anthropology in War Zones,” New York Times (2007), http://www.nytimes.com/2007/10/05/world/asia/05afghan.html?incamp=article_popular_4&pagewanted=all&_r=0.
50. Montgomery McFate, “Anthropology and Counterinsurgency: The Strange Story of Their Curious Relationship,” Military Review 85, no. 2 (2005): (pages not marked).
51. H. Russell Bernard et al., “The Construction of Primary Data in Cultural Anthropology,” Current Anthropology 27, no. 4 (1986): 383.
52. Montgomery McFate and Steve Fondacaro, “Cultural Knowledge and Common Sense,” Anthropology Today 24, no. 1 (2008): 27.
53. For a case study of pre–World War I employment of a Western explorer, see David W. J. Gill, “Harry Pirie-Gordon: Historical Research, Journalism and Intelligence Gathering in Eastern Mediterranean (1908–18),” Intelligence and National Security 21, no. 6 (December 2006): 1045–1059.
54. McFate, “Anthropology and Counterinsurgency,” page not marked. This claim is originally sourced to Arthur Darling, “The Birth of Central Intelligence,” Sherman Kent Center for the Study of Intelligence, online at www.cia.gov/csi/kent_csi/docs/v10i2a01p_0001.htm.
55. Ibid., pages not marked. Note: McFate’s telling of this narrative and the relationship between anthropologists and the national security establishment received criticism: Jeffrey A. Sluka, “Curiouser and Curiouser: Montgomery McFate’s Strange Interpretation of the Relationship between Anthropology and Counterinsurgency,” PoLAR: Political and Legal Anthropology Review 33, no. s1 (2010): 99–115.
56. For a once famous example of this controversy, see The Rise and Fall of Project Camelot: Studies in the Relationship between Social Science and Practical Politics, ed. Irving Louis Horowitz (Cambridge, MA: MIT Press, 1967).
57. H. Russell Bernard, Handbook of Methods in Cultural Anthropology (Walnut Creek, CA: AltaMira Press, 1998), p. 313.
58. H. Russell Bernard, Research Methods in Anthropology: Qualitative and Quantitative Approaches (Walnut Creek, CA: AltaMira Press, 2006), p. ix.
59. Bernard et al., “The Construction of Primary Data in Cultural Anthropology,” Current Anthropology 27, no. 4 (1986): 382.
60. Bernard, Research Methods in Anthropology, p. 211.
61. Ibid., p. 213.
62. Bernard, Handbook of Methods in Cultural Anthropology, p. 268.
63. Bernard, Research Methods in Anthropology, pp. 354, 68, 69.
64. Bernard, Handbook of Methods in Cultural Anthropology, p. 314. In the field of intelligence, though, the collection method chosen is that which is most likely to reveal/acquire the desired information.
of the intelligence analyst and case officer. While these similarities exist,
intelligence and anthropology diverge in important ways; the two face dif-
ferent constraints. For instance, anthropologists are constrained by the
ethical guidelines of the profession. These scientists are meant to observe
and understand with as little disruption or influence on their subjects as
possible. By contrast, HUMINT case officers, in particular, are both taught
and encouraged to manipulate situations, people, and their environment to
acquire information; moreover, they often provide monetary compensation
to sources in exchange for cooperation. While aspects of intelligence might
be considered a subset of anthropology, intelligence employs the practice of
“tasking” a source to acquire specific information, which is not a character-
istic of anthropology. A similarity is that anthropological informants might
desire anonymity and confidentiality in the interaction with anthropologists,
just as human sources usually want to hide their association with intelligence
from public knowledge. 65
Moreover, traditional anthropological research is often guided by the
methodological stricture to acquire a representative sample of the target pop-
ulation and may seek to generalize findings across the population/subgroup.
Anthropologists thus often choose informants who seem representative or who
can provide a “grand tour” of the cultural landscape, but they can and will also
incorporate the perspective of an isolated group member who maintains the perch
of an outsider.66 By contrast, intelligence analysts typically work with infor-
mation that is from potentially a biased sample, frequently acquired oppor-
tunistically, with no way to determine the generalizability to the broader
population, and often with no particular interest in doing so.67 The collection
of intelligence is principally driven by targeting people or groups who can provide
privy information on the actions of specific individuals or events, and on
the likelihood of particular decisions or actions among those
individuals. Targeted intelligence sources might be considered key informants
in the anthropological sense, though the primary selection criteria are access
to information within a particular context.68 The degree of cooperation and
the information to which the source has access will determine what can actu-
ally be collected.
In sum, the defining criterion for a target for intelligence is that tar-
get individual’s access to specific information needed to fill a gap in
knowledge. With the increase in “privy-ness” of information comes less
certainty in evaluating its credibility, and thus greater risk when making
judgments based upon that information.69

65. Bernard, Research Methods in Anthropology, p. 315.
66. Bernard, Handbook of Methods in Cultural Anthropology, p. 313.
67. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
68. Bernard, Research Methods in Anthropology, p. 196.

As with other social sciences,
anthropological research often seeks representativeness (generalizabil-
ity), and might be doctrinally inclined to defer to community consensus
or formal tests of validity, whereas the world of intelligence is willing to
entertain the notion that there is such a thing as information so good it
can’t be verified. Because the nature of intelligence information desired
in many instances is specific with respect to terror organizations or
privy information, it is available from only a small population and an
even smaller sample who might willingly reveal information should they
have access to it. Sources are targeted specifically because the intelli-
gence organization believes the source has access to the desired infor-
mation. Anthropology can face the same dilemma; H. Russell Bernard
indicates that “In depth research on sensitive topics requires nonproba-
bility sampling.” 70 However, non-probability sampling leads to potential
bias and collects information from a sample that provides only a limited
perspective.
In this respect, both intelligence and anthropology struggle with the absence
of a worthy mechanism for verifying credibility, informant accuracy, and infor-
mant reliability.71 The nature of this problem is confounded for intelligence,
which sees a wide variety of motivations for the cooperation it receives from
sources. Anthropology faces its own variety of other problems.72 Knowledge tests
or falsification tests can help to assess the state of knowledge of the source or
respondent.73 For intelligence, the motivations of informants range from patrio-
tism and ethical views, to money, revenge, intent to deceive, or any number of
other reasons. The accuracy of information provided by different informants
will differ widely. Access is a critical difference for intelligence. So is deception.
69. An article in The New Yorker magazine regarding the prosecution of war crimes in Kosovo provides an excellent example of the type of investigation and analysis that an intelligence operator or analyst might confront; moreover, it also provides an excellent example of why quantitative techniques are often not useful in this context. Nicholas Schmidle, “Are Kosovo’s Leaders Guilty of War Crimes?” The New Yorker, May 6, 2013.
70. Bernard, Research Methods in Anthropology, pp. 186, 217–222. Also, a note on elicitation: Depending on the nature of the relationship (in an intelligence and anthropological context), and how well developed it might be, intelligence personnel and anthropologists might both use a variety of probes to get the source to provide more information or context about a particular matter of interest. Additionally, both will need to be capable of directing a conversation gracefully toward the issues of interest.
71. An element of this mindset seems to be present in Bob Drogin’s telling of the story with respect to the intelligence reporting regarding the presence of WMD in Iraq. Bob Drogin, Curveball: Spies, Lies, and the Con Man Who Caused a War (New York: Random House, 2007).
72. Bernard, Research Methods in Anthropology, p. 245.
73. Bernard, Handbook of Methods in Cultural Anthropology, p. 379.
74. Bernard, Research Methods in Anthropology, p. 200. In a separate article, Bernard et al. concluded from a survey of research regarding informant accuracy that “on average, about half of what informants report is probably incorrect in some way.” Also, H. Russell Bernard et al., “The Problem of Informant Accuracy: The Validity of Retrospective Data,” Annual Review of Anthropology 13 (1984): 495–517.
75. Irwin Deutscher, “Words and Deeds: Social Science and Social Policy,” Social Problems 13 (1965): 236.
76. Ibid., p. 235.
77. Ibid.
78. Ibid., p. 237.
Journalism
When intelligence analysts explain the facts, driving factors, and potential out-
comes of a fluid situation, a current intelligence briefing can appear similar to
what journalists do.79 When one of us (Treverton) was running a workshop for
US intelligence analysts, he invited a distinguished correspondent from the
Cable News Network (CNN) to speak. The correspondent began by reversing
the usual caveats: please protect me, he said, because you and I are in the same
business, and we run the same dangers. Indeed, in many parts of the world, they
think I am you, an intelligence operative. The production of current intelligence
and the corresponding skills necessary to skillfully evaluate the significance of
an event, and communicate it with appropriate accuracy, is in many respects
79. Marrin, “Best Analytic Practices from Non-Intelligence Sectors.”
80. Loch Johnson’s article nearly three decades ago addressed the CIA’s use and misuse of journalists. Along the way, though, it drives home how kindred intelligence and journalism are. See his “The CIA and the Media,” Intelligence and National Security 1, no. 2 (May 1986): 143–169.
81. Kent, Strategic Intelligence for American World Policy.
82. Michael Flynn, Fixing Intel: A Blueprint for Making Intelligence Relevant in Afghanistan (Washington: Center for a New American Security, 2010).
83. Robert Dover and Michael S. Goodman, eds., Spinning Intelligence: Why Intelligence Needs the Media, Why the Media Needs Intelligence (New York: Columbia University Press, 2009), p. 7.
the advent of modern media, used journalists as sources, informers, and covers but
also as a professional pool for recruitment, based on the fact that journalistic skills
are so similar to those intelligence usually seeks. Just how close the skills are is per-
haps best illustrated by one of the 20th century’s best-known intelligence officers
and spies, Kim Philby, recruited by British intelligence after proving his ability
as a journalist covering the Spanish Civil War from the Franco side (incidentally,
encouraged to do so by his NKVD controllers precisely to make him a suitable
candidate for recruitment by the Secret Intelligence Service, the SIS).84 As an
intelligence officer, and as an agent for the KGB, the Soviet intelligence agency, Philby
displayed a remarkable ability to socialize, build networks, and seize opportuni-
ties, in much the same way an experienced foreign correspondent would operate.
But Philby also commanded the journalistic core competence, that of reporting.
Some of his written reports to his Soviet masters have emerged from the former
KGB archive and constitute, if not the best, then at least the best-written and most
comprehensible description of the British wartime intelligence labyrinth.85
The “blood brothers” are, however, not always on cooperative or even speaking
terms. Intelligence tends to regard media output as incomplete, shallow, or simply
flawed. Media have, in their view, their own agenda, which is true—but so does
intelligence, as we discuss in Chapter 8. It is also true that media usually give pride
of place to rapid dissemination rather than in-depth analysis and long-term assessments.
Journalism is very much a real-time activity. Yet so, often, is current intelligence,
whether in an operational context in a conflict zone or in various forms of fusion
center. The main difference is of course the purpose, but core competences also
differ; in the media world, the premium is on the ability of journalists to sense and
produce a good story without undue delay, or if necessary in real time. In some
respects, news media are so efficient that intelligence simply uses them as an information
baseline, especially in rapidly evolving situations in a physical space where
media have superior coverage and can rely on crowdsourcing.86 The advent of the
24-hour news cycle has, according to Dover and Goodman, moved intelligence
agencies on both sides of the Atlantic closer to media outlets, not only in the sense
of updating, but also in terms of managing their own relations with the public.87
84. On the recruitment of Philby, see Nigel West and Oleg Tsarev, The Crown Jewels
on collaborative tools in the intelligence community looked at the New York Times
as representing a profession kindred to intelligence.88 The New York Times is not
a cutting-edge social media practitioner. No wikis,89 no micro-blogs, no fancy or
modern in-house tool development are apparent. It encourages collaboration but
relies on email. Yet all the people at the Times—the journalists, editors, and man-
agers—stress how fundamentally journalism has changed in the past decade or
so. Here are some of these ways, all of them suggestive for intelligence:
• Collaboration is the name of the game today. Most bylines are for multiple
authors, often from different locations.
• Speed has forced the Times and its competitors to push to the edge of their
comfort zone with regard to accuracy. The Times stressed that, perhaps
more than its smaller and less-established competitors, it often had to give
more weight to being right than being quick, and as a result often lost out to
other media organizations in the race to publish first. Nevertheless, it, too,
has seen an increase in its resort to the “correction” page.
• Editing on the fly is imperative in these circumstances.
• Evaluation, career development, and incentives have kept pace with the chang-
ing nature of the business. The paper takes pains to measure contribution,
not just solo bylines, and it values a blog posting that attracts readership as
much as a print article on the front page.
• Changing impact measures are changing value propositions. Appearing above
the fold in the print version of the paper is still regarded as the place of honor
among journalists; another enviable distinction is making the Times “10
most emailed” list.
• Content/version control is pervasive and central to the publication process.
The ability to track versions, attribute to multiple authors, and control versions
aimed at print or e-publication is critically important.
• Customer knowledge perhaps most distinguishes the Times from the intelli-
gence community, as Dover and Goodman suggest. It knows its readership
exquisitely, to an extent most intelligence agencies can only dream about.
Weather Forecasting
When one of us (Treverton) was running the US National Intelligence
Estimates (NIE) process, he took comfort from at least the urban legend that
predictions of continuity beat any weather forecaster: if it was fine, predict fine
weather until it rained, then predict rain until it turned fine. He mused, if those
forecasters, replete with data, theory, and history, can’t predict the weather,
how can they expect us to predict a complicated human event like the collapse
of the Soviet Union?90 And when intelligence analysts evaluate a developing
situation, contingent factors, and warning indications, they do resemble
weather forecasters.91 At least, the comparison “has been useful as a way to
frame failure defined as inaccuracy in both fields.”92

88. Gregory F. Treverton, New Tools for Collaboration: The Experience of the U.S. Intelligence
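The “continuity” rule in that legend is what forecasters call a persistence forecast. A minimal sketch, using a purely hypothetical run of weather, shows how such a baseline is scored:

```python
# Persistence ("continuity") forecast: predict that each day's weather
# will be the same as the previous day's. Hypothetical observations.
observations = ["fine", "fine", "rain", "rain", "fine", "fine", "fine", "rain"]

predictions = observations[:-1]   # yesterday's weather, used as today's forecast
actuals = observations[1:]        # what actually happened

hits = sum(p == a for p, a in zip(predictions, actuals))
accuracy = hits / len(actuals)
print(f"persistence accuracy: {accuracy:.2f}")
```

The baseline is right whenever the weather does not change, which is why, in stable conditions, it is surprisingly hard for a real forecaster to beat.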
90. See Gregory F. Treverton, “What Should We Expect of Our Spies,” Prospect, June 2011.
91. Marrin, Improving Intelligence Analysis.
92. Ibid., p. 103.
93. Ibid.
94. Ibid., p. 104.
Archaeology
Archaeology offers an intriguing analogy because it focuses on collection
and its relation to analysis, while most of the other analogies speak more to
analytic methods, though sometimes in the presence of fragmentary data or
deception. It is a nice antidote to the general proposition that intelligence
failures are analytic, a failure in mindset or in “connecting the dots,” in that
awful Jominian phrase. Yet “intelligence failures are indeed a problem of col-
lection, because collectors are seldom able to produce the substantial quanti-
ties of relevant, reliable data necessary to reduce uncertainty.”96 Archaeology
is most akin to intelligence as puzzle solving. For it, the puzzle is “what was that
civilization (or tribe or village) like?” The information it has is inevitably frag-
mentary, literally the shards of implements or decoration or other objects that
have survived the centuries, sometimes the millennia. Without some careful
process for dealing with these remains, the archaeologist will be tempted to
take creative leaps from shards to conclusions about the civilization, leaps that
may be right but are likely to be wrong.
David Clark outlines that process as consisting of four steps:
1. What is the range of activity patterns and social and environmental processes
that once existed—that is, what the archaeologist seeks to understand?
2. What sample and traces of these would have been deposited at the time?
3. What sample of that sample actually would have survived to be recovered?
96. See Matthew C. Pritchard and Michael S. Goodman, “Intelligence: The Loss of Innocence,”
101. Iraq had pursued multiple uranium enrichment technologies, including a centrifuge program and the outdated electromagnetic isotope separation (EMIS) process. See National Intelligence Council, “How the Intelligence Community Arrived at the Judgments in the October 2002 NIE on Iraq’s WMD Programs,” p. 7.
Coda: Neuroscience
This chapter has scanned across other intellectual domains for comparisons
that will be suggestive for intelligence. In one area of science, however, devel-
opments might directly affect how intelligence does its work by enhancing
capacity. That area is neuroscience. Since at least the Cuban Missile Crisis
and the recognition of the problem of “groupthink,” systematic attempts have
been made to incorporate the lessons of psychology and social psychology into
each step of the intelligence process.102 Scholars have identified psychological
heuristics and biases that people, including analysts, use to assess their envi-
ronment and to estimate the probability that certain events might occur—
tools that often lead to incorrect estimates.103 Scholars have also used insights
from social psychology to design processes—such as red teaming, alternative
analyses, and scenario building—in order to “break out” of groupthink, con-
ventional wisdoms, or analytic mindsets. Theorists and practitioners of intelli-
gence have shown a marked sensitivity to the insights of the cognitive sciences,
even if the systems they design are, like all human institutions, still flawed.
However, recent advances in cognitive and social neuroscience have been
slower to filter into these discussions of the psychology of intelligence.104
A wide-spectrum search of the intelligence literature found no articles referencing, except in passing, contemporary neuroscience research. A search of the neuroscience and neuropsychology literatures yielded virtually nothing about intelligence gathering, analysis, or consumption, or about national security.
One of the few to look at these issues is bioethicist Jonathan Moreno at the
University of Pennsylvania.105 He summarizes some of the developments
in neuroscience being explored for national security purposes, and he pro-
poses ethical guidelines for using advanced neuroscience and neuroscience
102
Irving L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes
(Boston: Houghton Mifflin, 1972). The classic study applying the insights from psychology to
intelligence is Richards J. Heuer, “Psychology of Intelligence Analysis” (Washington, DC: Center
for the Study of Intelligence, Central Intelligence Agency, 1999).
103
Rose McDermott, "Experimental Intelligence," Intelligence and National Security 26, no. 1
(2011): 82–98.
104
One admirable effort to update Richards Heuer came to naught due to the CIA's internal sensitivities. Led by a psychologist, Richard Rees, the effort produced the Handbook of the Psychology of Analysis. Although the book was cleared as unclassified, the CIA decided not to publish it, because "official" publication by the CIA was thought to sanction comments that were critical, albeit intended constructively. The result was the oddity of an unclassified book available only on the CIA's highly classified computer system.
105
See Michael N. Tennison and Jonathan D. Moreno, “Neuroscience, Ethics, and National
Security: The State of the Art,” PLoS Biology 10, no. 3 (2012).
108 National Intelligence and Science
serum” warrants caution. As one study nicely put it: “the urban myth of the
drugged detainee imparting pristine nuggets of intelligence is firmly rooted
and hard to dispel."107 The historical track record of abuse and gross violations of human rights in attempts to refine methods of extracting information remains a moral liability for intelligence.
Human Rights Hazard,” American Journal of Law & Medicine 33 (2007): 483–500.
6
Common Core Issues
This chapter seeks to shed light on the future of intelligence by looking at com-
mon issues that arise both for it and for the practice of medicine, as well as for
policy analysis as it is being developed in social science departments and pub-
lic policy schools of academia. The most visible common issue for both intelli-
gence analysis and policy analysis is uncertainty. As major policy issues become
bigger, messier, and more interconnected, uncertainty about policy approaches
increases. That uncertainty is endemic to intelligence analysis. In that sense,
other realms are becoming more like intelligence, or perhaps uncertainty is
growing in all of them apace. What uncertainty, risk, and probability mean in
the two areas need to be decoded. For the second core issue in common—what
the military calls “blue,” our own actions—the timing goes in the other direc-
tion. Almost by definition, if policy science is addressing a big issue, “we,” those
who will be affected by it, will be a major consideration in the analysis; in climate change, for instance, the United States is among the largest per capita contributors of carbon. For intelligence, however, the emphasis on blue
is newer. Especially for the United States, “intelligence” meant “foreign intel-
ligence” and stopped at the water’s edge, as Chapter 3 discussed. Now, though,
terrorists and other transnational threats are asymmetric, seeking “our” vulner-
abilities, and thus those threats cannot be understood without analysis of us,
turning intelligence assessment into “net assessment” very much akin to what
policy science does for big policy issues. A third common core issue is new for
both, though much more awkward for intelligence: the increasing transparency
of a world where our cellphones locate us wherever we are and our contrails
across the web reveal details about us down to our taste in good cigars or bad
pictures. Policy science in many places has lived in a world of required open
meetings, but the change in degree is notable when, for example, social media
allow constituents to share views (and organize protests) instantaneously. For
intelligence, by nature closed and passive, it is a sea change.
Common Core Issues 111
1
This account is from Richard E. Neustadt and Harvey V. Fineberg, "The Swine Flu
Affair: Decision-Making on a Slippery Disease,” in Kennedy School of Government Case C14–80–
316, Harvard University, Cambridge, MA.
2
As quoted in "Swine Flu (E): Summary Case," Kennedy School of Government Case C14–
16, the president reluctantly agreed to end the program. Apart from the Fort Dix recruit, no one had died of swine flu.4
Whether swine flu would become a pandemic was uncertain, but that uncer-
tainty was relatively straightforward. In retrospect, a sensible strategy might
have hedged, waiting to improve the probability estimates, perhaps stockpil-
ing vaccine but only beginning vaccinations when signs of spread appeared.
Had it become a pandemic, it would have become a global concern, but it was
relatively bounded. “Jet spread” of disease was possible—and even the 1918
flu had become worldwide—but was not yet ubiquitous and nearly instanta-
neous. The severe acute respiratory syndrome (SARS), another virus, shows
the contrast 30 years later, in complexity and global spread of both disease and
possible responses.
In the near-pandemic that occurred between November 2002 and July 2003, there were 8,096 known infected cases and 774 confirmed human deaths. This resulted in an overall case-fatality rate of 9.6 percent, which leaped to 50 percent for those over 65. By comparison, the case-fatality rate for influenza is usually less than 1 percent and primarily among the elderly, but it can rise many-fold in locally severe epidemics of new strains. The 2009 H1N1 virus, which killed about 18,000 people worldwide, had a case-fatality rate of no more than 0.03 percent in the richer countries.5
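The case-fatality figures here follow directly from the cited counts; a minimal sketch of the arithmetic, using only the numbers given in the text:

```python
def case_fatality_rate(deaths: int, cases: int) -> float:
    """Case-fatality rate: confirmed deaths as a percentage of known cases."""
    return 100.0 * deaths / cases

# SARS, November 2002 - July 2003: 774 deaths among 8,096 known cases.
sars_cfr = case_fatality_rate(774, 8096)
print(f"SARS overall case-fatality rate: {sars_cfr:.1f}%")  # -> 9.6%
```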
SARS spread from Guangdong province in southern China, and within
a matter of weeks in 2002 and early 2003 had reached 37 countries around
the world. On April 16, the UN World Health Organization (WHO) issued
a press release stating that a coronavirus identified by a number of laborato-
ries was the official cause of SARS; the virus probably had originated with
bats and spread to humans either directly or through animals held in Chinese
markets. Once the virus was identified, every health professional became, in
effect, an intelligence collector on the disease. WHO set up a network dealing
with SARS, consisting of a secure website to study chest x-rays and to conduct
teleconferences.
The first clue of the outbreak seems to have appeared on November 27,
2002, when a Canadian health intelligence network, part of the WHO Global
Outbreak and Alert Response Network (GOARN), picked up reports of a “flu
outbreak” in China through Internet media monitoring and analysis and sent
them to the WHO. WHO requested information from Chinese authorities on
4
It turned out that when the recruit collapsed, his sergeant had given him mouth-to-mouth
resuscitation without contracting swine flu!
5
The fatality number is from the World Health Organization; the case-fatality numbers are from studies in Britain and the United States. The WHO acknowledges that many more people may have died from the flu, mostly in Africa, isolated from both treatment and accounting.
December 5 and 11. It was not until early April that SARS began to receive
much greater prominence in the official media, perhaps as the result of the
death of an American who had apparently contracted the disease in China
in February, began showing symptoms on the flight to Singapore, and died
when the plane diverted to Hanoi. In April, however, accusations emerged
that China had undercounted cases in Beijing military hospitals, and, under
intense pressure, China allowed international officials to investigate the situa-
tion there, which revealed the problems of an aging health care system, includ-
ing increasing decentralization, red tape, and weak communication. WHO
issued a global alert on March 12, followed by one from the US Centers for
Disease Control and Prevention.
Singapore and Hong Kong closed schools, and a number of countries
instituted quarantine to control the disease. Over 1,200 people were under
quarantine in Hong Kong, while in Singapore and Taiwan, 977 and 1,147 people were
quarantined, respectively. Canada also put thousands of people under quar-
antine. In late March, WHO recommended screening airline passengers for
SARS symptoms. Singapore took perhaps the most extreme measures, first
designating a single hospital for all confirmed and probable cases of the dis-
ease, then requiring hospital staff members to submit personal temperature
checks twice a day. Visiting at the hospital was restricted, and a phone line
was dedicated to report SARS cases. In late March, Singapore invoked its
Infectious Diseases Act, allowing for a 10-day mandatory home quarantine
to be imposed on all who might have come in contact with SARS patients.
Discharged SARS patients were under 21 days of home quarantine, with tele-
phone surveillance requiring them to answer the phone when randomly called.
On April 23, WHO advised against all but essential travel to Toronto, not-
ing that a small number of persons from Toronto appeared to have “exported”
SARS to other parts of the world. Toronto public health officials noted that
only one of the supposedly exported cases had been diagnosed as SARS
and that new SARS cases in Toronto were originating only in hospitals.
Nevertheless, the WHO advisory was immediately followed by similar adviso-
ries from several governments to their citizens, and Toronto suffered losses of
tourism. Also on April 23, Singapore instituted thermal imaging screens on all
passengers departing from its airport and also stepped up screening at points
of entry from Malaysia. Taiwan’s international airport also installed SARS
checkpoints with an infrared screening system similar to the one in Singapore.
The last reported SARS case in humans was in June 2003, though the virus may
remain in its animal hosts.
It took more than three months from first information about the disease
to a global alert. It was then another month until the virus was clearly identi-
fied. The time delay may have had something to do with China’s dissembling
about the extent of the disease, but it also demonstrates that the cause of any
outbreak—whether natural or terrorist—may take some time to identify.
Once the virus was identified, however, virtually every health care profes-
sional in the world became a potential collector of intelligence on the disease,
illustrating the other side of the complexity coin; this is suggestive for intel-
ligence as well. The global network was a form of “crowd-sourcing,” though
the label was not used at the time. The worldwide web permits people or orga-
nizations—in principle including intelligence agencies—to enlist the help of
strangers in solving problems, an issue this chapter turns to in the conclusion.
Crowd-sourcing requires openness, which is the antithesis of intelligence. It
also requires incentives. In the case of SARS, the public-spiritedness of health
professionals was sufficient to induce them to participate, but that is not the
case for other attempts at crowd-sourcing.
6
This phrase and much of the following owe to Robert Klitgaard, "Policy Analysis and
Evaluation 2.0,” unpublished paper, 2012.
7
R. M. Solow, "Forty Years of Social Policy and Policy Research," Inaugural Robert Solow
8
See Mark Phythian, “Policing Uncertainty: Intelligence, Security and Risk,” Intelligence and
National Security 27, no. 2 (2012): 198 ff.
9
Quoted in Klitgaard, “Policy Analysis and Evaluation 2.0.”
well defined, now it is not; often it’s poorly understood. If objectives and alter-
natives used to be taken as givens, now the former are murky and the latter per-
haps incomplete. If relevant data were available and valid, and the process of data
generation was understood, none of those is any longer the case. If the context
was reduced to a standard, now it is highly variable and perhaps complex. If the
decision maker was unitary, now it is multiple, usually with no single or decisive
“decisions.” If the relationship between policy analysts and policymakers was
arm’s length, emphasizing objectivity, now it emphasizes problem clarification
and learning in both directions. Turn first to the implications of “us.” Part of this
cluster is newer for intelligence than for policy science—the greater effect of “our”
actions on the problem. The other part is new for both—the greater number of
stakeholders, if not decision makers, and the changed relations between the two.
10
In the classic studies at the Hawthorne Works of Western Electric in 1924–1932, experiments sought to see if changes in the workplace, such as more or less light, would lead to more worker productivity. Almost any change improved productivity, but only during the course of the experiment. The conclusion was that the attention accompanying the experiment itself had motivated workers. See Henry A. Landsberger, Hawthorne Revisited: Management and the Worker, Its Critics, and Developments in Human Relations in Industry (Ithaca: New York State School of Industrial and Labor Relations, 1958). For economics, the classic study is R. E. Lucas, Jr., "Econometric Policy Evaluation: A Critique," in K. Brunner and A. H. Meltzer, eds., The Phillips Curve and Labor Markets, Carnegie-Rochester Conference Series on Public Policy, Vol. 1 (Amsterdam: North Holland), pp. 19–46.
11
For a survey, now several years old, see Gregory F. Treverton and Lisa Klautzer, Frameworks
for Domestic Intelligence and Threat Assessment: International Comparisons (Stockholm: Center
for Asymmetric Threat Studies, Swedish National Defence College, 2010).
12
This and the next paragraph draw on Todd Masse, Siobhan O’Neil, and John Rollins, Fusion
Centers: Issues and Options for Congress (Washington, DC: Congressional Research Service,
2007).
13
Ibid., p. 29.
14
Ibid., p. 26.
me?”15 Because of the huge number of information systems and the result-
ing duplication, analysts are inundated with floods of information of variable
quality. A review of the Los Angeles fusion center, called the Joint Regional
Intelligence Center, said this: “an overbroad intelligence report distributed by
the LA JRIC . . . offers no perceived value for police agencies and is not fre-
quently used to deploy police resources. Typically, the LA JRIC collects and
distributes open source material or newsworthy articles in an effort to inform
their clients. As a result local independent chiefs feel they cannot use the intel-
ligence to increase operational capacity or deploy police resources to combat
crime or terrorism.”16
Further, although much rhetoric is expended on the importance of a cycli-
cal information flow between the fusion centers and the federal intelligence
community, the information cycle tends to be rather one-directional. From the
perspective of the fusion centers themselves, the value of information becomes
opaque once it is sent to the federal intelligence agencies.17 This lack of a feed-
back loop creates both resentment and inefficiency in the relationship between
federal entities and the fusion centers. Things have improved from the days
when local authorities reported that if they called the FBI, they never received
a call back, but there is a long way to go. In the counterterrorism realm, in particular, it is not clear in many jurisdictions how or if tips from state and local authorities that are not deemed to rise to the level of Joint Terrorism Task Force (JTTF) cases get registered in the system.
Moreover, there has been lingering concern about issues of civil liberties
and fusion centers, mainly because of the public perception that they lack
transparency in their operations. A range of reports, including one published
by the American Civil Liberties Union (ACLU) in December 2007, brought
up concerns that the expansion of intelligence gathering and sharing in these
centers threatens the privacy of American citizens.18 Some of the concerns
raised by the ACLU stem from the wide range of participants involved in the
fusion centers, including the private sector and the military. The report argues
that breaking down the many barriers between the public and private sectors,
15
See Daniel M. Stewart and Robert G. Morris, “A New Era of Policing? An Examination of
Texas Police Chiefs’ Perceptions of Homeland Security,” Criminal Justice Policy Review 20, no. 3
(2009): 290–309.
16
Phillip L. Sanchez, Increasing Information Sharing among Independent Police Departments
(Monterrey, CA: Naval Postgraduate School, 2009), pp. 3–4.
17
This discussion draws on Henry H. Willis, Genevieve Lester, and Gregory F. Treverton,
“Information Sharing for Infrastructure Risk Management: Barriers and Solutions,” Intelligence
and National Security 24, no. 3 (2009): 339–365.
18
See Michael German and Jay Stanley, What’s Wrong with Fusion Centers? (New York:
American Civil Liberties Union, 2007).
intelligence and law enforcement, and military and civil institutions could lead to abuses of privacy and other civil liberties, particularly if a strong and clear
legal framework isn’t established to guide operations within the centers.
The fusion centers are likely to take a variety of paths in the future. Virtually
all are moving away from a singular focus on terrorism to an all-crimes, or even
all-hazards, approach—probably a happy spillover from the preoccupation
with terrorism to broader policing. In some places, like the state of Iowa, where
terrorism is a minor threat, the center explicitly is in the business of driving
intelligence-led policing. Other centers will simply fade away if and as federal
support diminishes and contributing public safety agencies decide their person-
nel are more valuable “at home” rather than seconded to a fusion center. A 2012
congressional assessment was too narrow—limiting itself to the centers’ con-
tribution to the nation’s counterterrorism effort—but still its criticism was not
off the mark. It found “that the fusion centers often produced irrelevant, use-
less or inappropriate intelligence reporting to DHS [Department of Homeland
Security], and many produced no intelligence reporting whatsoever.”19 Looking
at a year’s worth of reports, nearly 700, the investigation “could identify no
reporting which uncovered a terrorist threat, nor could it identify a contribution
such fusion center reporting made to disrupt an active terrorist plot.”
To stretch thinking about dealing with a mushrooming “us,” Table 6.1 com-
pares sharing arrangements in three different policy realms—terrorism, infec-
tious disease, and natural disasters.
For intelligence, the difference in “us”—be they consumers or partici-
pants—is qualitative; categories of people that would never have been thought
of as users of intelligence now need it. For policy analysis 2.0, in Klitgaard’s
phrase, the change is less stark but still notable: more stakeholders care about a
set of decisions and more may be organized to watch the decision process and
to try to influence it. For policy analysis, the expanded numbers merge with
the third common core concern, transparency. If it always was a fiction that
an authoritative decision maker could make a choice based on the study, it is
more and more of a fiction. Processes for reaching decisions need to be more
and more participatory and transparent even at the cost of ponderousness and
delay. In public sector decision making, process is as important as outcome,
perhaps more so.
Start with a lament common to both intelligence and policy analysis: deci-
sion makers don’t heed us. Conclusions about policy analysis abound that,
with a few changed words, could come from intelligence practitioners:
19
Permanent Subcommittee on Investigations, Federal Support for and Involvement in State and Local Fusion Centers (Washington, DC, 2012), p. 2.
If the laments are similar, so are the frustrations in trying to learn lessons
in order to do better. One experiment in policy analysis took advice from
the famous statistician Frederick Mosteller: “People can never agree on what
benefits and costs are. But they can and do agree on specific examples of
outrageous success and outrageous failure. Find these among your projects.
Study them. Compare them. Share your results, and learn some more.” 21
20
C. H. Weiss, "The Many Meanings of Research Utilization," Public Administration Review
39, no. 5 (1979): 426.
21
As quoted in Klitgaard, "Policy Analysis and Evaluation 2.0," p. 18.
22
Ibid., pp. 21–22.
Klitgaard’s footnote at the end of his first paragraph about relationships and
independence is also worth spelling out. It comes from Michael Quinn Patton:
23
Ibid., p. 23.
24
M. Q. Patton, "Use as a Criterion of Quality in Evaluation," in Visions of Quality: How
Evaluators Define, Understand and Represent Program Quality: Advances in Program Evaluation,
ed. A. Benson, C. Lloyd, and D. M. Hinn (Kidlington, UK: Elsevier Science, 2001), p. 163.
to be partners in Kendall’s “big job—the carving out of the United States des-
tiny in the world as a whole.”
policy science. Again, the change is one of degree for policy science, but it
is virtually qualitative for intelligence. If the “us” has always been central to
policy, so, too, advancing technologies over the centuries have expanded the
number of people who could be engaged: notice the sweep from Gutenberg’s
press in the 1400s to the cassette recordings of Ayatollah Khomeini that were
such a part of the Iranian revolution in the 1970s. In that sense, social media
are only the latest in a long series of new technologies, and whether they will
make a qualitative change in the ease of organizing for policy changes remains
to be seen. It is clearer, though, that social media symbolize a wave that will
transform the world in which intelligence operates.
Social media, part of the larger development of the worldwide web known
as Web 2.0, are web-based services that facilitate the formation of communities
of people with mutual or complementary interests, and provide them a means
of sharing information with each other. They can put together people who do
not know each other, and thus can promote the formation of communities of
interest. In fact, the category of "social media" includes a wide range, depending on how interactive they are and how much they are designed to connect people who may not know each other. For instance, email is interactive but
normally connects people who already know each other. By contrast, websites
certainly can connect people who don’t know each other but are not typically
very interactive. Blogs vary in how interactive they are and how open to new and
unknown participants. Even today’s hottest social media, Twitter and Facebook,
are very different. The former is entirely open to new connections and, in prin-
ciple, widely interactive; sheer volume, however, may limit how interactive it is in
fact. By contrast, Facebook is designed primarily to promote interactions among
people who already have some connection to each other, however scant.
The revolutions in devices and media amount to a sea change. A few statis-
tics will drive home just how fast the information world is changing. Personal
(non-work) consumption of information by Americans grew at 2.6 percent per
year from 1980 to 2008, from 7.4 to 11.8 hours per day per person, leaving
only 1.2 hours per day per person not spent consuming information, working
(and also increasingly consuming information), or sleeping.25 To be sure, tra-
ditional media—radio and TV—still dominate US consumption, accounting
for about 60 percent of the daily hours people spend engaged. Yet computers
and other ways of accessing the Internet flood around us.26 Twitter grew by
25
These and other statistics in this paragraph are from Roger E. Bohn and James E. Short,
How Much Information? 2009: Report on American Consumers (San Diego: Global Information
Industry Center, University of California, 2009).
26
While much of this volume is explained by the use of graphics, the fact that it is interactive is consistent with the growing interest in and use of social media.
[Figure 6.1: mobile phones overtake fixed telephone lines, 2001–2014 (2014 estimated); chart not reproduced.]
1,500 percent in the three years before 2010, and it now has almost a quarter billion active users. Facebook reached 500 million users in mid-2010,
a scant six years after being created in a Harvard dorm room.
Mobile smartphones will be the norm for communications and sharing. As
Figure 6.1 suggests, mobile phones already overwhelm traditional fixed phone
lines, having grown in a decade from half a billion to almost five billion users.
The world’s population was estimated in mid-2009 to be 6.8 billion, 27 percent
of whom were under the age of 14. Thus, on average, there was one mobile phone
for everyone over the age of 14. Of course, mobile phone ownership is not evenly
distributed worldwide, but the trend toward ubiquitous ownership is obvious.
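The per-capita claim in this paragraph can be reproduced as a back-of-the-envelope calculation; a minimal sketch, using only the figures cited in the text:

```python
# Mid-2009 figures as cited in the text.
world_population = 6.8e9       # estimated world population
share_under_14 = 0.27          # fraction of the population under age 14
mobile_subscriptions = 5.0e9   # "almost five billion" mobile users

population_over_14 = world_population * (1 - share_under_14)
phones_per_adult = mobile_subscriptions / population_over_14
print(f"Population over 14: {population_over_14 / 1e9:.2f} billion")
print(f"Subscriptions per person over 14: {phones_per_adult:.2f}")
```

With these inputs the ratio comes out almost exactly at one subscription per person over 14, which is the text's "on average, one mobile phone for everyone over the age of 14."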
Even more portentous than the explosive growth in mobile phones is the
revolution in their capabilities. The iPhone and other “smart” handsets let
users gain access to the Internet and download mobile applications, includ-
ing games, social networking programs, productivity tools, and more. These
same devices also allow users to upload information to network-based ser-
vices, for the purposes of communication and sharing. Smartphones, repre-
senting a manifold increase in processing power from ordinary mobile phones
(“cell” phones or “feature” phones), accounted for over a third of the handsets
shipped in North America in 2009, and some analysts estimate that by 2015
almost all handsets will be smart.27 Mobile operators have started building
27
Here, too, precise language is elusive. "Smartphone" seems the phrase of choice but is pretty tepid given the capabilities of the devices. PDA, or personal digital assistant, already rings out of date, while "hand-held computer" sounds clunky. On the revolution, see "The Apparatgeist Calls," The Economist, December 30, 2009.
[Figure 6.2: total mobile data traffic (terabytes per month), 2008–2013 (projected); annual growth by category: video 154 percent, total 131 percent, data 112 percent, voice 112 percent, P2P 101 percent; chart not reproduced.]
networks that will allow for faster connection speeds for an even wider variety
of applications and services.
The other striking feature of social media’s future is that it will be domi-
nated by images, not words. That is plain with Facebook but is increasingly the
case for Twitter as well. As Figure 6.2 indicates, video is driving the growth of
the mobile Internet.
Reading, which had been in decline due to the growth of television, tripled
from 1980 to 2008, because it was overwhelmingly the preferred, indeed nec-
essary way to receive the content on the web, which was words. That will be
less true in the future as more of the content of the worldwide web is images,
not words. Making use of a traditional computer required literacy; the same is
not true of a smartphone. Apparently, those images can be powerful in shaping public opinion, and perhaps policy outcomes, both domestically and internationally, even if most of the posited effects are still anecdotal. Consider the
effects of the pictures from Abu Ghraib, or the YouTube video of a US helicopter strike apparently killing civilians in Baghdad in 2007.28 Compare those
with the leak through WikiLeaks of 92,000 US military documents pertaining
to Afghanistan.29 If the latter seemed to have less impact, that probably was
mostly because the main storylines—collateral damage and Pakistani com-
plicity with the Taliban—were familiar (the former in part because of previous
images). But the documents were just that, words, and thus a quintessential
newspaper story, not images that might go viral on the web. Furthermore, both
28
The video is available at http://www.youtube.com/verify_age?next_url=http%3A//www.youtube.com/watch%3Fv%3Dis9sxRfU–ik.
29
For excerpts and analysis, see the New York Times, July 26, 2010.
the WikiLeaks war logs and the US State Department cables contained simply
far too many words, an indigestible information mass demanding extensive
decoding and contextualization to become useful. Transparency here created
the same effect of massive information overload as intelligence had experi-
enced from the 1960s onward due to the focus on collection (see Chapter 3).
Perhaps because of the newness of social networking, it is not clear how
important these media will be in organizing, for instance, anti-government
groups. In Iran and other countries, they have been the latest information
technology (IT) innovation in organizing in a line that runs from Ayatollah
Khomeini’s audiocassette speeches smuggled into Iran in the 1970s through
the fax machines that played a role in Eastern Europe’s transition later. In crisis
periods, like the one after the Iranian elections, social media had the attractions of speed and relative anonymity that offered protection from government retaliation. But they also carried the challenge of validating who was real
and who was not (and who was a government agent).
For policy, social media thus far are yet another means for governments to
communicate with their constituents and for those constituents to organize to
oppose or influence their governments. The impact of social media on commerce
is visible and striking. On the political and social side, as opposed to the com-
mercial, it remains unclear whether the uses of social media are in fact effective at
increasing participation, increasing knowledge, holding government to account,
or overcoming collective action problems. In large part this uncertainty results
from the empirically tenuous tie between public opinion and policy, and concerns
about selection bias—that is, various types of political participation are generally
understood to arise from a broad range of individual-based influences, for exam-
ple, gender, age, income, education, and so forth. The extrapolation is that online
activity is undertaken predominantly by those already politically engaged offline.
So far, the use of new media by governments, what has come to be called
e-governance, is oriented largely around soliciting information from citizens.
New technology platforms allow this communication to occur in real time and
with low transaction costs—complaints are automatically sent to the proper
department without use of a phone operator, and stakeholder opinions about
policy can be received without the inefficiencies of large meetings. Such ini-
tiatives as the Peer-to-Patent program, in which an “open review network of
external, self-selected, volunteer scientists and technologists provide expertise
to patent examiners on whether a patent application represents a truly new and
non-obvious invention,” are suggestive of the promise of e-governance uses of
social media. 30
30
Michael Lennon and Gary Berg-Cross, "Toward a High Performing Open Government,"
However, for policy the process may be slower than expected. Entrenched
bureaucracies struggle to adapt to this new type and pace of interaction with
the public. Managers will need to review and in many cases to change prac-
tices and policies that served tolerably well within a hierarchical, agency-as-
provider-but-not-recipient model, and proper incentives to encourage such
collaboration will need to be put in place. 31 Optimism about the benefits of
citizen collaboration in the regulatory and policymaking processes enabled
by open government should also be tempered by acknowledging that more
participation may introduce costs. Many observers, for example, fear that the
new world of electronic mass submissions will overwhelm and delay agencies
with limited resources. While concrete examples do not exist, the concerns
extend to the prospect that e-government might introduce more, and more
intractable, conflicts of interest into the process that can slow rather than
speed deliberation.32 Moreover, the fact that e-governance and e-government
infrastructures are web-based may mean that the conditions reflected are
those only of the populations that have ready access to broadband. Indeed,
a 2010 Pew study of the United States found that it is the white, affluent, and
well-educated that are most likely to access government information, to use
government services, and to participate in collaborative enterprises online. 33
All these considerations reinforce the proposition that for policy, the impli-
cations of transparency and new technology will be more evolutionary than
revolutionary. Those technologies will make it easier for citizens to question
their government, and perhaps to oppose or even seek to overthrow it. For pol-
icy analysis, those technologies will facilitate what transparency is requiring—
what Klitgaard calls convening. Convening is a long way from “the study for the
decision.” In Klitgaard’s words,
31 Alex Howard, "Defining Gov 2.0 and Open Government," January 5, 2011. Available, as of July 22, 2014, at http://gov20.govfresh.com/social-media-fastfwd-defining-gov-2-0-and-open-government-in-2011/.
32 See S. W. Shulman, The Internet Still Might (but Probably Won't) Change Everything: Stakeholder Views on the Future of Electronic Rulemaking (Pittsburgh: University of Pittsburgh, University Center for Social and Urban Research, 2004).
33 Aaron Whitman Smith, Government Online: The Internet Gives Citizens New Paths to Government Services and Information (Washington, DC: Pew Internet & American Life Project, 2010).
132 National Intelligence and Science
often affects the outcomes of what other parties do. They are not fully
aware of each other’s objectives, capabilities, or information sets; they
do not fully understand their strategic interrelations. It may also be
the case that no one can understand all those things, in the sense that
the stakeholders (along with the environment around them) form a
complex system. 34
35 To be sure, these estimates should be treated with caution, for they tend to be extrapola-
Figure 6.3 Four Kinds of Interaction through Social Media. The figure arrays examples along two axes, direction of sharing (internal versus external) and familiarity with participants (known versus unknown): external sharing for various purposes (e.g., Twitter, Facebook) and internal sharing (e.g., an intra-office blog; Intellipedia, A-Space, eChirp). Source: This figure is drawn from Mark Drapeau and Linton Wells II, Social Software and National Security: An Initial Net Assessment (Washington, DC: Center for Technology and National Security Policy, National Defense University, 2009), p. 6.
36 See http://twitter.com/greyballoons; http://intelfusion.net/wordpress/2010/01/12/the-grey-balloons-project-faq/.
37 Robert A. Flores and Joe Markowitz, Social Software—Alternative Publication @ NGA (Harper's Ferry, VA: Pherson Associates, 2009), p. 3.
7
A Postmortem on Postmortems
The following is a list of selected “intelligence failures” by United States intel-
ligence over the last half century:
1940s US intelligence predicts that the Soviet Union is five to ten years
away from developing a nuclear weapon. The Soviets detonate a test
weapon the following year (1948–1949).
1950s Intelligence reports warn of a Soviet lead over the United States in
missiles and bombers. The first US spy satellites put in orbit, beginning
in 1960, find no such disparities.
Challenges for Intelligence 137
1960s An intelligence estimate says that Soviets are unlikely to position
nuclear weapons in Cuba. CIA Director John McCone disagrees and
orders more surveillance flights, which soon find signs of missile deployment.
Soviet leader Nikita Khrushchev is forced to remove the missiles after
President Kennedy orders a US naval blockade of the island (1962).
1970s Persistent shortfalls in estimates of Soviet military capability and
expenditure spark the "Team B" challenge to the CIA.
1980s US intelligence fails to predict the impending collapse of the Soviet
Union.
1990s United Nations (UN) inspectors discover an Iraqi nuclear program
that was much more extensive than the CIA had estimated (1991).
India and Pakistan conduct nuclear tests. This testing was not predicted
by the CIA (1998).
U.S. warplanes accidentally bomb the Chinese Embassy in Belgrade as
a result of erroneous target information provided by the CIA. Three
Chinese journalists are killed (1999).
Significant overestimate of the foreign consequences of Y2K (the Millennium
Bug) issues (1999).
2000s The CIA fails to forecast 9/11 attacks. It tracks suspected al Qaeda
members in Malaysia months before but fails to place Khalid Al-Midhar
(one of the 9/11 hijackers) on its terrorist “watch list” (2001).
The Iraqi WMD estimate took 20 days to develop and was dead wrong in
its assessment of Iraqi WMD (2002).
From the list, it is no wonder that intelligence analysts worry about what value
they add to the making of policy. Indeed, the question arises: Is intelligence ever
right? Yet the better question is: What constitutes "failure," and to what extent
can it be avoided? Organizations must learn if they are to compete. If organizations
fail to learn, they risk not keeping up with a changing world, all the more
so when adversaries are continually trying to frustrate the efforts of US intelligence.
But how should they learn, and, still more to the point, how can they become
learning organizations? Those are the questions at issue in this section.
The first half-decade of the 2000s was the season for postmortems of failed
intelligence analyses (as opposed to intelligence operations, which typically
were the focus of post-failure assessments in earlier decades). Several of those
were detailed and thoughtful.1 Some of them have been embodied in reform
1 For the United States, the two most detailed are those of the 9/11 Commission and the WMD Commission. Formally, they are, respectively, National Commission on Terrorist Attacks upon the United States, The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks Upon the United States, and L. H. Silberman and C. S. Robb, "The Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction: Report to the President of the United States" (Washington, DC, 2005). The British postmortem, the Butler Commission, is Review of Intelligence on Weapons of Mass Destruction (London, 2004).
initiatives in the United States and other countries—for instance, the 2004
US intelligence reform legislation creating the director of national intelligence
(DNI).2 Most of that is all to the good. Nevertheless, as the legal saying has it,
hard cases make bad law. 3 The social science equivalent is that single cases—or
a linked series of failures in the 9/11 instance—make idiosyncratic lessons;
it is all too tempting to conclude that if analysts did x and failed, then they
should do non-x (or anti-x) in order to succeed. All of these postmortems tend
to carry the presumption that intelligence analysis is a singular enterprise. Yet
it is not. It comprises a variety of purposes and relations to consumers, and
thus of data and methods. Intelligence analysis is plural, and so must best prac-
tices be plural.
Yet it is far from clear that learning lessons by examining celebrated cases
is the best path. The examinations tend to be rare and done in the full glare of
publicity—and of political stakes. They often tend to focus on finding guilty
villains to blame, as much as improving practice. Even if they do not focus on
who shot—or who missed—John? they still are methodologically flawed. They
focus on a handful of incidents, each with its own peculiarities—but whose les-
sons are then generalized to inevitably different circumstances. They do tend to
assume that if analysts did x and failed, then doing non-x (or anti-x) would
have produced success, and would do so in future circumstances akin to those
examined. By this point, the enquiry is on very thin epistemological ice indeed.
They also focus on failures. In addition to demoralizing analysts, that has
several consequences. Most obviously, it raises the defenses of those intelli-
gence organizations that feel their copybooks are being graded by the exercise.
It is perhaps little surprise, for example, that the first efforts by the US director
of national intelligence to create a joint lessons-learned center ran into resis-
tance from virtually all the analytic agencies. Focusing on failures also down-
plays what might be learned from successes, or even middling outcomes. That
preoccupation with major error may also produce a pendulum swing. One
among several reasons that US intelligence overestimated Iraq’s WMD pro-
grams in 2002 was that it had underestimated them in 1990—and been taken
to task for doing so.
2 Formally, United States Congress, Intelligence Reform and Terrorism Prevention Act of 2004.
Perhaps most important, the assessments are seldom very explicit about
what constitutes intelligence failure.4 Not every failure is an intelligence failure;
in principle, there could be intelligence successes that were followed or accom-
panied by policy failures. A dramatic example is the US National Intelligence
Estimate on Yugoslavia in the autumn of 1990, which predicted Yugoslavia’s
tragedy with a prescience that is awe-inspiring. 5 It concluded that Yugoslavia’s
breakup was inevitable. The breakup would be violent, and the conflict might
expand to spill into adjacent regions. Yet the estimate had no impact on policy
whatsoever. None. Senior policymakers didn’t believe it, or were distracted by
the impending collapse of the Soviet Union, or didn’t believe they could do
anything about it. To the extent that policy officers saw and digested the esti-
mate, intelligence could not be said to have failed. To qualify as an intelligence
failure, flawed intelligence analysis has to be seen and acted on by policymak-
ers, leading to a failure. There has to be a decent case that better intelligence
would have induced policy officials to take another course, one that was likely
to have led to a more successful policy. By that definition, intelligence on Iraqi
WMD in 2002 surely was flawed but may not qualify as an intelligence failure
to the extent that a better estimate, within the bounds of what was possible,
probably would not have changed the policy outcome.
In these circumstances, most of the quick lessons from the recent postmor-
tems are apt. But they tend to be relatively superficial, reached by wise people
who usually are amateurs in the esoterica of the trade (because the experts are
likely to be seen as biased, even responsible for the failures being autopsied).
They are in the nature of reminders that analysts might tape to their comput-
ers, less lessons than good guidance that is too easily forgotten. After the fall
of the Shah in Iran, both intelligence and policy in the United States reflected
the big lesson from the case—don’t assume the Shah understands his politics
any better than US intelligence does—and both applied that conclusion to the
next similar case at hand, the fall of Ferdinand Marcos in the Philippines. In
that event, the lesson produced a success.
By the same token, intelligence took on board the headline from the Indian
nuclear test postmortem—all politics may be local but no less important for
it, so take seriously what politicians actually say they will do. That lesson may
not have been applied forcefully enough to Osama bin Laden, but intelligence
analysts were not likely in any case to repeat the mirror-imaging of the Indian
4 Among many good discussions of this issue, see Stephen Marrin, "Preventing Intelligence Failures by Learning from the Past," International Journal of Intelligence and CounterIntelligence 17, no. 4 (2004): 657 ff.
5 The estimate has been declassified. See National Intelligence Council, Yugoslavia Transformed, National Intelligence Estimate 15–90 (1990).
case. Bin Laden presented a different challenge: he was too different even to
mirror image; we couldn’t imagine how we might act if we were in his shoes.
The headlines from the Iraq WMD case, too, were reminders mostly about
good tradecraft, even good social science: validate sources as much as possible,
and do contrarian analysis (what's the best case that Saddam has no WMD?).
The report on the biggest of all recent failures, 9/11, is wonderful history with
a strong lesson about the importance of sharing and integrating information
across US intelligence organizations, not sequestering or cosseting it. Yet even
that postmortem cannot escape a certain historical determinism to which case
histories are vulnerable: everyone knows how the story ended, and, knowing
that, the pointers along the way are painfully obvious. It is easy to underesti-
mate the noise in the data or even, in the 9/11 instance, the good reasons why
the CIA and FBI didn’t share information very freely or why the FBI didn’t go
knocking on flight school doors.
Moreover, however valuable these reminders are, they fall short of best
practice in learning lessons for intelligence. Warfare, where lessons-learned
activities are becoming commonplace, tends to be an episodic activity. By con-
trast, intelligence, especially in an era of non-state as opposed to state-centric
threats, is more continuous. At any one time, it provides estimates of what
exists (e.g., how many nuclear weapons does North Korea have?), what will
be (e.g., is India planning to test a nuclear weapon?), and what might be (e.g.,
how would Iran react to the overthrow of Iraq’s government?). Intelligence
exists to give policymakers reasoned assessments about parameters whose
truth-value is not otherwise obvious. Much of Cold War intelligence was about
what exists—puzzle solving, looking for additional pieces to fill out a mosaic
of understanding whose broad shape was a given. By contrast, intelligence and
policy have been engaged at least since 9/11 in a joint and continuing process
of trying to understand the terrorist target, in the absence of handy frames of
reference.
At any one point in time intelligence agencies are assessing an array of enu-
merable possibilities, each of which can be assigned a likelihood. A national
intelligence estimate is a consolidated likelihood estimate of selected param-
eters. Such estimates are not (or at least should not be) static. To be valuable,
each must be open to adjustment in the face of new information or reconsid-
eration. Sometimes, the new information fixes the estimate firmly (e.g., India
tested a nuclear weapon). The 2007 US estimate on Iran’s nuclear program,
discussed in more detail later, began as good housekeeping, an updating of the
community’s 2005 conclusion. Before the estimate was completed, though,
new information seemed to suggest pretty conclusively that Iran in 2003 had sus-
pended its effort to make weapons while continuing to enrich fissile fuel. More
commonly, each event can influence the confidence with which an estimate is
held. The analyst’s art, and one where the ability to learn lessons is valuable, is
in collecting the right facts, developing or choosing the right rule for integrating
these facts, and generating the right conclusions from the combination of facts
and rules.
This art can be considered a process, and the goal of a lessons-learned capability
is to improve the process continually. Toyota, the car company famous for its
success in implementing quality control in manufacturing, calls this process of
continuous improvement kaizen and regards it as critical to its success.6
How does such a process work? Consider, for instance, an estimate of the like-
lihood that Iraq had a serious nuclear weapons program. New evidence was then
found of Iraqi commerce in aluminum tubes. This evidence had to be interpreted,
and once interpreted it should have affected the judgment of whether Iraq had a
serious weapons program. Conversely, if another week went by in which weapons
inspectors again failed to find a serious weapons program in Iraq, that event
should have reduced the belief that such a program existed (by how much is another
question). Perhaps needless to add, success at finding the information that would
make the greatest potential difference in these estimates is a critical measure of
success for the Intelligence Community, but only because it feeds these estimates.
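The updating logic described here is essentially Bayesian: each piece of evidence, including the absence of expected evidence, shifts the probability assigned to the hypothesis. The sketch below is a minimal illustration of that mechanism; all of the probabilities are invented for the example, since the text supplies none.

```python
# Minimal Bayesian updating of a single hypothesis; all numbers are invented.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the hypothesis after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Assume an even prior that a serious weapons program exists.
belief = 0.5

# Evidence appears (e.g., commerce in aluminum tubes), judged three times
# likelier if a program exists than if it does not.
belief = bayes_update(belief, 0.6, 0.2)   # belief rises to 0.75

# Expected evidence fails to appear (a week of inspections finds nothing),
# judged likelier if no program exists.
belief = bayes_update(belief, 0.4, 0.7)   # belief falls back toward 0.63

print(round(belief, 2))
```

The point is the mechanism, not the numbers: both positive findings and null results move the estimate, and by amounts that depend on how diagnostic the evidence is judged to be.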
There are many formal, mathematical approaches to making inferences.7
However, they are not panaceas and tend to be more useful in more complex
problems involving great uncertainty and multiple variables (e.g., estimates,
forecasts, and warning) rather than those involving interpretive reporting of
events under way. Human judgment is and will remain the core of the analyst’s
6 Popularized in Masaaki Imai, Kaizen: The Key to Japan's Competitive Success (New York: McGraw-Hill, 1986). The Japanese concept of kaizen is directly derived from the principles of designed-in (versus fixed-after-production) quality, customer-centrism, and continuous improvement brought to Japan after World War II by Edwards Deming, an American who worked with the Japanese to recover their industrial capacity. His work did not get traction in the United States until kaizen was promoted here in the 1980s as a Japanese management practice known as Total Quality Management. Deming's approach was to examine and improve the system in which operations take place and not simply to reorganize structure or blame the person. A modern incarnation of this thinking is in Six Sigma programs to reduce product flaws as applied at Motorola and General Electric. For example, see http://www.deming.org/ regarding Deming's work and some current resources, and http://www.isixsigma.com/me/six_sigma/ regarding Six Sigma methods. It is also quite relevant to service and intellectual processes, like intelligence, in its reliance on three concepts: products alone are not as important as the process that creates them; the interdependent nature of systems is more important than any one isolated problem, cause, or point solution; and non-judgmental regard for people is essential, in that people usually execute as the system directs or allows, people are not a sufficient explanation for a problem (or a success), and blaming violates an associated principle against waste.
7 For instance, Dempster-Shafer theory is a generalization of Bayesian logic that allows analysts to derive degrees of belief in one question from probabilities for a related question—for instance, information provided by an observer or informant whose reliability is subjectively assessed. For a description, see http://www.glennshafer.com/assets/downloads/article48.pdf.
art. Yet there surely is analytic value in at least being explicit about the inputs
to an assessment and how they were treated. At a minimum, explicitness
permits the validity of such assessments to be scrutinized before third par-
ties. Intelligence analysts should compare their estimates over time. How, for
instance, do last month’s events alter my estimate about the likelihood of any
particular outcome in North Korea?
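One of the formal approaches mentioned above, Dempster-Shafer theory, can be made concrete with a toy two-hypothesis example. The sketch below combines reports from two informants whose reliabilities are subjectively assessed, as in note 7; the reliabilities are invented purely for illustration.

```python
# Toy Dempster-Shafer combination over the frame {yes, no}; informant
# reliabilities here are invented purely for illustration.
def witness(report_yes, reliability):
    """Mass function for one informant: the reliability mass goes to the
    reported answer, the remainder to ignorance (the whole frame, 'either')."""
    m = {"yes": 0.0, "no": 0.0, "either": 1.0 - reliability}
    m["yes" if report_yes else "no"] = reliability
    return m

def combine(m1, m2):
    """Dempster's rule: multiply masses of sets whose intersection is non-empty,
    then renormalize by discarding the conflicting mass."""
    conflict = m1["yes"] * m2["no"] + m1["no"] * m2["yes"]
    norm = 1.0 - conflict
    return {
        "yes": (m1["yes"] * (m2["yes"] + m2["either"]) + m1["either"] * m2["yes"]) / norm,
        "no": (m1["no"] * (m2["no"] + m2["either"]) + m1["either"] * m2["no"]) / norm,
        "either": m1["either"] * m2["either"] / norm,
    }

# Two independent informants both report "yes," with reliabilities 0.8 and 0.6:
combined = combine(witness(True, 0.8), witness(True, 0.6))
print(round(combined["yes"], 2))   # belief in "yes" is 0.92; 0.08 stays uncommitted
```

The mass left on "either" is what distinguishes this from a plain Bayesian treatment: it represents suspended judgment rather than a probability forced onto one answer or the other.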
Having noted such deliberations, it is easier to discover why the processes
succeeded or went awry. If they went awry, where? Was it the failure to collect
evidence and if so, what sort? Was it the misleading template that was used to
process the evidence and, if so, in what way? What unexamined assumptions
were made in generating the estimate?8 Was it the failure to integrate the evi-
dence into the estimate properly, and if so, what kind of failure?
The process might start by making explicit the following issues:
8 See, for instance, James Dewar et al., Assumption-Based Planning: A Planning Tool for Very
9 "Secrets Not Worth Keeping: The Courts and Classified Information," Washington Post, February 15, 1989, A25.
the Soviet empire’s disintegration. Robert Gates, then defense secretary and a
wizened Washingtonian, offered a nice antidote to some of the sky-is-falling
commentary: “The fact is, governments deal with the United States because it’s
in their interest, not because they like us, not because they trust us, and not
because they believe we can keep secrets. Many governments—some govern-
ments deal with us because they fear us, some because they respect us, most
because they need us. We are still essentially, as has been said before, the indis-
pensable nation. So other nations will continue to deal with us. They will con-
tinue to work with us. We will continue to share sensitive information with one
another. Is this embarrassing? Yes. Is it awkward? Yes. Consequences for US
foreign policy? I think fairly modest.”10
Still, increasing transparency—either because secret documents do leak or
because senior officials want them to—is part of intelligence’s challenges for
the future. That challenge is graphically illustrated by the 2007 US National
Intelligence Estimate on Iran’s nuclear weapons program. That episode was
deeply ironic: while those who produced the estimate were and remain proud
of the tradecraft that went into it, they were also trapped by their own rules.
And the furor over the public release of its Key Judgments left policy officials
feeling blindsided. As President George W. Bush himself put it, “The NIE had
a big impact—and not a good one.”11 The episode is a cautionary one for a
future in which citizens will expect more transparency about why decisions
were made, including their basis in intelligence, and leaders of government
will be more tempted to use intelligence to justify their decisions.
“We judge with high confidence that in fall 2003, Tehran halted its nuclear
weapons program.”12 So declared the first clause of the Key Judgments of
the November 2007 US National Intelligence Estimate on Iran’s Nuclear
Intentions and Capabilities. Done by the National Intelligence Council
(NIC), those Key Judgments, or “KJs” in intelligence speak, were declassified
and released in December, provoking a firestorm of controversy. The clause
seemed to undercut not only any argument for military action against Iran
but also the international campaign for sanctions against the country that
the Bush administration had been driving. President George W. Bush called
the opening “eye-popping,” all the more so because it came “despite the fact
that Iran was testing missiles that could be used as a delivery system and had
announced its resumption of uranium enrichment.”13
10 At a Pentagon press briefing, as quoted by Elisabeth Bumiller, "Gates on Leaks, Wiki and Otherwise," New York Times, November 30, 2010.
11 George W. Bush, Decision Points (New York: Crown Publishers, 2011), p. 419.
12 All the quotes from the Key Judgments in this case are from National Intelligence Council, Iran: Nuclear Intentions and Capabilities.
13 Bush, Decision Points, p. 418.
The second clause of the KJs added the companion judgment: “we also
assess with moderate-to-high confidence that Tehran at a minimum is keep-
ing open the option to develop nuclear weapons,” and a footnote to that first
sentence sought to clarify that “for the purposes of this Estimate, by ‘nuclear
weapons program’ we mean Iran’s nuclear weapon design and weaponization
work and covert uranium conversion-related and uranium enrichment-related
work; we do not mean Iran’s declared civil work related to uranium conversion
and enrichment.”
Yet both second clause and footnote were lost in the subsequent furor. It
was precisely that footnoted “civil” nuclear program that was the target of the
administration’s campaign lest Iran take itself to the brink of a nuclear weap-
ons capacity through purportedly “peaceful” enrichment programs. A UN
Security Council resolution in June 2006 had demanded that Iran stop its
enrichment activities, and in December another resolution imposed sanctions
on Iran. Any halt in Iran’s “nuclear weapons program” did little to ease the
policy concerns about the possible military implications of its civilian nuclear
program, especially its efforts to enrich uranium. The shouting over that
eye-popping first clause drowned out the more nuanced findings contained in
the balance of the estimate.
The controversy over the estimate was rife with ironies. For those taken
aback by that first clause, it was ironic that the estimate attributed the Iranian
decision to halt its nuclear weapons program precisely to the international
pressure that the conclusion seemed to undercut: “Our assessment that Iran
halted the program in 2003 primarily in response to international pressure
indicates Tehran’s decisions are guided by a cost-benefit approach rather than
a rush to a weapon irrespective of the political, economic, and military costs.”
While the drafters intended that conclusion as positive—diplomacy works—
that is not the spin the story acquired.
For the National Intelligence Council and the intelligence community, the
immediate irony of the controversy was that the estimate, and the key judg-
ments, were meticulous in many respects, and NIC leaders had worked hard to
improve both the process and product of NIEs after the disaster of the October
2002 estimate about Saddam Hussein’s weapons of mass destruction. The Key
Judgments reproduced a text box from the estimate that carefully explained
what the NIC meant by such words of subjective probability as “likely” or
“probably” or “almost certainly,” as well as judgments like “high confidence.”
Beyond more clarity in language, the NIC had also sought more rigor in the
process, especially by requiring formal reviews by the major collectors of the
sources included in the estimate. And, the furor notwithstanding, the primary
findings of the 2007 NIE were neither retracted nor superseded, and were in
fact reiterated by senior intelligence officials, including the director of national
intelligence, many times through early 2012. Some new information, and new
scrubs of older information, tended to confirm the judgment.
In retrospect, the root of the furor was the presumption of secrecy.
The director of national intelligence, Michael McConnell, had prom-
ised Congress the estimate by the end of November, and the National
Intelligence Board (NIB)—the heads of the various intelligence agencies—
met on November 27. The meeting began with an explicit decision not to
declassify and release either the estimate or its KJs. An October memoran-
dum from DNI McConnell had set as policy that KJs should not be declassi-
fied, and he had made that point in speaking to journalists two weeks before
the NIB meeting.14 Thus, the meeting proceeded on the assumption that
what was being reviewed was not a public document but rather a classified
one intended for senior policymakers who understood the issues well. On
the whole, the NIB regarded the draft estimate as reconfirming previous
estimates—with one very significant exception. That exception was the halt
in the weaponization program in 2003. The board felt that judgment was so
important that it should be the lead sentence, followed immediately by the
companion judgment that, at a minimum, Iran was keeping its options open
to develop nuclear weapons. Calling attention to a changed assessment
was also consistent with the new requirements spelled out in Intelligence
Community Directive 203: Analytic Standards.15
The president was briefed on the approved estimate on November 28, 2007,
and it was delivered to the executive branch and to Congress on Saturday,
December 1. Critically, notwithstanding McConnell’s October policy and the
NIB decision, the president decided over the weekend to declassify the Key
Judgments. Two lines of argument drove that decision. One could be sum-
marized, in Fingar’s words, as “because it is the right thing to do.”16 Because
the United States had for years used intelligence assessments in seeking to per-
suade other nations to act to prevent Iran from getting the bomb, it had some
14 This account of the meeting is based on Thomas Fingar, Reducing Uncertainty (Stanford, CA: Stanford University Press, 2011), p. 120. For the McConnell memorandum, see Michael McConnell, "Memorandum: Guidance on Declassification of National Intelligence Estimate Key Judgments," Director of National Intelligence, 2007. For McConnell's quote to the press, see Shane Harris, "The Other About-Face on Iran," National Journal, December 14, 2007, accessed February 21, 2013, http://shaneharris.com/magazinestories/other-about-face-on-iran/.
15 That directive, effective 21 June 2007, is Michael McConnell, "Intelligence Community Directive Number 203" (Washington, DC: Office of the Director of National Intelligence, 2007).
16 Fingar, cited above, p. 121.
responsibility to tell others that it had changed its assessment about one key
part of Iran’s nuclear program.
The other argument was less high-minded and more low-down Washington.
In the president’s words: “As much as I disliked the idea, I decided to declas-
sify the key findings so that we could shape the news stories with the facts.”17
Or as Vice President Cheney put it to Politico on December 5: “There was a
general belief—that we all shared—that it was important to put it out, that
it was not likely to stay classified for long, anyway. Everything leaks.”18 From
the perspective of Stephen Hadley, the president’s national security advisor,
the “2005 NIE and its conclusions were on the public record. Even if the new
estimate didn’t immediately leak, members of Congress were bound to com-
pare it with the 2005 version, provoking charges that the administration was
‘withholding information.’ ”19 The declassified KJs were released on Monday,
December 3, 2007.
In the ensuing public debate, the first clause of the KJs dominated every-
thing else, and people tailored it to fit their particular cloth. Iran’s president,
Mahmoud Ahmadinejad, was jubilant and immediately called the NIE a “great
victory” for his country.20 President Bush noted that momentum for fresh
sanctions faded among the Europeans, Russians, and Chinese, and he quoted
New York Times journalist David Sanger about the paradox of the estimate:
I don’t know why the NIE was written the way it was. I wonder if
the intelligence community was trying so hard to avoid repeating
17 Bush, Decision Points, p. 419.
18 As quoted in Harris, cited above.
19 Interview by author (Treverton), Stephen Hadley (2012).
20 As quoted in "As the Enrichment Machines Spin On," The Economist (2008), http://www.economist.com/node/10601584.
21 Bush, Decision Points, p. 419. The Sanger quote is also reproduced in David E. Sanger, The Inheritance: The World Obama Confronts and the Challenges to American Power (New York: Random House, 2010), p. 24.
22
See Robert S. Litwak, “Living with Ambiguity: Nuclear Deals with Iran and North Korea,”
Survival 50, no. 1 (2008), 91–118.
148 National Intelligence and Science
its mistake on Iraq that it had underestimated the threat from Iran.
I certainly hoped intelligence analysts weren’t trying to influence
policy. Whatever the explanation, the NIE had a big impact—and
not a good one. 23
For Hadley, the outcome was a kind of Greek tragedy. From his perspective,
while the NIC had been indicating that it might change its 2005 view on the
weaponization program, that conclusion was finalized only a week before the
estimate’s release.24 From that point on, it was indeed
a Greek tragedy, one that couldn’t be avoided. The document was not
written to be public. So Mike [McConnell] comes in with the esti-
mate and the change of view from 2005. He says this can’t be made
public. But the problem was that the 2005 conclusion was on the public record, so when the estimate went to the Hill, there were bound to be cries that the administration was withholding evidence, that it was
again trying to manipulate public opinion. So the key judgments have
to be made public. Mike takes the document away and comes back
with very minor changes, the proverbial “happy changed to glad,”
because the NIE was approved as written. Then it comes to me. I’m
caught. I can’t rewrite it because then Congress would compare the
public version with the classified one, and the manipulation charge
would be raised again. But if the KJs had been written from the
beginning as a public document, they would have been written very
differently.25
Surely, the presumption of secrecy was the root of the trouble in this case. In
retrospect it is hard to understand that presumption. After all, Congress had
requested the estimate in the first place. President Bush decided to release the
KJs because the NIC’s 2005 findings were on the record, and so simple hon-
esty argued for releasing the new view. Moreover, the new finding was bound
to be compared to the old, raising charges of manipulating information. Still,
the estimate had been prepared under the DNI’s decision that none of it—or
any future NIEs—would be made public, and good tradecraft dictated that the
Key Judgments parallel the estimate as closely as possible. They would have
been written very differently for a public audience not steeped in the substance
24. Stephen Hadley, "Press Briefing by National Security Advisor Stephen Hadley" (December
but with axes to grind. The hasty release also preempted a careful diplomatic
and intelligence roll-out of the new assessment.
With the benefit of hindsight, the scope note reproduced in the KJs should have
opened with a flashing light warning of what the estimate was not about—Iran’s
“civil” nuclear programs that had earned it censure from the International Atomic
Energy Agency and sanctions from the United Nations. Instead, while the note
detailed the time frame and questions for its subject—Iran’s nuclear weaponiza-
tion and related enrichment—it left itself open to a broader interpretation by
advertising itself as an “assessment of Iranian nuclear intentions and capabilities.”
Lots of NIEs have been produced since and remained secret. Yet in the crunch,
with several precedents, it will be hard for administrations to resist releasing parts
of NIEs, all the harder because NIEs, as assessments agreed to by all the intelligence agencies, acquire the quality of documents of record. And sometimes, as with the
October 2002 NIE on weapons of mass destruction in Iraq, it will be administra-
tions that take the lead in getting “secret” intelligence assessments out in public.
Increasing transparency is changing the relationship between intelligence
and policy, perhaps most visibly in the United States. Here intelligence rightly
cherishes its independence from administrations in power, and there is a need
for some consistent channel—perhaps between the DNI or NIC chair and the
national security advisor—to give administrations warning of what subjects are
being assessed and what assessments are emerging. In this instance, process was
the enemy of warning; whatever advance notice policy officials might have had
that the NIC view would change wasn’t really actionable until the estimate was
formally approved.
26. The entirely unclassified version of this RAND study, cited in Chapter 5, is Gregory F. Treverton, New Tools for Collaboration: The Experience of the U.S. Intelligence Community (IBM Business of Government, forthcoming).
own things, creating their own tools rather than cooperating in creating col-
laborative tools. There has not been a strategic view from the perspective of the
community enterprise, still less attention by agency seniors, to what kinds of
incentives to provide for what kinds of collaboration.
Broadly, the recent history of collaborative tools in the US intelligence
community might be divided into two phases: the first, beginning in the
2005 timeframe, might, with some exaggeration for emphasis, be character-
ized as dominated by tools, with the second, more recent phase dominated by
mission. In the first phase, with all the “cool stuff” coming out of the private
sector, intelligence was inspired to build its own counterparts. A-Space is
perhaps the clearest example. The implicit assumption was that if the tool
was interesting enough, people would use it. A-Space’s early managers made
that assumption explicit by trying to design not necessarily a destination
but at least a way station that officers would want to visit en route to some-
where else. Many of the early adopters hoped for a revolution in the tradi-
tional analytic process and even in the way intelligence was disseminated to
consumers.
When the revolution did not dawn, some frustration beset the enthusiasts.
The goals of the second phase have been more modest. The National Security
Agency’s NSA’s Tapioca (now available on Intelink, an inter-agency system)
is perhaps the best example. Its creator sought a virtual counterpart to the
physical courtyard that Pixar had constructed at its headquarters—a natural,
indeed almost inevitable meeting place as employees went about their daily
business. His goal, thus, was “unplanned collaboration,” and his animating
question from the beginning was “what do I need to do my job here at NSA bet-
ter?” To that end, where NSA already had tools for particular functions—as it
did with Searchlight in looking for expertise—he brought in and embellished
the tool. Others spoke of weaving the tools into the fabric of the workplace, not
thinking of them as interesting add-ons.
Especially in these circumstances, the label “social media” is not helpful
because, as Chapter 6 suggested, the tools are very different. Popular usage
tends to lump Twitter and Facebook together when in fact they are very dif-
ferent: the first is open in principle to anyone (who registers) but the second
is primarily a means of keeping up with people already known to the user. In
that sense, they differ in how “social” they are. Figure 7.1 displays the range of
collaborative social media tools in the US intelligence community, both those
within agencies and those across them. The within-agency tools are carried on
each agency’s classified web, and in general they are not accessible to officials
from other agencies, even if those officials have the requisite security clear-
ances. The inter-agency tools are carried on what is called “Intelink.” Intelink
operates at all three levels of classification in the US system—at the SCI, or
Challenges for Intelligence 151
[Figure 7.1: Collaborative social media tools in the US intelligence community, including e-mail, connecting tools, IM/chat (Jabber short messages), and image sharing (Giggleloop).]
…computer networks operating at the Sensitive Compartmented Information (SCI) level. SIPRNet is the Secret Internet Protocol Router Network, a secret-level network widely used by the US military.
has been practiced. Intelligence has been closed and passive; social media are
open and active. Intelligence agencies, like most of government, are hierarchi-
cal; social media are subversive of hierarchy. Thus, for external purposes, the
more open the media, the better for intelligence, for openness creates opportu-
nities both to collect information and target collection operations. However,
when intelligence creates versions of those tools for internal purposes, they
immediately begin to close—first by being brought entirely behind the secu-
rity firewall, then often with further restrictions on access. For instance, when
a plan was made to turn the US A-Space into i-Space, thus opening it to offi-
cials beyond analysts, it foundered, not on technical difficulties but on exemp-
tions from some security rules that the grantors were unwilling to extend to a
wider set of officials, regardless of their clearance levels.
In one sense, the challenge that intelligence faces is not so different from
that confronted by private industry in trying to increase internal collabora-
tion through social media. Industry too tends for obvious reasons to close
down the media when they adapt them. Yet in some instances, companies
were able to draw on more open processes in fashioning internal arrange-
ments. For instance, MITRE created “Handshake” primarily as a way for the
US Department of Homeland Security to reach out to states and local authori-
ties.28 It then, however, found the process useful internally as well, and the sys-
tem continues to be open to outside “friends.” It is much harder for intelligence
agencies to leverage external connections to build internal ones. Also, rules are
more inhibiting for intelligence than for private industry. The plan to convert
A-Space foundered on what is called “ORCON”—or originator controlled.29
That is most notably used by the CIA’s clandestine service, which was reluctant
to widen the exemptions it had granted for A-Space to a much larger i-Space
audience. So too, Intelink has a distribution caveat—NOFORN, meaning that
distribution to non-US citizens is prohibited. But NSA sets great store by sharing with its "five eyes" partners—Britain, Canada, Australia, and New Zealand—and so NSA officers are unlikely to seek access to Intelink or to use it if they have it.
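The way caveats such as ORCON and NOFORN layer on top of clearance levels can be illustrated with a small sketch. Everything here (the class names, the level table, the exemption rule) is an invented simplification for illustration, not the intelligence community's actual access-control logic:

```python
from dataclasses import dataclass, field

# Illustrative classification ladder; real markings are far more elaborate.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TS/SCI": 2}

@dataclass(frozen=True)
class User:
    name: str
    clearance: str                              # e.g. "TS/SCI"
    us_citizen: bool
    orcon_exemptions: frozenset = frozenset()   # originators that granted this user access

@dataclass
class Document:
    title: str
    classification: str
    originator: str                             # agency that produced it
    caveats: set = field(default_factory=set)   # e.g. {"ORCON", "NOFORN"}

def can_access(user: User, doc: Document) -> bool:
    """Clearance must dominate the document's level, and every caveat must be satisfied."""
    if LEVELS[user.clearance] < LEVELS[doc.classification]:
        return False
    if "NOFORN" in doc.caveats and not user.us_citizen:
        return False  # no distribution to non-US citizens
    if "ORCON" in doc.caveats and doc.originator not in user.orcon_exemptions:
        return False  # originator controls dissemination; clearance alone is not enough
    return True
```

The last check is the one that sank the A-Space-to-i-Space conversion: even a fully cleared user is blocked by an ORCON caveat unless the originating agency has extended an exemption, and the originators were unwilling to extend theirs to the wider audience.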
Rules intertwine with organizational culture and process. The RAND
studies found that the use of internal collaborative tools, like blogs and
Intellipedia, was mostly confined to a small group of enthusiasts. Not only
do they not feel encouraged by their superiors to use those tools but they also
feel that they pay a price for doing so. One CIA analyst, a committed blogger,
On MITRE’s use of social media, dating back to 1994, see Bill Donaldson et al., “MITRE
28
Corporation: Using Social Technologies to Get Connected,” Ivey Business Journal (June 13,
2011).
29
Formally, Dissemination and Extraction of Information Controlled by Originator.
said: “I’m practically un-promoteable.” So far, using social media tools to col-
laborate happens around the edges of existing processes for producing fin-
ished intelligence, which remain stovepiped and branded both by agency and
author. Indeed, there remains a sense that those who have time to blog can’t be
first-rate analysts, for if they were, they’d be engaged in the traditional produc-
tion process. Until that sense changes, the use of collaborative tools among
analysts will remain marginal, mostly bubbling up from the bottom.
Collaboration through social media seems likely to remain marginal so
long as the existing and traditional processes for producing “finished” intel-
ligence remain in place. Each intelligence agency has a culture and an estab-
lished workflow for producing finished intelligence. Broadly, the culture and
workflows reflect the organization, beliefs, incentives, analytic tradecraft as it
is taught and practiced, available tools, and appetite for change of any given
agency. One blogger in the RAND study who was skeptical about the tradi-
tional process for producing finished intelligence described it as “collect traf-
fic, write paper, publish, repeat.” Another emphasized the “push” nature of
dissemination by describing it as “fire and forget.” Yet as long as the nature of
finished intelligence doesn’t change, a fair question is how much collaboration
makes sense and why.
In imagining what a different future might look like for intelligence anal-
ysis and its connection to policy, a prototype produced by the US National
Geospatial Intelligence Agency (NGA) called Living Intelligence is provoca-
tive. It aims to merge the virtues of crowd-sourcing with agency vetting and to
reduce duplication in the process. 30 It would use Google Living Story software,
which was developed for a 2009–2011 experiment involving Google plus the
New York Times and Washington Post. Every topic would have its own uniform
resource locator (URL). At the top of the page for each “story” would be a sum-
mary, and below that a timeline, which the user could move back and forth. On
the left side of the page would be the filters, letting users drill down to the level
of detail they sought. On the right would be a time sequence of important events. In the center would be the update stream that keeps track of the entire story. Once a user has read
a piece, that piece grays out, so the user need not read it again. The scheme keeps
repetition to a minimum. For intelligence, it would help to distinguish between
useful tailoring for different audiences and the “stock” story merely repeated.
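The page mechanics just described, a summary with a movable timeline, a central update stream, and read items graying out, can be sketched as a simple data model. All names here are invented for illustration; the sketch is not drawn from the actual Living Story software:

```python
from dataclasses import dataclass, field

@dataclass
class Update:
    uid: int
    timestamp: str   # ISO dates ("2012-12-03") sort correctly as strings
    agency: str
    text: str

@dataclass
class TopicPage:
    url: str
    summary: str
    stream: list = field(default_factory=list)  # full update stream for the story
    read: dict = field(default_factory=dict)    # user -> set of uids already read

    def add_update(self, update: Update) -> None:
        self.stream.append(update)

    def mark_read(self, user: str, uid: int) -> None:
        self.read.setdefault(user, set()).add(uid)

    def render_for(self, user: str) -> list:
        """Pair each update with a 'grayed' flag so read items need not be reread."""
        seen = self.read.get(user, set())
        return [(u.text, u.uid in seen) for u in self.stream]

    def as_of(self, timestamp: str) -> list:
        """Move the timeline back: only updates at or before the given date."""
        return [u for u in self.stream if u.timestamp <= timestamp]
```

Graying rather than deleting is the design point: the full story of record stays intact, while each reader sees only what is new to them, which is how repetition is kept to a minimum without tailoring away the record.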
Finally, the content would be fully vetted by the contributing agencies,
thus diminishing the worry that content on collaborative tools is second-rate
or less reliable. Using WordPress and MediaWiki (the same software used by
Wikipedia and Intellipedia), the page would use grayed versus lit agency icons,
30. For a video explaining the idea, see http://www.youtube.com/watch?v=9ft3BBBg99s&feature=plcp.
plus color coding, to make clear which contributions to the topic had been vetted and cleared at which level of the contributing agencies. Both software programs permit geospatial location, so the topic page would add a spatial
dimension as well. The hope behind Living Intelligence was that this form of
collaboration would encourage agencies to play to their strengths rather than
try to do the entire story themselves. In a more distant future, it is possible to
imagine policy officials contributing to the story as well, with their additions
clearly marked as such; for critical parts of the intelligence mystery—what
might drive foreign leaders, for instance—those policy officials often know
more than intelligence analysts.
The vision of Living Intelligence opens a much wider set of issues about
what intelligence analysts produce and how they interact with policy officials.
For all the change in IT, the intelligence community still tends to think of its
outputs as a commodity—words on paper or bytes on a computer screen. The
language of commodity permeates: those policy officials on the receiving end
are still referred to as consumers or, worse, customers. And while the nature
of the interaction between intelligence and policy has changed from the days
when, figuratively at least in American practice, intelligence analysts threw
publications over the transom to policy officers, that interaction is still thought
of as pretty standoff-ish lest intelligence become “politicized,” the subject of
Chapter 8. 31 In fact, the nature of the interaction has changed. For instance, a
2010 study by the CIA’s Sherman Kent School of two dozen intelligence suc-
cesses found that briefings or conversations were the only “delivery” mode
present in all cases. 32
In fact, while the intelligence community still tends to think it is in the pub-
lication business, it really is in the client service business. Paraphrasing the
title of an important recent article, intelligence officers synthesize for clients,
they don’t analyze for customers, and intelligence needs to conceive of its busi-
ness in that way. 33 The “product” is advice, not a commodity. The process needs
to be an ongoing relationship, not a transaction. “Client” is not ideal language,
but it does connote services provided by more or less equals. It has the flavor
of money about it, and client relationships can be episodic, when needed, not
31. Indeed, many young people no doubt have never seen a transom or have any idea what one is!
32. The title of the report, "Lessons Learned from Intelligence Successes, 1950–2008 (U)" (Kent School Occasional Paper, 2010), and its major findings are not classified, but the report, alas, is, probably because of some of the details of the cases. Another major finding of the paper provides somewhat inadvertent support for the idea of the intelligence-policy relationship as an ongoing, interactive one: in something like three-quarters of the cases, the first CIA conclusion was diffident or wrong.
33. Josh Kerbel and Anthony Olcott, "Synthesizing with Clients, Not Analyzing for Customers," Studies in Intelligence 54, no. 4 (2010): 11–27.
I think the most satisfying part was there was a very clear sense through
the relationship with the briefer . . . that was a medium through which
we could define our interests and areas of concern, and that requests for
information, clarification, follow-up could be pursued. And I thought
that was very effective . . . we were extremely well served. . . 34
34. Quoted in Gregory F. Treverton, "The 'First Callers': The President's Daily Brief (PDB) across Three Administrations" (Center for the Study of Intelligence, forthcoming).
35. The team's work was published as "Probing the Implications of Changing the Outputs of Intelligence: A Report of the 2011 Analyst–IC Associate Teams Program," in Studies in Intelligence 56, no. 1 (Extracts, 2012).
By the same token, in principle, social media, especially wikis but perhaps also
Facebook and others, would seem the opening wedge for rethinking outputs.
Wikis seem tailor-made for intelligence. Not a static product, they are living documents, changed as new evidence surfaces and new ideas arise. They let experts
come together while permitting interested non-experts to challenge views. And
throughout, they provide rich metadata about where evidence came from and who
added judgment. Yet a recent RAND analysis for the CIA Center for the Study of
Intelligence found no instance of wikis being used to produce mainline products.
The closest was the CIA’s Open Source Works (OSW), and there the wiki is still
more a way of warehousing what the organization knows than of producing any-
thing for an external audience. And second thoughts about implications might
argue against too-quick an embrace of wikis as production forms—not that such
an embrace is likely in any case. There is a legal requirement for documents of
record as part of the decision process, but that requirement could presumably be
met through technology if the state of the wiki could be reconstructed for any
point in time. However, other questions arise, ones whose answers are less obvious than they seem. Who would get to contribute to the wiki? Just analysts working the
account? All analysts? Non-analysts as well? If the last, what criteria, if any, would
be applied for access? What about cleared outsiders? Or policymakers?
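The document-of-record question raised above, reconstructing the state of a wiki for any point in time, is technically simple if every edit is stored as a timestamped revision. A minimal sketch, with invented names, assuming ISO-8601 string timestamps:

```python
from dataclasses import dataclass, field

@dataclass
class Revision:
    timestamp: str   # ISO 8601, e.g. "2007-11-26T14:00"; sorts correctly as a string
    author: str
    text: str        # full page text after this edit

@dataclass
class WikiPage:
    title: str
    revisions: list = field(default_factory=list)

    def edit(self, timestamp: str, author: str, text: str) -> None:
        """Record an edit; the revision log itself is the document of record."""
        self.revisions.append(Revision(timestamp, author, text))
        self.revisions.sort(key=lambda r: r.timestamp)

    def state_at(self, timestamp: str):
        """The page as it stood at the given moment, or None if it did not yet exist."""
        earlier = [r for r in self.revisions if r.timestamp <= timestamp]
        return earlier[-1].text if earlier else None

    def history(self) -> list:
        """The rich metadata wikis provide: who changed the page, and when."""
        return [(r.timestamp, r.author) for r in self.revisions]
```

Storing the full text of every revision is the simplest scheme; a production system might store diffs instead, but the documents-of-record requirement is met either way so long as `state_at` is exact for any moment.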
Yet, short of wikis for producing intelligence, future clients of intelligence
will want to receive intelligence on their iPads, if they do not already. They will
want to have, through iPads or similar technology, the conversation that senior
officials have with their PDB briefers, asking questions, getting answers on the
spot, or getting additional analysis soon thereafter. Using these new technolo-
gies, however, cuts across how most intelligence services do their business. That
is most visible in quality control: one of the reasons that intelligence has clung so
tenaciously to words on paper or bytes on a screen is that they can be subjected
to a careful process of quality control (leave aside for the moment whether those
processes produce better documents or just more vanilla ones). Such control
is not possible for analysts answering questions by iPad more or less on the fly.
Those PDB briefers prepare to answer the most probable questions, and come
to the briefings with notes in order to do so. A less formal process would require
empowering analysts. The quality assurance would rest on the people, not their
products. And those people would also be the outputs of intelligence.
A pioneering article by Calvin Andrus almost a decade ago described the
role of the analyst by analogy to the change on the battlefield. 36 Intelligence
36. Calvin Andrus, "Toward a Complex Adaptive Intelligence Community: The Wiki and the
Politicization
Disseminating and Distorting Knowledge
1. Oxford Dictionaries uses as examples the phrases "wage bargaining in the public sector became more politicized" and "we successfully politicized a generation of women."
Politicization 159
2. Nomination of Robert M. Gates, Hearings before the Select Committee on Intelligence of the United States Senate (Washington, DC: US Government Printing Office, 1992), pp. 510–511.
3. Mark M. Lowenthal, "A Disputation on Intelligence Reform and Analysis: My 18 Theses," International Journal of Intelligence and CounterIntelligence 26, no. 1 (2013): 31–37. Lowenthal states as his first thesis that the intelligence community exists primarily to provide analysis to policymakers and that it has no meaning if it does not act in this supportive role. It has, Lowenthal underlines, no independent self-sustaining function.
4. The best-described case is perhaps the West German Bundesnachrichtendienst under the long reign of General Gerhard Gehlen; for a brief summary, see Laqueur, A World of Secrets: The Uses and Limits of Intelligence (London: Weidenfeld and Nicolson, 1985), pp. 212–219. On the history of the Bundesnachrichtendienst (BND), see Hermann Zolling and Heinz Höhne, Pullach Intern: General Gehlen und die Geschichte des Bundesnachrichtendienstes (Hamburg: Hoffmann und Campe, 1971) and Erich Schmidt-Eenboom, Der Schattenkrieger: Klaus Kinkel und der BND (Düsseldorf: ECON, 1995).
Between these two extremes of irrelevance and merging with power lies the
complex web of constitutional, professional, and ethical balances that consti-
tutes the core of the politicization issue, not only in intelligence. This compli-
cated terrain is explored in more detail in a chapter by Treverton. 5
5. See Gregory F. Treverton, "Intelligence Analysis: Between 'Politicization' and Irrelevance," in Analyzing Intelligence: Origins, Obstacles, and Innovations, ed. Roger Z. George and James B. Bruce (Washington, DC: Georgetown University Press, 2008).
6. For a systematic comparison between the US and British intelligence systems, see Chapter 1 of Phillip H. J. Davies, Intelligence and Government in Britain and the United States, Vol. 1: The Evolution of the U.S. Intelligence Community (Santa Barbara, CA: Praeger, 2012).
7. See John Prados, The Soviet Estimate: U.S. Intelligence Analysis & Russian Military Strength (New York: Dial Press, 1982).
8. For the Bomber and Missile Gap, see ibid. and Intentions and Capabilities: Estimates on Soviet Strategic Forces, 1950–1983, ed. Donald P. Steury (Washington, DC: Center for the Study of Intelligence, Central Intelligence Agency, 1996).
9. One example is Swedish defense planning in the late 1960s that simply scaled down the Soviet threat to manageable dimensions, based not on intelligence assessments but on a crude common-sense argument: assuming that the Soviet Union and its allies could not resort to nuclear weapons in the initial stages of a war, their conventional forces had to be concentrated toward the main adversary. Neutral Sweden then only had to bother about the forces left on the margin, hence the term "the Marginal Effect Doctrine"—not an unusual position for small countries trapped between major powers. This was a kind of political pseudo-intelligence assessment, based on the assumptions that the Soviets did not plan to launch nuclear strikes (which was wrong) and that they would use their forces according to Swedish logic (which was even more wrong). Any diverging views that existed within the intelligence communities were effectively deterred from coming out in the open.
its ideology. These estimates were not only politically relevant but also inescap-
ably political and ideological by the very nature of the subject matter. Many
of these conflicts within the US intelligence community were focused on the
national intelligence estimate process and the changing organizational setting
and procedures. Based on the declassified relevant NIEs, Lawrence Freedman
has studied the relationship between the estimating process, strategic policy,
and the politicization of the intelligence community in the 1960s and 1970s.10
Freedman notes that with increasing political polarization during the 1970s
over the interpretation of the Soviet threat, intelligence as an independent voice
played a diminishing role. The reason, according to Freedman, was the efforts
by senior policymakers like Secretary of State Henry Kissinger to get more
useful intelligence products, with more focus on raw data and more transpar-
ency about areas of disagreement. The result was a paradox where intelligence,
instead of becoming more relevant, counted for less in a process where the policymaking clients "were able to assess the material according to their own prejudices and predilections."11 This was an example of "cherry-picking" in practice. The culmination of this decline in authority came in the famous Team A/Team B exercise, where the CIA was pitted against a coalition of neo-conservative analysts on far from equal terms: Team A followed the normal NIE standard, was heavily footnoted, and contained contrary opinions, something the Team B report, which allowed no dissent, shunned.12
10. Lawrence Freedman, "The CIA and the Soviet Threat: The Politicization of Estimates, 1966–1977," Intelligence and National Security 12, no. 1 (1997): 122–142.
11. Ibid., p. 135.
12. Ibid., p. 136.
13. Wesley Wark, The Ultimate Enemy: British Intelligence and Nazi Germany, 1933–1939
for some ground for optimism when all lights flashed red. In drafting the 1939
strategic assessment, the air staff member of the Joint Planning Sub-Committee
(JPC) wrote a note to his colleagues arguing that the picture in the draft was too
gloomy. And he stressed that there was considerable evidence that “Germany’s
belt is already as tight as she can bear” and that Germany, through achieving
the current advantage in initial military strength, had used up all its hidden
resources.14 As we know today, the remark of the group captain was both right
and wrong—right in the sense that Germany indeed did not have sufficient
resources for a prolonged war, and especially not before conquering most of the
European continent, but fatally wrong because Germany had no intention of
waging such a prolonged war, instead intending to bypass the Maginot Line and
the resource threshold with the Blitzkrieg concept.
No equivalent to Churchill’s broadsides in the House of Commons
appeared in the close relations between intelligence and foreign policy. In
fact, as Wark notes, at no stage during the 1930s were there any fundamen-
tal contradictions between intelligence reporting and the foreign policy of the
government.15 While this could be regarded as final proof of the lack of politicization of intelligence, the obvious failure of British foreign policy in the face of Nazi expansionism nevertheless indicates something different. Either there was simply a massive intelligence failure, or several successive ones, or intelligence was influenced, if not outright dictated, by political necessity. If Churchill's reading of the gathering storm was justified by the fact that he
ultimately proved to be right, the docile intelligence assessments in themselves
raise questions since appeasement ultimately failed. Wark concludes that the
main negative impact of intelligence was the tendency to supply assessments
that seemed to confirm the present line of policy and thus reassure rather than
challenge the fundaments of the dogmatic appeasement policy.16 Intelligence
did not cry wolf in time, simply because policy did not require any such cries;
on the contrary, appeasement demanded that there be no wolves in sight.
14. Ibid., p. 226.
15. Ibid., p. 235.
16. Ibid., p. 236.
be true to the ideals of the analyst’s profession: “Seeking truth is what we are all
about as an institution, as professionals, and as individuals; the possibility—
even the perception—that that quest may be tainted deeply troubles us, as it
long has and as it should.” Gates wanted every agency employee from the DCI
down to “demonstrate adherence to the principle of integrity on which objec-
tive analysis rests, and civility, which fosters a trusting, creative environment.”17
The speech was made not only against a background of the conflicts related ear-
lier but also in the context of what might be called a widespread self-deterrence
within intelligence.
This self-deterrence is also visible as an underlying pattern in the tendency
of British intelligence to play down the German estimate in the 1930s, but not
in the sense that senior managers within the intelligence system kept doubts to themselves or constrained the assessment process. Rather, the self-deterrence
was internalized, to the effect that conclusions that would challenge policy were simply avoided. An effort to identify any manipulation in the form of explicit orders for intelligence to comply with policy would probably come to nothing, or as Wark summarizes: "If there was any candidate for Churchill's hidden 'hand which intervenes and filters down or withholds intelligence from ministers' it was to be found in the process of analysis within the intelligence community itself, especially in the early years of the 1930s."18
tigations two generations later of US and British estimates in the Iraqi WMD
affair find evidence of such overt interference by policy in intelligence.
The self-deterrence concerning policy embedded in the analytic process
is highlighted by Robert Jervis in his discussion of the failure of US intelli-
gence to foresee the fall of the Shah in the 1979 revolution. In Why Intelligence
Fails, Jervis reflects on his own in-house CIA investigation shortly after the
Iranian revolution, and compares this with the equally important but very dif-
ferent Iraqi-WMD intelligence disaster.19 In the Iranian case, US intelligence
was simply caught ill-prepared, with few analysts assigned, limited linguistic
capacity, and over-reliance on intelligence from the regime itself. The agency
had few sources within the regime or the relevant clerical opposition. But
even if the intelligence process on regime stability ran in low gear, the main obstacle to warning was two filters, one cognitive and the other consisting of self-deterrence for fear of politicization in a period when intelligence authority was at
its lowest. The cognitive filter was the unchallenged assumption that the Shah,
17. Robert M. Gates, "Guarding against Politicization," Studies in Intelligence 36, no. 1 (1992): 5–13.
18. Wark, The Ultimate Enemy, p. 237.
19. Jervis, Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War (Ithaca, NY: Cornell University Press, 2010).
20. For US intelligence assessments, incremental changes in the perceptions and the dissenting views, see Center for the Study of Intelligence, At Cold War’s End: US Intelligence on the Soviet Union and Eastern Europe, 1989–1991 (Washington, DC: Central Intelligence Agency, 1999).
21. Jervis, Why Intelligence Fails, p. 20.
168 National Intelligence and Science
22. “Militärpolitik och stridskrafter – läge och tendenser 1975, Överbefälhavaren Specialorientering” (Military Policy and Force Postures—Situation and Tendencies 1975, Supreme Commander Special Report 1975-01-14) (Stockholm: Försvarsstaben, 1975).
his amazement found the unfortunate passage repeated and promptly ordered
its removal. As all typists had left for the day, the only option remaining was
to simply delete the text, leaving a conspicuous empty space, which journalists
immediately discovered just fit the Finnish section in the previous year’s report.
This time the speculation was not over politicization from below, but the
contrary: the visible evidence of political pressure leading to the censoring of
an intelligence product, the incontestable evidence being the demonstrative
way in which the armed forces had carried out the order. After this final detour
into the conspiracy theories of perceived politicization, the whole enterprise
with an unclassified version of the Annual Report was abandoned for good.23
23. The narrative of the episode is based on oral history. The author (Agrell) worked as a junior analyst in Swedish Defence Intelligence at the time. The white pages in the 1976 unclassified report, however, constitute an irrefutable historical remnant of the debacle and speak for themselves.
24. For a scholarly study making this case, see Liesbeth van der Heide, “Cherry-Picked Intelligence: The Weapons of Mass Destruction Dispositive as a Legitimation for National Security in the Post 9/11 Age,” Historical Social Research 38, no. 1 (2013): 286–307. Also John N. L. Morrison, “British Intelligence Failures in Iraq,” Intelligence and National Security 26, no. 4 (2011): 509–520.
25. United States Congress, Select Committee on Intelligence, Report on the U.S. Intelligence Community’s Prewar Intelligence Assessments on Iraq (Washington: United States Senate, 2004); B. Hutton and Great Britain, Parliament, House of Commons, Report of the Inquiry into the Circumstances Surrounding the Death of Dr. David Kelly C.M.G. (London: Stationery Office, 2004); Butler Commission, Review of Intelligence on Weapons of Mass Destruction (London: Stationery Office, 2004).
31. Betts, Enemies of Intelligence, pp. 115–116.
32. Jervis, Why Intelligence Fails, p. 147.
33. The most prominent case was the Iraqi HUMINT source with the ill-fated covername “Curveball.” See Bob Drogin, Curveball: Spies, Lies, and the Con Man Who Caused a War (New York: Random House, 2007).
34. Jervis, Why Intelligence Fails, pp. 150–153.
35. Ibid., pp. 149–150. Though, as he remarks: “There is no such thing as ‘letting the facts speak for themselves’ or drawing inferences without using beliefs about the world, and it is inevitable that the perception and interpretation of new information will be influenced by established ideas.”
37. Davies, Intelligence and Government in Britain and the United States, Vol. 1: The Evolution of
had become one in which intelligence services in many instances had to make
their work and input relevant.39 From this perspective, the surge for the Iraqi
estimate in spring 2002 was simply an all-time high in customer pull and
producer euphoria. This sudden, fleeting moment of overlapping interests is perhaps
best caught in the memo from Tony Blair’s press secretary Alastair Campbell
to the chairman of the Joint Intelligence Committee (JIC), Mr. Scarlett, in
which Campbell expressed his gratitude for the services rendered by “your
team” in producing the text for the public dossier to be known as the September
Report: “I was pleased to hear from you and your SIS colleagues that, contrary
to media reporting today, the intelligence community are taking such a helpful
approach to this in going through all the material they have.” In the preceding
line Campbell sums up the changing role of intelligence: “The media/political
judgement will inevitably focus on ‘what’s new?’ ”40
39. For a discussion of this process in the United States, see Gregory F. Treverton, Reshaping National Intelligence for an Age of Information (New York: Cambridge University Press, 2003), pp. 197–202.
40. Hutton and House of Commons, Report of the Inquiry into the Circumstances Surrounding the Death of Dr. David Kelly C.M.G., p. 109.
41. Karl Raimund Popper, The Open Society and Its Enemies, Vol. 1: The Spell of Plato (1945), and Vol. 2: The High Tide of Prophecy: Hegel, Marx and the Aftermath (London: Routledge and Kegan Paul, 1947).
42. In 1948, the Soviet geneticist Rapoport tried to defend the chromosome theory but was refuted with quotations from a speech by Molotov. Rapoport asked why his opponent thought that Molotov knew more about genetics than he did, a question that led to his expulsion from the Communist Party and dismissal from his post. Z. A. Medvedev, The Rise and Fall of T. D. Lysenko (New York: Columbia University Press, 1969), p. 122.
farm.43 The far more common case of politicization from below occurs when
scientists frame their research in policy-relevant terms, thereby attracting
media attention, funding opportunities, and possibilities to influence policy,
precisely the kind of distortion that Gates warned against in his speech to
the CIA analysts.
The gravitational field surrounding large-scale research funding in particular
constitutes powerful ground for bias, if not in the results themselves then in the
selection of research themes and project design. In intelligence, this bias is to
some extent internalized in the concept (or axiom) of the intelligence process,
where the intelligence requirements are not decided by the agencies them-
selves, and certainly not by the individual managers or analysts, but are the
prerogative of the superiors, the policymakers, or the operators. Intelligence,
as Mark Lowenthal has noted, has no self-sustaining function; it exists only
to serve decision makers.44 It could be argued, however, that the transform-
ing role of intelligence and the increasingly complex problems and targets it
must address, as discussed throughout this book, actually constitute a basis for
the development of an intelligence equivalent of basic science in some specific
fields.45
43. Ibid., pp. 233–236.
44. Mark M. Lowenthal, “A Disputation on Intelligence Reform and Analysis: My 18 Theses,” International Journal of Intelligence and CounterIntelligence 26, no. 1 (2013): 31–37.
45. A field where the need for basic research has long been acknowledged is cryptology, where the lack of long-term efforts could result in losses of access that are hard or practically impossible to regain.
46. Solly Zuckerman, Beyond the Ivory Tower: The Frontiers of Public and Private Science (New York: Taplinger, 1971), p. 106.
47. Shlomo Gazit, “Intelligence Estimates and the Decision-Maker,” in Leaders and Intelligence,
50. See Chapter 10 of Davies, Intelligence and Government in Britain and the United States, Vol. 1.
51. Jervis, Why Intelligence Fails, p. 125.
52. Interview of Michael Herman, BBC Panorama, July 11, 2004, quoted in John N. L. Morrison, “British Intelligence Failures in Iraq,” Intelligence and National Security 26, no. 4 (2011): 509–520.
53. Joint Intelligence Committee, Iraq’s Weapons of Mass Destruction: The Assessment of the
58. Björn Fjaestad, “Why Journalists Report Science as They Do,” in Journalism, Science and Society: Science Communication between News and Public Relations, ed. Martin W. Bauer and Massimiano Bucchi (New York: Routledge, 2007). For scientific promotion in media, see Wilhelm Agrell, “Selling Big Science: Perceptions of Prospects and Risks in the Public Case for the ESS in Lund,” in In Pursuit of a Promise: Perspectives on the Political Process to Establish the European Spallation Source (ESS) in Lund, Sweden, ed. Olof Hallonsten (Lund: Arkiv, 2012).
59. Göpfert, “The Strength of PR and Weakness of Science Journalism.”
60. Dorothy Nelkin, Selling Science: How the Press Covers Science and Technology (New York: Freeman, 1995).
61. Ibid., pp. 7–8.
62. Fjaestad, “Why Journalists Report Science as They Do,” p. 124.
effect of scientific “mystique” and increased pressure to employ media for stra-
tegic promotion. In the case of Interferon, the protein presented as a break-
through in cancer treatment in the early 1980s, the scientists, far from being a
neutral source of information, actively promoted Interferon and thus shaped
the flawed media coverage.63 When Interferon failed to deliver the promised
results, the whole campaign backfired into a media-rich scientific scandal.
Nobel Prize winner Kenneth Wilson summarized the key elements in
selling Big Science in a statement after successfully convincing the National
Science Foundation to support the supercomputer program in the mid-1980s.
His arguments were not scientific but political and ideological: without the
program the United States would lose its lead in supercomputer technology.
Wilson was amazed at how the media picked up on what he described as “a
little insignificant groups of words” and he concluded: “The substance of it all
[supercomputer research] is too complicated to get across—it’s the image that
is important. The image of this computer program as the key to our technologi-
cal leadership is what drives the interplay between people like ourselves and
the media and force a reaction from Congressmen.”64
A simple, idealistic linear model of science communication helps very little
in understanding these interlinked transformations in media coverage of
science, scientific lobbying, and the shaping of public perceptions of science
as a whole and of specific projects in particular. Indeed,
the literature on the subject, the scientific/media interface, seems to revolve
around four main themes:
63. Nelkin, Selling Science, p. 6.
64. Ibid., p. 125, quoted from the New York Times, March 16, 1985.
operations and their doubtful ethics and legality. But the change after 2003
has been a more profound one, focusing not only on surveillance, citizen
rights, and ethics of certain operations but on the core of the intelligence pro-
cess—the quality, trustworthiness, and relevance of the intelligence flow and
subsequent assessments. Intelligence has not only come in from the cold but
has entered the heat of media scrutiny, in many ways far more ill-prepared than
contemporary scientists. With this said, the four themes listed earlier have
their obvious equivalents in intelligence:
On December 3, 2010, the Norwegian Security Service (PST) received a list from
the Customs Directorate with names of 41 people who had purchased products
on the Internet from the Polish firm Likurg, which specialized in chemical sub-
stances and pyrotechnics. The list received little attention; no check was made
against the security service files or other sources in subsequent months, and the
list was still awaiting processing when a twin terrorist attack struck the Oslo area
on July 22, 2011. Only afterward was it discovered that one of the names on the
list was that of Anders Behring Breivik, the arrested perpetrator of the attacks that
claimed 77 lives in a massive car bombing of government headquarters in central
Oslo and a subsequent systematic massacre of defenseless teenagers at a political
youth camp on the tiny island of Utøya. The Likurg list was thereby predestined to
attain the same status and provoke the same counterfactual disputes as the pre-9/11
FBI Phoenix memo and the pre-Pearl Harbor Japanese “Winds” warning cable.
1. The terrorist attacks and the efforts of the Norwegian authorities to prevent and handle the consequences were investigated by a government commission, the 22 July Commission, presenting its findings in August 2012. The main report is in Norwegian, but there is an abridged summary in English. References here are to the main report, NOU 2012:14, available at http://www.22julikommisjonen.no/. The 22 July Commission, “Findings of the 22 July Commission” (Oslo, 2012).
to initiate a global effort to monitor and control the transfers of substances and
components that could be used for constructing improvised explosive devices
(IEDs), a threat that was spreading from the conflict areas in the Middle East
and Central Asia. Norway was far out on the periphery but had nevertheless
suffered losses to IED attacks against its International Security Assistance
Force (ISAF) troops in Afghanistan.
The invitation to take part in Global Shield was also forwarded to the PST,
but here the invitation was mistaken for merely a piece of general information
from colleagues in the customs service. The officer at the Customs Directorate
assigned to the project quickly realized that a full check of the substances on
the Global Shield list would be impossible, due to the large quantities of fertil-
izers handled by the leading Norwegian company. Having only limited time to
devote to the project, the officer, still wanting to make a contribution to proac-
tive counterterrorism, looked for something more manageable to investigate.
One way of limiting the data was to exclude the companies and look at private
imports alone, based on the presumption that it was here that suspicious
transactions could be found. Private imports were also relatively easy to separate
from the rest. A first list, containing eight names, was found and sent to the
PST in October. This first batch of names was checked, but no matches were
found with names in the Security Service files.2
At the Customs Directorate the officer in charge continued his efforts. In
information that subsequently became available in the Global Shield database he found
that a Norwegian citizen had imported some of the monitored substances
from what appeared to be a private person in Poland. Looking closer into this,
the officer discovered that this in fact was a company named Likurg, with a
homepage in Polish, English, and Norwegian. Suspecting that there could be
more transactions, he checked against the Norwegian currency register and
found 63 transactions by 41 individuals. This was the Likurg list forwarded
to the PST in early December 2010. The list only contained information on
the currency transactions, not the nature of the purchases. Anders Behring
Breivik’s transaction was the smallest amount on the list, 122 Norwegian
crowns (equal to less than 20 USD). Breivik had started his systematic pur-
chases of bomb components and associated equipment in October 2010,
and it was a pure coincidence that his single purchase from Likurg made in
November happened to coincide with the customs officer’s check in the cur-
rency registry. The item itself, however, was a critical component—the fuse
that eventually was used to detonate the 950 kg car bomb on July 22, 2011.
After the first check in October, the PST did not take any further action,
and the subsequent discovery of the Likurg transaction provoked no response.
2. Ibid., p. 374.
Beyond the Divide 185
The reason was a combination of heavy workload and the internal inability to
decide whether the case belonged to the counter-proliferation branch or the
counterterrorism branch of the security service. The case was finally given to
an arms and weapons intelligence analyst at the counterterrorism branch in
late April 2011, but by that time the analyst was due for a long leave. When the
bomb exploded, the documents from December 2010 were still unprocessed.3
After the attack, the PST was faced with the awkward question of assessing
the significance of the unprocessed list. With hindsight it was inevitable that
the potential significance of the December 3 list, with the future perpetrator’s
name on it, should be a crucial point in a postmortem. Would it have been
possible to prevent the successful preparation and subsequent execution of
the attacks with a more vigilant stance, if available leads had been pursued
and the right questions posed? In a self-evaluation by the PST, published in
March 2012, the service devoted considerable effort to show that even with
a timely analysis of the material from Global Shield the preparations under
way by Breivik would not have been uncovered. He had no Security Service
file, no significant criminal record, and no deviating activities on the Internet.
Besides that, the Security Service was prevented from taking any action
against an individual if there were no specific hypothesis that motivated such
a move.4
Would a more scientific approach to this specific intelligence challenge have
made any difference? And would scientists have been more capable of maneu-
vering through the fog of uncertainty surrounding weak signals? No doubt the
PST, like so many other intelligence and security services in similar predica-
ments, was simply unfortunate in getting tangled up in the all-too-well-known
interaction of information overload, compartmentalization, and legal caveats.
After the attack it was left fending off criticism, using the familiar lines that the
list would not have mattered anyway, given the standard operating procedures
and things being as they were. But a scientific analysis would have been faced
with the same kinds of blind spots and barriers; the same friction between dis-
ciplines, institutions, and research groups; the same prevalence of dominating
paradigms resulting in the same uphill battle for anyone trying to pursue a lead
that by definition was a dead end, provided that the entire set of established
theories was not invalid and that the impossible in fact was the most probable
3. Ibid., p. 378.
4. The PST (Norwegian Security Service) writes: “3 December 2011 there was, according to our assessment, no valid ground for initiating a full entry on Anders Behring Breivik and the other 40 persons on the Customs Directorate list. There was no working hypothesis to relate such an entry to, and the entry in itself would not have met the standards for relevance and purpose” (our translation). PST Evalueringsrapport (PST Evaluation Report) (Oslo: PST, March 16, 2012), p. 23.
explanation. If anything deviated from the expected pattern, it was not the
failure of the PST to follow an odd lead but rather the fact that this odd lead was
discovered at all. The search was inventive—and in some respects possibly legally
doubtful—independently looking for something the Customs official could
not imagine, driven as he later explained to the commission by a combination
of duty and curiosity.
As we discussed in chapter 3, the positivist “Jominian” approach is to
eliminate uncertainty and to find the right answer, while a “Clausewitzian”
approach accepts uncertainty as the basic precondition, be it in war, intelli-
gence assessment, or scientific inference. The 122 Norwegian crown purchase
on the Likurg list was from the Jominian perspective insignificant informa-
tion without any meaningful bearing on the prevailing threat paradigm and
the body of information and assessments that constituted it. The major error
was perhaps not overlooking the list or even failing to discover the system-
atic preparations for the attack, but the lack of any appreciation that ambigu-
ous pieces of critical information were bound to be overlooked, and that the
systematic efforts of the intelligence machinery to eliminate uncertainty in
fact transformed it into a mode virtually primed for disaster. Ignorance, as
Mark Phythian notes, is different from uncertainty, where we know that there
is something we have incomplete or inaccurate knowledge of. Ignorance is
the unknown unknown, the fact of not knowing what we don’t know. It is
boundless and virtually leaves us in the dark.5
Every knowledge-producing system faced with assessments of probabili-
ties and risk, whether in intelligence, civil contingency, cross-disciplinary
research, or investigative journalism, is likely to fail from time to time when
faced with deviation from known or expected patterns. But for the Likurg
list, things looked different. The only possible outcome was bound to be
refutation through lack of confirmation. The system had, as the PST possibly
unintentionally disclosed in its self-evaluation, effectively closed every
door for another outcome: only a successful terrorist attack would alter the
track record–based assessment that domestic right-wing extremism did not
pose a threat to the state and its institutions.6 Crucial to the fate of the Likurg
list was thus not the inability to take action or the methods to analyze and
draw inferences from the available information, but something far more fun-
damental—the lack of a hypothesis. The list was an answer without a ques-
tion, and what was worse, without even a remote possibility of framing such
5. Phythian, “Policing Uncertainty: Intelligence, Security and Risk,” Intelligence and National
a question. Facing the unthinkable, the unknown unknown, was in this case
simply unmanageable.7
As Robert Jervis found in his internal postmortem on the failed predictions
of the Shah’s downfall in 1978, the CIA analysts thought they were operating
with a testable hypothesis on regime stability, while in fact the only way
to refute the assessment of stability was the actual fall of the regime, a con-
tingency lying outside the prevailing paradigm. Faced with these methods to
combat uncertainty, any black swan is bound to be not only undetected but
also virtually undetectable. No perpetrator or hostile actor is safer than the
one operating in this dead zone of intelligence, where only bad luck in the form
of the upside of Clausewitzian friction could change the otherwise inevitable
march toward disaster for the victim, a situation that by definition is
characterized by the sudden appearance of a low probability–high impact event
and the breakdown of cognitive structures and frames of reference.8 Knowledge here
tends to become a bubble, a body of more or less solid information and threat
assessments, but one surrounded by an undefined entity of ignorance, of the
unknown unknown, or perhaps even unknowable unknown. Without entering
into the counterfactual blame game so tempting in cases like this, it is hard
to avoid the conclusion that the main fallacy prior to the July 22 attacks was
not the lack of adequate information or the limits of intelligence estimates, but
something far more fundamental: the invisibility of a potentially disastrous
ignorance of emerging threats as such.
7. For a further discussion of intelligence aspects, see Wilhelm Agrell, “The Black Swan and Its Opponents: Early Warning Aspects of the Norway Attacks on July 22, 2011” (Stockholm: National Defence College, Center for Asymmetric Threat Studies, 2013).
8. For the chaotic nature of crisis, see Arjen Boin et al., The Politics of Crisis Management: Public Leadership under Pressure (Cambridge: Cambridge University Press, 2005), pp. 2 ff. Crises can be defined as largely improbable events with exceptionally negative consequences; see Tom Christensen, Per Lægreid, and Lise H. Rykkja, “How to Cope with a Terrorist Attack?—A Challenge for Political and Administrative Leadership,” COCOPS Working Paper No. 6 (2012), p. 4, available at http://www.cocops.eu/publications/working-papers.
9. For a nice primer on activity-based intelligence, see Kristin Quinn, “A Better Toolbox,” TrajectoryMagazine.com, 2012.
10. “Activity Based Intelligence Knowledge Management,” USD(I) Concept Paper (Washington: Undersecretary of Defense, Intelligence, draft, 2011).
that we know what we’re looking for. The process begins with “requirements.”
By contrast, ABI assumes we don’t know what we’re looking for. The cycle
also assumes linearity. Sure, steps get skipped, but the basic model is linear.
Start with requirements, then collect, then process, then analyze, and ulti-
mately disseminate. That sequence also assumes that what is disseminated is
a “product”; it tends to turn intelligence into a commodity. Even the best
writing on intelligence, from Roberta Wohlstetter’s book on World War II through
the wonderfully written 9/11 report, is linear: “dots” were there, if only they
had been connected. But think about how non-linear most thought is, and
surely most creativity. This is what, in ABI, is called “sequence neutrality.” In
intelligence, as in life, we often may solve a puzzle before we realize the puzzle was
in our minds. Or to return to the analogy with medicine, notice how many
drugs were discovered by accident to be effective for one condition when they
were being used for something else entirely. The discovery was an accident,
hardly the result of a linear production function. Or as in the Likurg case, that
list may appear before the puzzle to which it is the answer is—or perhaps can
be—framed.
Perhaps most important and least helpful, traditional intelligence and the
cycle give pride of place to collection. Collection drives the entire process: there
is nothing to analyze if nothing is collected against the requirements that have
been identified. When analysis or postmortems reveal an intelligence “gap,”
the first response is to collect more to fill that gap. But the world is full of data.
Any “collection” is bound to be very selective and very partial. Perhaps slightly
paradoxically, ABI reminds us that data are not really the problem. Data are
everywhere, and getting more so. China has, by one estimate, one camera for
every 43 people. Data are ubiquitous, whether we like it or not. Privacy, as it
was traditionally conceived, is gone, or going, again for better or worse.
And not only does intelligence privilege collection, it still privileges infor-
mation it collects, often from its special or secret sources. As noted earlier,
the founding fathers of intelligence in the United States (alas, there were few
mothers aside from Roberta Wohlstetter) worried, as Kent put it, that the con-
centration on clandestine collection would deform open collection. And so it
has, in spades. For ABI, by contrast, all data are neutral. There are no “reliable
sources”—or questionable ones. Data only become “good” or “bad” when used.
This neutrality of data is, not surprisingly, perhaps the most uncomfortable chal-
lenge of ABI to the traditional paradigm of intelligence, for so much of the cycle
has been built around precisely the opposite—evaluating information sources
for their reliability. ABI here rather resembles post-modern social science and
humanities dealing with discourse analysis, perceptions, and symbols.
At this point, ABI intersects with “big data.” So much of traditional intel-
ligence analysis has been very limited in information, assuming that only
“exquisite sources” matter. It has developed neither the method nor the
practice of looking for and dealing with large amounts of data, often data that
are tangential, partial, or biased in unknown ways. There is lots of information
out there that doesn’t get used. The promise of “big data” may turn out to
be the analytic equivalent of the technology bubble. But the proposition that
there is lots of it out there to be incorporated is apt. So is the admonition to col-
lectors and analysts that things they see now, without relevance to their cur-
rent concerns, may be important in the future. To be sure, that admonition not
only requires an enormous change in culture from the view that only informa-
tion relevant to my hypothesis matters, but it also imposes a huge knowledge
management challenge. And that, too, is a piece of breaking the paradigm, for
traditional intelligence analysts mostly didn’t conceive of themselves in the
knowledge management business. Imagine a bumper sticker: “big data can
empower non-linear analytics.”
Two other elements of ABI are suggestive in opening the paradigm. ABI
turns on correlation: looking at patterns of activity to determine pattern of life
and so networks. It is agnostic about causation, looking first for how things fit
together. But the idea of correlation also opens up traditional analysis, for the
linear process can all too easily lead to linear thinking—“drivers” impute causa-
tion. But causation in human affairs is a very complicated affair. Imagine if ana-
lysts grew up in a world of correlations—indicators—not drivers that imputed
causation, where none usually obtains. They might be attuned to connections,
perhaps even ones not natural to analysis, or to life. The idea that a street vendor in
Tunisia who set himself on fire might in turn propel activists in Egypt to action
surely wouldn’t occur to them unprompted but might be less easily dismissed out of hand.
Activity-based intelligence is also inherently collaborative, a challenge to
the practice of intelligence analysis if not necessarily to the paradigm. It is dis-
tinguished from traditional pattern analysis mostly in that while patterns could
be uncovered by information from several intelligence sources, or INTs, most of
the time a single INT was good enough: recall those Soviet T-72 tanks whose
signature was visible to imagery. But multi-INT or ABI has required analysts to
work closely together, often in real time. It has made them realize not only that
other disciplines can contribute to solving their problem but educated them in
how those disciplines work. In a few cells, analysts from different disciplines,
often separated geographically, have employed new tools of social media, like chat
rooms and Intellipedia, to exchange information in real time and build a picture
of what is happening around, for instance, the launch of a North Korean missile.
So imagine a “might have been” and a “might be.” With regard to the Arab
spring, it was hardly a secret that the countries involved were unstable. A gen-
eration of work by the CIA, other services, and their academic counterparts
had created indicators of political instabilities, which were then turned into
lists of countries to watch. Those lists tended to be noted by policy officers but were
never found very helpful because they could not specify when or how explo-
sions might happen. The point was a fair one, for from the perspective of tradi-
tional intelligence, the idea of a street vendor setting himself on fire in Tunisia
and the action leading to uprising in a half dozen other countries was a black
swan indeed. Had anyone suggested it, the suggestion would have fallen in the
“outrageous” category and been dismissed as such. It would not, perhaps could
not, have been recognized as the answer to yet another puzzle not yet framed.11
But what if? What if analysts had been looking for correlations, not
causation? Suppose they had been monitoring Twitter or other open sources, and
laying those against other sources both open and secret. The process could
hardly have been definitive. But it might have provided hints, or gotten ana-
lysts to suggest other things to look at. At a minimum, the process might
have helped to weaken the power of mindset—in this case, for instance, that
nothing in Tunisia could influence what happened in Egypt. It might have
been a way to weaken the grip of mindset, by keeping the focus on data.
In the category of “might be,” consider China’s future. It has long
been assumed that, sooner or later, growth would slow and that the social
change driven by economic growth would lead to political change.12 Now,
there is a good argument, though hardly proof, that the timing is “sooner.”
Demographics is one powerful factor. China will soon fall off the demographic
cliff, as the fruits of the one-child policy mean a rapidly aging population.
And critically, China will begin to age while it is still relatively poor, and
thus growing old for the Chinese will not be like aging in Japan or Europe.
Moreover, for both economic growth and political change, the key threshold
is a per capita income of $17,000 per year in 2005 purchasing power
parity.13 That level will be reached in 2015 if China continues to grow at 9–10 percent
annually, in 2017 if growth slows to 7 percent. Again, correlations are not
predictions, but no non-oil-rich country with a per capita income over that level
is ranked as other than “free” or “partly free” by Freedom House, while China is
now deep in the “not free” category.14 As China has gotten richer, its people have
become less inhibited in raising their voices, and the Internet provides them
11. For the shortcomings of a traditional intelligence approach in the face of social changes like the Arab Spring, see Eyal Pascovich, “Intelligence Assessment Regarding Social Developments: The Israeli Experience,” International Journal of Intelligence and CounterIntelligence 26, no. 1 (2013): 84–114.
12. This section draws on Gregory F. Treverton, “Assessing Threats to the U.S.,” in Great Decisions 2013 (New York: Foreign Policy Association, 2012), pp. 101ff.
13. Henry S. Rowen, “Will China’s High Speed Ride Get Derailed?,” unpublished paper, 2011.
14. Henry S. Rowen, “When Will the Chinese People Be Free?,” Journal of Democracy 18, no. 3 (2007): 38–52.
192 National Intelligence and Science
15. Barry Eichengreen, Donghyun Park, and Kwanho Shin, “When Fast Growing Economies Slow Down: International Evidence and Implications for China,” National Bureau of Economic Research Working Paper 16919 (2011), available at http://www.nber.org/papers/w16919.
16. Charles Wolf et al., Fault Lines in China’s Economic Terrain, MR-1686-NA/SRF ed. (Santa Monica, CA: RAND Corporation, 2003).
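The timing claim above is a simple compound-growth calculation: given a starting per capita income and a growth rate, count the years until the $17,000 threshold is crossed. The sketch below reproduces that arithmetic; the base figure of roughly $11,000 per capita (2005 PPP) around 2010 is our illustrative assumption, not a number taken from the text or from Rowen’s paper.

```python
import math

def years_to_threshold(base_income: float, threshold: float, growth_rate: float) -> int:
    """Whole years of compound growth needed for income to reach the threshold."""
    return math.ceil(math.log(threshold / base_income) / math.log(1.0 + growth_rate))

# Assumed base: ~$11,000 per capita (2005 PPP) around 2010 -- illustrative only.
BASE_2010 = 11_000
THRESHOLD = 17_000

print(2010 + years_to_threshold(BASE_2010, THRESHOLD, 0.095))  # 9-10% growth: 2015
print(2010 + years_to_threshold(BASE_2010, THRESHOLD, 0.07))   # 7% growth: 2017
```

With those assumptions the threshold is crossed in 2015 at 9.5 percent growth and in 2017 at 7 percent, matching the dates in the text; a different base year or base income would shift the dates accordingly.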
dualism. Strategic intelligence, the ongoing work in the West on the giant puzzles
of Soviet capacity, was driven by the requirement to gradually reduce a recurring
uncertainty. Current intelligence, during the years of boredom, validated
the puzzle; in the seconds of terror it forced urgent reassessment and crisis
management. In the major armed conflicts, urgency dominated. Policymakers,
headquarters, and field commanders often had to take rapid decisions based
on incomplete or ambiguous intelligence, for better or worse relying on their
own judgment. That intelligence veterans tended to describe Cold War
intelligence as dull and over-bureaucratized compared to World War II did not
just reflect a preference for glorifying past times. It was accurate.
The war on terror, one affecting virtually every country and in some
respects every individual, altered the entire setting for national security
intelligence. The puzzle, if there ever had been one, was shattered, no longer
even imaginable. Without a stable over-arching threat, puzzles could simply be
laid and re-laid in an endless process that nevertheless did not reflect the
complexity and interactive character of the threat.
This profound change from puzzles and validated threat assessments to the
realm of complexities was heralded by the rise of the international terrorist
threat. But post-9/11 terrorism was not the sole cause, only the most visible
aspect of a wider undercurrent that forced intelligence analysts, technical and
scientific expertise, and national leaders into a new terrain of crisis decision
making, one characterized by an immense real-time flow of information, vastly
complex issues, and a consequent urgency both to comprehend and to react to
rapidly unfolding events. How could knowledge production and expert advice
be reshaped and framed to function under such contingencies? Or is that task
simply impossible, with intelligence becoming instead a means of trying to
shore up confidence and legitimize the decisions, or non-decisions, taken for
other reasons in the public arena?
There was no specific emergency pressing the Norwegian Security Service
prior to the July 22 attacks, only the all-too-well-known volume of routine
work and ongoing criminal investigations that made any new input unwanted
and unwelcome.17 The circumstances were entirely different in this respect
when the WHO and national health authorities around the world were faced
with the renewed outbreak and rapid spread of swine flu (H1N1) in the
spring of 2009. In the first outbreak, in 1976, the leading scientists overstated the
17. An external review of the Norwegian Security Service (PST), led by Ambassador Kim Traavik, concluded that the service had to a high degree concentrated on ongoing matters and lacked attention to things outside this sphere. The PST had, in the view of the investigation, been unable to see the broader picture and how it was transforming. “Ekstern Gjennomgang Av Politiets Sikkerhetstjeneste. Rapport Fra Traavikutvalget” (Oslo: Justis- og Beredskapsdepartementet, Dec. 3, 2012).
nation med Pandemrix, Medical Production Agency, Uppsala, March 26, 2013.” The study incidentally did not show any increase in the kinds of side effects reported from the 1976 US mass vaccination.
19. Betts, Enemies of Intelligence: Knowledge and Power in American National Security (New York: Columbia University Press, 2007), pp. 122ff.
20. Ulrich Beck, Risk Society: Towards a New Modernity (London: Sage, 1992). The book was originally published in German under the title Risikogesellschaft: Auf dem Weg in eine andere Moderne (Frankfurt: Suhrkamp, 1986). For a discussion of the impact of Beck’s concept, see Joris Hogenboom, Arthur P. J. Mol, and Gert Spaargaren, “Dealing with Environmental Risk in Reflexive Modernity,” in Risk in the Modern Age: Social Theory, Science and Environmental Decision-Making, ed. Maurie J. Cohen (Basingstoke: Palgrave, 2001).
avoid the conclusion that the concept, nearly three decades later, has become a
reality in terms of public perceptions and policy priorities. We have gone from
external and internal threats to national security to a broad matrix of societal
risks with varying assumed probabilities and impacts. The cases of intelligence
and scientific assessment we have dealt with in this book gained much
of their significance in relation to this emerging awareness of risk management
as one of the key roles of society.
The Nobel laureates at the 2012 round table were, as discussed in the introduction,
obviously very much aware of this, and of the credibility gap looming
between the public’s expectations and the experts’ understanding of the nature
and perception of uncertainty. The experts could be tempted to oversell assurances,
as happened with the Fukushima disaster, summarized by medicine
laureate Shinya Yamanaka: all the experts kept saying nuclear energy was safe,
but it turned out this was not the case. The disaster not only struck the nuclear
power plant and the society in the contaminated area; it also destroyed
faith in scientific competence and impartiality, and in the ability of the experts
to manage risks. Scientific expertise time and again faces the awkward dilemma
of balancing expectations against uncertainty, and the disastrous consequences
of a loss of public trust. Transparency was, in Yamanaka’s opinion, the
answer to the credibility loss caused by the Fukushima meltdown; the experts
had to be as honest and open as possible and deliver their message in words
that laypeople could understand. Intelligence veteran Michael Herman’s
earlier quoted comment described the public fallout of the Iraqi WMD
intelligence disaster in a similar way, underlining the elements of expert deception
and loss of public trust.
Turning intelligence into science, while certainly useful to some extent and
in some instances, nevertheless misses the point that several of the fundamental
challenges are neither specific to intelligence nor more successfully solved
in the scientific domain. The essence of intelligence is hardly any longer the
collection, analysis, and dissemination of secret information, but rather the
management of uncertainty in areas critical to societies’ security goals.
That definition is also valid for the increasingly important fields of both applied
and urgent basic research created by the emerging risk society. We are living
in a social environment pervaded by growing security and intelligence
challenges, while at the same time the traditional narrow concept of intelligence
is becoming increasingly insufficient for coping with diffuse, complex, and
transforming threats. Looking ahead, the two 20th-century monoliths, the
scientific and the intelligence estates, are simply becoming outdated in their
traditional form. The risk society is closing the divide, though in a way, and
from a direction, not foreseen by the proponents of turning intelligence analysis
into a science.
21. Intergovernmental Panel on Climate Change, Working Group 1, “Fifth Assessment Report, Chapter 1,” pp. 17–19, available at http://www.ipcc.ch/report/ar5/wg1/.
200 Bibliography
Barnaby, Frank, and Ronald Huisken. Arms Uncontrolled. Cambridge, MA: Harvard University
Press, 1975.
Bauer, M. Resistance to New Technology: Nuclear Power, Information Technology and
Biotechnology. Cambridge: Cambridge University Press, 1994.
Beck, Ulrich. Risk Society: Towards a New Modernity. London: Sage, 1992.
Bernard, H. Russell. Handbook of Methods in Cultural Anthropology. Walnut Creek,
CA: AltaMira Press, 1998.
Bernard, H. Russell. Research Methods in Anthropology: Qualitative and Quantitative
Approaches. Walnut Creek, CA: AltaMira Press, 2006.
Bernard, H. Russell, Peter Killworth, David Kronenfeld, and Lee Sailer. “The Problem of
Informant Accuracy: The Validity of Retrospective Data.” Annual Review of Anthropology
13 (1984): 495–517.
Bernard, H. Russell, Pertti J. Pelto, Oswald Werner, James Boster, A. Kimball Romney, Allen
Johnson, Carol R. Ember, and Alice Kasakoff. “The Construction of Primary Data in
Cultural Anthropology.” Current Anthropology 27, no. 4 (1986): 382–396.
Bernheim, Ernst. Lehrbuch der historischen Methode und der Geschichtsphilosophie. Leipzig:
Duncker & Humblot, 1908.
Betts, Richard K. Enemies of Intelligence: Knowledge and Power in American National Security.
New York: Columbia University Press, 2007.
Betts, Richard K. “Two Faces of Intelligence Failure: September 11 and Iraq’s Missing WMD.”
Political Science Quarterly 122, no. 4 (2007): 585–606.
Bimber, Bruce. The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology
Assessment. Albany: State University of New York Press, 1996.
Blight, James G., and David A. Welch. Intelligence and the Cuban Missile Crisis. London: Frank
Cass, 1998.
Bohn, Roger E., and James E. Short. How Much Information? 2009: Report on American
Consumers. San Diego: Global Information Industry Center, University of California,
2009.
Boin, Arjen, Paul’t Hart, Eric Stern, and Bengt Sundelius. The Politics of Crisis
Management: Public Leadership under Pressure. Cambridge: Cambridge University Press,
2005.
Buck, Peter. “Adjusting to Military Life: The Social Sciences Go to War 1941–1950.” In Military
Enterprise and Technological Change: Perspectives on the American Experience, edited by
Merritt Roe Smith. Cambridge, MA: MIT Press, 1985.
Bumiller, Elisabeth. “Gates on Leaks, Wiki and Otherwise.” New York Times (November 30,
2010).
Bush, George W. Decision Points. New York: Crown, 2011.
Bush, Vannevar. “Science: The Endless Frontier.” Transactions of the Kansas Academy of Science
(1903–) 48, no. 3 (1945): 231–264.
Butler Commission. Review of Intelligence on Weapons of Mass Destruction. London: Stationery
Office, 2004.
Bynander, Fredrik. The Rise and Fall of the Submarine Threat: Threat Politics and Submarine
Intrusions in Sweden 1980–2002. Uppsala: Uppsala Acta Universitatis Upsaliensis, 2003.
Calhoun, Mark T. “Clausewitz and Jomini: Contrasting Intellectual Frameworks in Military
Theory.” Army History 80 (2011): 22–37.
Carson, Rachel. Silent Spring. Boston: Houghton Mifflin, 1962.
Cassidy, David C. Uncertainty: The Life and Science of Werner Heisenberg. New York: W.
H. Freeman, 1992.
Center for the Study of Intelligence. At Cold War’s End: US Intelligence on the Soviet Union and
Eastern Europe, 1989–1991. Washington, DC: Central Intelligence Agency, 1999.
Christensen, Tom, Per Lægreid, and Lise H. Rykkja. “How to Cope with a Terrorist Attack?—a
Challenge for Political and Administrative Leadership.” In COCOPS Working Paper No 6,
pp. 4, 2012, http://www.cocops.eu/publications/working-papers.
Church Committee Report. Foreign and Military Intelligence: Book 1: Final Report of the
Select Committee to Study Governmental Operations with Respect to Intelligence Activities.
Washington: United States Senate, 1976.
Clarke, D. “Archaeology: The Loss of Innocence.” Antiquity 47, no. 185 (1973): 6–18.
Cohen, Maurie J. Risk in the Modern Age: Social Theory, Science and Environmental
Decision-Making. Basingstoke: Palgrave, 2000.
Commission of the European Communities. Directorate-General for Agriculture. European
Community Forest Health Report 1989: Executive Report. Luxembourg: Office for Official
Publications of the European Communities, 1990.
Committee on Behavioral and Social Science Research to Improve Intelligence
Analysis for National Security; National Research Council. Intelligence Analysis for
Tomorrow: Advances from the Behavioral and Social Sciences. Washington, DC: National
Academies Press, 2011.
Darling, Arthur. “The Birth of Central Intelligence,” Sherman Kent Center for the Study of
Intelligence, www.cia.gov/csi/kent_csi/docs/v10i2a01p_0001.htm.
Davies, Phillip H. J. Intelligence and Government in Britain and the United States, Volume 1: The
Evolution of the U.S. Intelligence Community. Santa Barbara, CA: Praeger, 2012.
Davies, Phillip H. J. “Intelligence Culture and Intelligence Failure in Britain and the United
States.” Cambridge Review of International Affairs 17, no. 3 (2004): 495–520.
Davies, Phillip H. J., and Kristian C. Gustafson. Intelligence Elsewhere: Spies and Espionage
Outside the Anglosphere. Washington, DC: Georgetown University Press, 2013.
Dessants, Betty Abrahamsen. “Ambivalent Allies: OSS’ USSR Division, the State Department,
and the Bureaucracy of Intelligence Analysis, 1941–1945.” Intelligence and National
Security 11, no. 4 (1996): 722–753.
Deutscher, Irwin. “Words and Deeds: Social Science and Social Policy.” Social Problems 13
(1965): 235.
Dewar, James, Carl H. Builder, William M. Hix, and Morlie H. Levin. Assumption-Based
Planning: A Planning Tool for Very Uncertain Times. Santa Monica, CA: RAND
Corporation, 1993.
“Dommen over Gleditsch Og Wilkes. Fire Kritiske Innlegg.” Oslo: Peace Research Institute
of Oslo, 1981.
Donaldson, Bill, et al. “MITRE Corporation: Using Social Technologies to Get Connected,”
Ivey Business Journal, June 13, 2011.
Dover, Robert, and Michael S. Goodman, eds. Spinning Intelligence: Why Intelligence Needs the
Media, Why the Media Needs Intelligence. New York: Columbia University Press, 2009.
Drapeau, Mark, and Linton Wells II. Social Software and National Security: An Initial Net
Assessment. Washington, DC: Center for Technology and National Security Policy,
National Defense University, 2009.
Drogin, Bob. Curveball: Spies, Lies, and the Con Man Who Caused a War. New York: Random
House, 2007.
Duelfer, Charles A. Comprehensive Report of the Special Advisor to the DCI on Iraq’s WMD.
Washington, September 30, 2004, https://www.cia.gov/library/reports/general-reports-1/
iraq_wmd_2004/
Duelfer, Charles A., and Stephen Benedict Dyson. “Chronic Misperception and International
Conflict: The US-Iraq Experience.” International Security 36, no. 1 (2011): 73–100.
Dulles, Allen. The Craft of Intelligence. New York: Harper and Row, 1963.
Eichengreen, Barry, Donghyun Park, and Kwanho Shin. “When Fast Growing Economies
Slow Down: International Evidence and Implications for China.” National Bureau of
Economic Research Working Paper 16919 (2011), http://www.nber.org/papers/w16919.
Ekman, Stig. Den Militära Underrättelsetjänsten. Fem Kriser under Det Kalla Kriget (The Military
Intelligence. Five Crises during the Cold War). Stockholm: Carlsson, 2000.
“Ekstern Gjennomgang Av Politiets Sikkerhetstjeneste. Rapport Fra Traavikutvalget.”
Oslo: Justis- Og Beredskapsdepartementet, December 3, 2012.
European Peace Movements and the Future of the Western Alliance. Edited by Walter Laqueur and
R. E. Hunter. New Brunswick, NJ: Transaction Books, 1985.
Feinstein, Alvan R. “The ‘Chagrin Factor’ and Qualitative Decision Analysis.” Archives of
Internal Medicine 145, no. 7 (1985): 1257.
Fingar, Thomas. Reducing Uncertainty. Stanford, CA: Stanford University Press, 2011.
Fjaestad, Björn. “Why Journalists Report Science as They Do.” In Journalism, Science and
Society: Science Communication between News and Public Relations, edited by Martin
W. Bauer and Massimiano Bucchi. New York: Routledge, 2007.
Flores, Robert A., and Joe Markowitz. Social Software—Alternative Publication @ NGA.
Harper’s Ferry, VA: Pherson Associates, 2009.
Flynn, Michael. Fixing Intel: A Blueprint for Making Intelligence Relevant in Afghanistan.
Washington, DC: Center for a New American Security, 2010.
Foreign Relations of the United States, 1945–1950: Emergence of the Intelligence Establishment.
Washington, DC: United States Department of State, 1996.
Forskning eller spionasje. Rapport om straffesaken i Oslo Byrett i mai 1981 (Research or Espionage.
Report on the Criminal Trial in Oslo Town Court in May 1981). Oslo: PRIO, 1981.
Frankel, Henry. “The Continental Drift Debate.” In Scientific Controversies: Case Studies in the
Resolution and Closure of Disputes in Science and Technology, edited by Hugo Tristram
Engelhardt and Arthur Leonard Caplan. Cambridge: Cambridge University Press, 1987.
Freedman, Lawrence. “The CIA and the Soviet Threat: The Politicization of Estimates, 1966–
1977.” Intelligence and National Security 12, no. 1 (1997): 122–142.
Friedman, Jeffrey A., and Richard Zeckhauser. “Assessing Uncertainty in Intelligence.”
Intelligence and National Security 27, no. 6 (2012): 824–847.
Garthoff, Raymond L. “U.S. Intelligence in the Cuban Missile Crisis.” In Intelligence and the
Cuban Missile Crisis, edited by James G. Blight and David A. Welch. London: Frank Cass,
1998.
Gates, Robert M. “Guarding against Politicization.” Studies in Intelligence 36, no. 1 (1992): 5–13.
Gazit, Shlomo. “Intelligence Estimates and the Decision-Maker.” In Leaders and Intelligence,
edited by Michael I. Handel. London: Frank Cass, 1989.
George, Roger Zane. “Beyond Analytic Tradecraft.” International Journal of Intelligence and
CounterIntelligence 23, no. 2 (2010): 296–306.
German, Michael, and Jay Stanley. What’s Wrong with Fusion Centers? New York: American
Civil Liberties Union, 2007.
Gibbons, Michael, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott, and
Martin Trow. The New Production of Knowledge: The Dynamics of Science and Research in
Contemporary Societies. London: Sage, 1994.
Gieryn, Thomas F. “Boundary-Work and the Demarcation of Science from Non-Science: Strains
and Interests in Professional Ideologies of Scientists.” American Sociological Review 48,
no. 6 (1983): 781–95.
Gill, David W. J. “Harry Pirie-Gordon: Historical Research, Journalism and Intelligence
Gathering in Eastern Mediterranean (1908–18).” Intelligence and National Security 21,
no. 6 (December 2006): 1045–1059.
Gill, Peter, Stephen Marrin, and Mark Phythian. Intelligence Theory: Key Questions and Debates.
London: Routledge, 2009.
Gleditsch, Nils Petter. “Freedom of Expression, Freedom of Information, and National
Security: The Case of Norway.” In Secrecy and Liberty: National Security, Freedom of
Expression and Access to Information, edited by Sandra Coliver et al. Haag: Nijhoff, 1999,
pp. 361–388.
Gleditsch, Nils Petter, Sverre Lodgaard, Owen Wilkes, and Ingvar Botnen. Norge i atomstrat-
egien. Atompolitikk, alliansepolitikk, basepolitikk (Norway in the Nuclear Strategy. Nuclear
Policy, Alliance Policy, Base Policy). Oslo: PAX, 1978.
Gleditsch, Nils Petter, and Owen Wilkes. Forskning om etterretning eller etterretning som
forskning. Oslo: Peace Research Institute of Oslo, 1979.
Gleditsch, Nils Petter, and Owen Wilkes. Intelligence Installations in Norway: Their Number,
Location, Function, and Legality. Oslo: International Peace Research Institute, 1979.
Gleditsch, Nils Petter, and Einar Høgetveit. “Freedom of Information and National Security:
A Comparative Study of Norway and the United States.” Journal of Peace Research 21, no. 1
(1984): 17–45.
Landsberger, Henry A. Hawthorne Revisited: Management and the Worker, Its Critics, and
Developments in Human Relations in Industry. Ithaca: New York State School of Industrial
and Labor Relations, 1958.
Laudani, Raffaele, ed. Secret Reports on Nazi Germany: The Frankfurt School Contribution to the
War Efforts. Franz Neumann, Herbert Marcuse, Otto Kirchheimer. Princeton, NJ: Princeton
University Press, 2013.
Laqueur, Walter. “The Question of Judgment: Intelligence and Medicine.” Journal of
Contemporary History 18, no. 4 (1983): 533–548.
Laqueur, Walter. A World of Secrets: The Uses and Limits of Intelligence. London: Weidenfeld
and Nicolson, 1985.
Lefebvre, Stéphane. “A Look at Intelligence Analysis.” International Journal of Intelligence and
CounterIntelligence 17, no. 2 (2004): 231–264.
Lehner, Paul, Avra Michelson, and Leonard Adelman. Measuring the Forecast Accuracy of
Intelligence Products. Washington, DC: The MITRE Corporation, 2010.
Lennon, Michael, and Gary Berg-Cross. “Toward a High Performing Open Government.” The
Public Manager 39, no. 10 (Winter 2010). http://www.astd.org/Publications/Magazines/
The-Public-Manager/Archives/2010/10/Toward-a-High-Performing-Open-Gover
nment?.
“Lessons Learned from Intelligence Successes, 1950–2008 (U).” Kent School Occasional
Paper. Washington, DC: Central Intelligence Agency, 2010.
Liberatore, Angela. The Management of Uncertainty: Learning from Chernobyl. Amsterdam: Gordon
and Breach, 1999.
Lippmann, Walter. Public Opinion. New York: Harcourt, Brace, 1922.
Litwak, Robert S. “Living with Ambiguity: Nuclear Deals with Iran and North Korea.” Survival
50, no. 1 (2008): 91–118.
Lock, Stephan. Fraud and Misconduct in Medical Research. Edited by Frank Wells. London: BMJ
Publishing Group, 1993.
Lock, Stephan, and Frank Wells. Fraud and Misconduct: In Biomedical Research. Edited by
Michael Farthing. London: BMJ Books, 2001.
Lowenthal, Mark M. “A Disputation on Intelligence Reform and Analysis: My 18 Theses.”
International Journal of Intelligence and CounterIntelligence 26, no. 1 (2013): 31–37.
Lucas, R. E., Jr. “Econometric Policy Evaluation: A Critique.” In The Phillips Curve and Labor
Markets, Carnegie-Rochester Conference Series on Public Policy, edited by K. Brunner and
A. H. Meltzer, Vol. 1. Amsterdam: North Holland, 1976, 19–46.
Lundgren, Lars J. Acid Rain on the Agenda: A Picture of a Chain of Events in Sweden, 1966–1968.
Lund: Lund University Press, 1998.
MacEachin, D. J. Predicting the Soviet Invasion of Afghanistan: The Intelligence Community’s
Record. Washington, DC: Center for the Study of Intelligence, Central Intelligence
Agency, 2002.
Marks, J. H. “Interrogational Neuroimaging in Counterterrorism: A No-Brainer or a Human
Rights Hazard.” American Journal of Law & Medicine 33 (2007): 483.
Marquardt-Bigman, Petra. “The Research and Analysis Branch of the Office of Strategic
Services in the Debate of US Policies towards Germany, 1943–46.” Intelligence and
National Security 12, no. 2 (1997): 91–100.
Marrin, Stephen. “Best Analytic Practices from Non-Intelligence Sectors.” edited by Analytics
Institute, 2011.
Marrin, Stephen. Improving Intelligence Analysis: Bridging the Gap between Scholarship and
Practice. London: Routledge, 2011.
Marrin, Stephen. “Preventing Intelligence Failures by Learning from the Past.” International
Journal of Intelligence and CounterIntelligence 17, no. 4 (2004).
Marrin, Stephen. “Intelligence Analysis: Turning a Craft into a Profession.” International
Conference on Intelligence Analysis, Proceedings of the First International Conference on
Intelligence Analysis, McLean, VA: MITRE, May, 2005.
Marrin, Stephen, and Jonathan D. Clemente. “Improving Intelligence Analysis by Looking to
the Medical Profession.” International Journal of Intelligence and CounterIntelligence 18,
no. 4 (2005): 707–729.
Neustadt, Richard E., and Harvey V. Fineberg. “The Swine Flu Affair: Decision-Making on
a Slippery Disease.” In Kennedy School of Government Case C14-80-316. Cambridge,
MA: Harvard University.
Nickerson, Raymond S. “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.”
Review of General Psychology 2, no. 2 (1998): 175–220.
Nomination of Robert M. Gates, Hearings before the Select Committee on Intelligence of the United
States Senate. Washington, DC: US Government Printing Office, 1992, 510–511.
Northern Securities Co. v. United States, 193 U.S. 197, 400–411 (1904).
Nye, Joseph S. Jr. “Peering into the Future.” Foreign Affairs (July/August 1994): 82–93.
O’Reilly, Jessica, Naomi Oreskes, and Michael Oppenheimer. “The Rapid Disintegration of
Projections: The West Antarctic Ice Sheet and the Intergovernmental Panel on Climate
Change.” Social Studies of Science 42, no. 5 (2012): 709–731.
Olcott, Anthony. “Institutions and Information: The Challenge of the Six Vs.” Washington,
DC: Institute for the Study of Diplomacy, Georgetown University, April 2010.
Olcott, Anthony. “Revisiting the Legacy: Sherman Kent, Willmoore Kendall, and George
Pettee—Strategic Intelligence in the Digital Age.” Studies in Intelligence 53, no. 2
(2009): 21–32.
Olcott, Anthony. “Stop Collecting—Start Searching (Learning the Lessons of Competitor
Intelligence).” Unpublished paper.
Palm, Thede. Några Studier Till T-Kontorets Historia. Stockholm: Kungl. Samfundet för
utgivande av handskrifter rörande Skandinaviens historia, Vol. 21, 1999.
Pascovich, Eyal. “Intelligence Assessment Regarding Social Developments: The Israeli
Experience.” International Journal of Intelligence and CounterIntelligence 26, no. 1
(2013): 84–114.
Kelly, Patrick. “The Methodology of Ernst Bernheim and Legal History.” http://www.scribd.
com/doc/20021802/The-Methodology-of-Ern . . .
Patton, M. Q. “Use as a Criterion of Quality in Evaluation.” In Visions of Quality: How Evaluators
Define, Understand and Represent Program Quality: Advances in Program Evaluation,
edited by A. Benson, C. Lloyd, and D. M. Hinn. Kidlington, UK: Elsevier Science, 2001.
Persson, Gudrun. Fusion Centres—Lessons Learned: A Study of Coordination Functions for
Intelligence and Security Services. Stockholm: Center for Asymmetric Threat Studies,
Swedish National Defence College, 2013.
Peterson, Steven W. US Intelligence Support to Decision Making. Weatherhead Center for
International Affairs, Harvard University, July 1, 2009.
Pettee, George. The Future of American Secret Intelligence. Washington, DC: Infantry Journal
Press, 1946.
Phythian, Mark. “Policing Uncertainty: Intelligence, Security and Risk.” Intelligence and
National Security 27, no. 2 (2012): 187–205.
Platt, W. Strategic Intelligence Production: Basic Principles. New York: F. A. Praeger, 1957.
Popper, Karl Raimund. Conjectures and Refutations: The Growth of Scientific Knowledge.
London: Routledge and Kegan Paul, 1969.
Popper, Karl Raimund. The Open Society and Its Enemies, Vol.1: The Spell of Plato.
London: Routledge and Kegan Paul, 1945.
Popper, Karl Raimund. Open Society and Its Enemies, Vol. 2: The High Tide of Prophecy; Hegel,
Marx and the Aftermath. London: Routledge and Kegan Paul, 1947.
Popper, Karl Raimund. The Poverty of Historicism. London: Routledge and Kegan Paul, 1960.
Prados, John. The Soviet Estimate: U.S. Intelligence Analysis & Russian Military Strength.
New York: Dial Press, 1982.
Pritchard, Matthew C., and Michael S. Goodman. “Intelligence: The Loss of Innocence.”
International Journal of Intelligence and CounterIntelligence 22, no. 1 (2008): 147–164.
“Probing the Implications of Changing the Outputs of Intelligence: A Report of the 2011
Analyst-IC Associate Teams Program.” In Studies in Intelligence 56, no. 1, Extracts,
2012, 1–11.
Protest and Survive. Edited by Dan Smith and Edward Palmer Thompson. Harmondsworth:
Penguin, 1980.
Wilkes, Owen, and Nils Petter Gleditsch. “Research on Intelligence and Intelligence
as Research.” In Elements of World Instability: Armaments, Communication, Food,
International Division of Labor, Proceedings of the Eighth International Peace Research
Association Conference, edited by Egbert Jahn and Yoshikazu Sakamoto. Frankfurt/
New York: Campus, 1981.
Willis, Henry H., Genevieve Lester, and Gregory F. Treverton. “Information Sharing for
Infrastructure Risk Management: Barriers and Solutions.” Intelligence and National
Security 24, no. 3 (2009): 339–365.
Wohlstetter, Roberta. Pearl Harbor: Warning and Decision. Stanford, CA: Stanford University
Press, 1962.
Wolf, Charles, K. C. Yeh, Benjamin Zycher, Nicholas Eberstadt, and Sungho Lee. Fault Lines
in China’s Economic Terrain. MR-1686-NA/SRF ed. Santa Monica, CA: RAND, 2003.
Zehr, Stephen. “Comparative Boundary Work: US Acid Rain and Global Climate Change
Policy Deliberations.” Science and Public Policy 32, no. 6 (2005): 445–456.
Zolling, Hermann, and Heinz Höhne. Pullach Intern: General Gehlen Und Die Geschichte Des
Bundesnachrichtendienstes. Hamburg: Hoffmann und Campe, 1971.
Zuckerman, Solly. Beyond the Ivory Tower: The Frontiers of Public and Private Science.
New York: Taplinger, 1971.
INDEX
Figures and tables are indicated by f and t following the page number.
Abandonment: of intelligence disputes, 64; of scientific disputes, 60
ACH (Analysis of competing hypotheses), 37
Acid rain, 67–72, 73n53
ACLU (American Civil Liberties Union), 121
Activity-based intelligence (ABI), 187–192
Activity patterns, 104
Afghanistan: intelligence collection in, 42–43; Soviet invasion of, 111
After-action analysis, 172. See also Postmortem analyses
Agrell, Wilhelm, 41–42, 81, 169n23
Ahmadinejad, Mahmoud, 147
Allison, Graham T., 27, 28
Al-Midhar, Khalid, 137
Alternative analyses, 107
American Civil Liberties Union (ACLU), 121
Analysis. See Intelligence analysis; Policy analysis
Analysis of competing hypotheses (ACH), 37
Analytic self-deterrence, 165–169
Andrus, Calvin, 156
Antarctic ice sheet, 74n58
Anthropology, 10, 91–98
Arab-Israeli war (1973), 47, 63, 78
Arab Spring (2011), 6, 34, 190
Archaeology, 10, 93, 104–106
A-Space, 133, 150, 152
Association charts, 90
Astronomy, 87
Augmented Cognition, 108
Australia, NSA’s partnership with, 152
Autonomy, 159, 168, 173
Badawi, Zeinab, 1–2
Bateson, Gregory, 93
Bayesian logic, 141n7
BCI (Brain-computer interfaces), 108
Beck, Ulrich, 25, 195
Bernard, H. Russell, 92, 93, 94, 95, 97n74
Bernheim, Ernst, 41
Betts, Richard K., 63, 170, 171, 172, 173, 195
Beyond the Ivory Tower (Zuckerman), 175
"Big data," 189–190
Bimber, Bruce, 91
bin Laden, Osama, 111, 139–140, 188
Biographic analysis, 91
Biological weapons, 170. See also Iraqi WMD estimates
Blair, Tony, 178, 181
Blindness for "blue," 167
Blitzkrieg, 165
Blogs, 134, 151
"Blue" actions and analysis, 110, 167
Boundary work, 67, 68, 74n57, 78, 79
Brain-computer interfaces (BCI), 108
Brain fingerprinting, 108
Breivik, Anders Behring, 183–184, 185n4
Britain: CCTVs in, 132; on German military growth, 163–165; intelligence history in, 17; and Iraqi WMD estimates, 169–173, 176n48; Iraqi WMD estimates by, 166; NSA’s partnership with, 152
Brown, Harold, 48, 188
Bundesnachrichtendienst (Germany), 160n4
Bureaucratization, 15, 16n16
Bush, George W., 144, 147, 148
Bush, Vannevar, 26
Butler Report, 170n28, 178, 178n55
Campbell, Alastair, 173, 177, 181
Canada: NSA’s partnership with, 152; SARS in, 114
Carson, Rachel, 67
Carter, Jimmy, 46
Castro, Fidel, 137
Catchwords, 180, 181