USAID Disinformation Primer
ACKNOWLEDGEMENTS
There are many people to thank in terms of their contributions to this primer. It was conceived and
developed by Joshua Machleder and Shannon Maguire at USAID, together with the NORC at the
University of Chicago team including Susan Abbott, Renée Hendley, and Luis Camacho.
We thank the following individuals for sharing their time, opinions, and expertise: Deepanjalie
Abeywardana (Verite Research, Sri Lanka); Akintunde Akanni (Lagos State University), Daniel Arnaudo
(National Democratic Institute); Manisha Aryal (Chemonics); Rafiq Copeland (Internews); Marius
Dragomir (Central European University, Center for Media, Data and Society); Dejan Georgievski (Media
Development Centre Skopje); Dean Jackson (National Endowment for Democracy); Rasto Kuzel (Memo
98, Slovakia); Shanthi Kalathil (National Endowment for Democracy); Gillian McCormack (Internews);
Michael Mirny (IREX); Sarah Oates (University of Maryland); Igor Rozkladaj (Center for Democracy and
Rule of Law, Kyiv); Bruce Sherman (USIP Peace Tech Lab); Juni Soehardjo (media lawyer, Indonesia);
Tara Susman-Pena (IREX); Emeka Umejei (University of Ghana, Accra); Herman Wasserman (University
of Cape Town); Nancy Watzman (First Draft News); Bob Wekesa (University of the Witwatersrand); and
Tim Weninger (University of Notre Dame).
We also want to thank the United States Department of State Global Engagement Center for their
review and feedback. We appreciate the feedback and advice offered by Nicholas Glavin, Mary-Beth
Polley, and Phillip Tolentino.
In addition, numerous current and former staff at USAID contributed to the development of the primer.
We greatly appreciate the opinions and feedback from the following: Mariam Afrasiabi, Kora Andrieu,
Matthew Baker, Keti Bakradze, Jared Ford, Maher M. Frijat, Andrew Greer, Adam Kaplan, Lauren Kirby,
Nadereh Lee, Taly Lind, Amy Malessa, Laura McKechnie, Michael McNulty, Kyle Novak, Diana Parzik,
Marko Pjevic, Lisa Poggiali, Joseph Scheibel, Gloria Steele, Samantha Turner, Sara Werth, Thomas
White, and Johanna Wilkie.
These insights are supplemented with case studies, resources, and examples to illustrate different
dimensions of the problem and to enable readers to pursue deeper discussions and resources that can
help their programs and projects. The primer and its many annexes can be used as a guide or reference,
and its modular design can supplement training programs aimed at different aspects of the
disinformation conundrum.
Photo: ©2018 Unsplash/Abhijith S. Nair
Experts from academia, government, civil society, and media agree that disinformation is a problem with
social, political, and economic ramifications. A study done by Prevency, a German international
consulting company for reputational risk and crisis management, found that disinformation costs the
global economy $78 billion per year, including share price losses, brand reputation management costs, and
investment in political disinformation campaigns. 2
USAID staff and partners around the world need a working knowledge of the scope and form of
disinformation since it impacts many levels of programming and interventions across all development
sectors. While defining terms can be daunting, this primer provides key terminology and tools to better
identify ways to counter and prevent disinformation.
Disinformation comes from both homegrown and foreign sources. Foreign Policy noted in a recent article
that “as research has increasingly shown, homegrown disinformation is making democracy sicker than
any foreign efforts can.”3 The article goes on to point out:
“There are immense incentives for disinformation built into democratic institutions themselves.
Treating disinformation as an alien disease ignores the fact that it is perfectly compatible with
democratic norms and thrives inside democratic states. A recent report 4 by the Oxford Internet
Institute, for example, found that politicians inside 45 democracies have used social media for
‘amassing fake followers or spreading manipulated media to garner voter support.’ 5”
Disinformation is a core challenge for democracy, rights, and governance promotion, yet it is not the
only problem. Other key information challenges include censorship and freedom of expression; internet
freedom and digital rights (including throttling and internet shutdowns); political polarization; and the
demise of traditional journalism business models and related new challenges of the financial viability of
the news industry in the digital age. Each of these challenges creates fertile ground for, or amplifies,
disinformation by limiting free and open access to facts, data, and information in our societies.
As the spread of disinformation online has grown rapidly in recent years, global internet freedom has
been declining rapidly (for the ninth year in a row in 2019). Since 2016, Freedom House has reported on
new governments contributing to the spread of disinformation. Freedom House also has observed new
malicious actors taking advantage of the failure of democratic states to successfully regulate online
campaign finance and transparency rules that are essential for democratic elections. This trend is
worrisome. Many more leaders in democracies are employing this strategy domestically, resorting to
state-sponsored disinformation where they cannot use more draconian methods to exercise power.
Additionally, repressive governments have gained access to new tools to collect and track data on entire
population sets and are utilizing them to effectively increase popular support for themselves. They use
social media surveillance tools and artificial intelligence to “identify perceived threats and silence
undesirable expression.” This trend has created an environment where civil rights are being abused and
activists are being repressed and denied the possibilities of the digital sphere for religious, political, and other forms of expression.
Most worryingly, disinformation is a significant force that can undermine democracy and good
governance, free and fair elections, access to information, rule of law, protection of human rights,
independent media, and civil society action. Critical to every aspect of good governance, information
integrity enables political parties to debate and share ideas, concerns, and solutions. It opens
opportunities for citizens to influence public policy dialogue. It promotes economic innovation as
entrepreneurs refine and improve on the goods we produce. And it enables governments to respond
effectively to public health and other emergencies.
Democratic societies rely on journalists, media outlets, and bloggers to help shape local and national
dialogue, shine a light on corruption, and provide truthful, accurate information that can inform people
and help them make the decisions needed to live and thrive. Yet, the public sphere has taken on an
increasingly toxic and polarized quality. The nature of how people access information is changing along
with the information technology boom and the decline of traditional print media. Because traditional
information systems are failing, some opinion leaders are casting doubt on media, which, in turn, impacts
USAID programming and funding choices.
Our technology-enabled society, with all the vaunted benefits of increased connection, efficiency, and
transparency, has also led to erosion of individual privacy, trolling, cyberbullying, cyber or information
warfare, and dubious and deceitful abuse and misuse of tech platforms in deliberate efforts to erode
democracy and public trust and extort money and power. As numerous scholars have argued, tech
platforms have preferred profit over their users, failing to provide even basic controls to help support
civic engagement over extremist speech. Indeed, studies such as Siva Vaidhyanathan’s Antisocial Media:
How Facebook Disconnects Us and Undermines Democracy demonstrate that social media platforms find
extremism far more engaging—and hence more profitable—so their platform design encourages it.10
Today’s digital communications and media landscape is complex and has given rise to a new set of
challenges and considerations for democracy support. Across all USAID programming countries, this
requires a solid understanding of disinformation and robust approaches for countering and preventing it.
2. It leads to a loss of information integrity. Online news platforms have disrupted the traditional
media landscape. Government officials and journalists are not the sole information gatekeepers
anymore. As such, citizens require a new level of information or media literacy to evaluate the
veracity of claims made on the internet. False beliefs spread across the internet because almost
anything is being promoted by someone. Authoritarian leaders add to the loss of information
integrity by delegitimizing the media, claiming news sources are faulty or corrupt, i.e., the
weaponization of “fake news.” The loss of information integrity itself further undermines trust in the
media’s ability to provide fact-based information. It leads to a loss of press freedom. The
weaponization of “fake news” (calling a partisan or otherwise opinion-heavy article or outlet “fake
news” in order to discredit it) has also led some governments to propose or pass anti “fake news”
bills, which have had a chilling effect on freedom of speech and are used to target or silence
independent media.
3. It can distort political and civic engagement. Social media platforms offer democratic benefits
by connecting citizens with each other in ways made possible by digital spaces, encouraging
voter turnout, and giving voice to minority viewpoints. However, in conjunction with disinformation,
the same platforms can provide the means for suppression of civic and political engagement. The use
of trolls, doxing, flooding, and other tactics has resulted in a dramatic reduction in constructive
social and political engagement. Widespread mistrust about the accuracy and authenticity of
online information alone may be enough to demotivate political engagement.14
This term offers an alternative to the term “fake news,” which has been coined and promoted for
political purposes. As noted by The Conversation, “Not only do different people have opposing views
about the meaning of “fake news,” in practice the term undermines the intellectual values of democracy
and there is a real possibility that it means nothing. We would be better off if we stopped using it.” 18
Furthermore, as noted by Frank La Rue, UNESCO's Assistant Director-General for Communication and
Information:
“Fake news is a bad term primarily because it is a trap. It is not news. Just the term generates
mistrust of the press and of the work of journalists. Political leaders have started using the term
against the press, which is especially serious. This is a crucial moment when we have to defend
journalism. We have to promote a journalism of honesty, a journalism that is seen to build the
truth.” 19
Information disorder. A condition in which truth and facts coexist in a milieu of misinformation and
disinformation—conspiracy theories, lies, propaganda, and half-truths. In fact, Groundviews identified 10
types of misinformation and disinformation. (See Figure 2. First Draft News also prepared illustrative
examples of each type; see Annex 2: Types of Misinformation & Disinformation, to study them.)
The threats of information disorder have worsened as social media and internet use become more
ubiquitous and as digital technology writ large has taken on a bigger role in democracy and governance
programming. It is a central area of study to understand how and why there has been such an expansive
erosion of democracy over the past 10 years 20—since 2010, the number of nations rated “Free” in Freedom House's annual assessment has declined.
Both wealthy and developing countries have struggled to adapt to the large amounts and variety of
misinformation and disinformation circulating on the internet. However, in developing countries, the results
can be both life-threatening and detrimental to democratic governance. In extreme cases,
misinformation and disinformation have led to violence against ethnic minorities and impacted the
outcome of elections.
Annex 3: Emerging Solutions, provides links to some of the key research centers working on
information disorder that regularly put out guides, toolkits, newsletters, and webinars. This research
focuses on helping differentiate between the real and the fake online. These are useful for those working
to debunk false news and promote factual, truthful information.
Rumor: Societies have struggled with the misinformation-spreading effects of rumors for centuries, if
not millennia—what is perhaps less obvious is that even works of fiction can give rise to lasting
misconceptions of the facts.
Vested Interests: Corporate interests have a long and well-documented history of seeking to influence
public debate by promulgating incorrect information. At least on some recent occasions, such
systematic campaigns have also been directed against corporate interests, by nongovernmental
interest groups.
Media Fragmentation: Though the media are, by definition, seeking to inform the public, it is notable
that they are particularly prone to spreading misinformation for systemic reasons that are worthy of
analysis and exposure. The internet and the growing use of social networks have fostered the quick
and wide dissemination of misinformation. The fractionation of the information landscape by new
media is an important contributor to misinformation’s particular resilience to correction. 25
Additional reasons information disorder is on the rise include:
In the political sphere, the ability to win elections is now correlated with a political actor's capacity to
manage social media platform messaging. For example, Ukraine's staggering landslide election of both
President Volodymyr Zelenskyy and his Servant of the People Party in 2019—sweeping away 80 percent of
all Members of Parliament (MPs)—was based on almost no concrete policy formulations. Zelenskyy built a
formidable campaign machine based on social media and a fictional characterization he embodied (as
Ukraine's president in a comedic television series) that his opponents could not match.

Figure 3: Russian-Produced Meme to Persuade Ukrainians against Euro-Integration
– In 2019, the average time spent on the internet per day was 10 hours in the Philippines, 9.3 hours in
Brazil, 8.25 hours in South Africa, 7.5 hours in the U.A.E., 6.3 hours in the United States, and 5.5
hours in China. 33,34
The growing dominance of social media as a source of information is happening all over the world:
– On Twitter, a total average of 500 million tweets are posted daily; on Facebook there were 1.63
billion daily active users in September 2019.35
– India alone is home to 290 million Facebook users. To put this into perspective, if India’s
Facebook users were a country, its population would rank fourth in the world. 36
– More than 70 percent of internet users in Kenya, South Africa, Bulgaria, Chile, Greece, and
Argentina get their news from social media.37
The pervasiveness of Facebook's Free Basics Internet.org—which provides a pared-down mobile internet
experience with access mainly to social media—has affected internet usage. Social media is becoming
synonymous with the internet:
– In many countries, such as Sri Lanka and the Philippines, “opening the internet” on a digital
device means opening Facebook. In 2014, Quartz found that 65 percent of Nigerians and 61
percent of Indonesians surveyed agreed with the statement: “Facebook is the internet.”38
– The Philippines, for example, is a good illustration of how Facebook has penetrated into the
social structure: “Free Basics was launched in the Philippines in 2013. By 2018, almost two-
thirds of the country’s 110 million people were using Facebook, according to Buzzfeed. In the
Philippines, the word ‘Facebook’ is interchangeable with ‘internet,’” writes Maria Farrell. 39
Though the platforms may change, the problems that social media brings with it remain the same. WhatsApp,
now owned by Facebook, is also making significant headway globally. WhatsApp remains a widely used platform
outside of the United States for information sharing. However, WhatsApp's encrypted and non-public nature
makes it difficult to research and analyze.
– According to a survey conducted by Reuters in 2017, WhatsApp has become one of the leading
news sources in Brazil, Malaysia, and Spain, with nearly 50 percent of respondents saying they
regularly use it as a main news source. 40
– According to Digital Information World, 41 WhatsApp has 1.5 billion users from 180 countries,
which makes it the most popular instant messaging app worldwide. (Facebook Messenger is in
second place with 1.3 billion users.)
– WhatsApp has one billion daily active users. The biggest market for WhatsApp is India, with
over 200 million users; Brazil has 120 million users.
WHATSAPP (133)
FACEBOOK MESSENGER (75)
VIBER (10)
IMO (3)
LINE (3)
TELEGRAM (3)
WECHAT (3)
GOOGLE MESSAGES (1)
HANGOUTS (1)
KAKAOTALK (1)
ZALO (1)
UNKNOWN (11)
Source: Based on SimilarWeb's algorithm integrating current installs from the Google Play Store with active app users (December 2018).
Note: Figures in parentheses in the legend represent the number of countries/territories in which each platform is the top-ranked messenger app.
Source: https://datareportal.com/reports/digital-2019-global-digital-overview
Sources: Population: United Nations; local government bodies. Mobile: GSMA Intelligence. Internet: ITU; GlobalWebIndex; GSMA Intelligence; local telecoms regulatory authorities and government bodies; APJII; Kepios analysis. Social media: platforms' self-service advertising tools; company announcements and earnings reports; CafeBazaar; Kepios analysis. All latest available data in January 2020. Comparability advisory: source and base changes.
Source: https://wearesocial.com/blog/2020/01/digital-2020-3-8-billion-people-use-social-media
AGE
One of the key questions that needs to be
considered in terms of countering and
preventing disinformation is whether there are
generational differences in terms of news
consumption, susceptibility to false news and
information, and access to and ability to use
technology. For instance, around the world,
younger users with a higher education level and
higher income are more likely to have
smartphone access than their elders. 42
However, statistics show that older people have
rapidly caught up and, when they cannot, they
often rely on younger users to access the
information they need. Moreover, some studies
have shown that people aged 65 and older are
almost four times more likely to share false
news on social media than younger people and
that in some instances they are more
responsible for the spread of
disinformation. 43

Social science research is increasingly interested in the question of whether the
consumption of false news is a matter of generational differences. One study found that age plays a key
role and has a strong effect on the dissemination of false news. According to the study (Guess, et al.),
“on average, users over 65 shared nearly seven times as many articles from fake news domains as the
youngest age group.” 44
In a U.S. study conducted by College Reaction, 69 percent of Gen Z (born mid-1997 to early 2012)
students claimed it is somewhat or very easy to discern between true and false information online. 45
However, a majority of middle schoolers in the same generation could not determine the difference
between an advertisement and a news story, while 30 percent of the surveyed students found a fake
news story to be more credible than a real story. 46 The U.S. experience, however, differs drastically
from that of youth in Finland, where children in primary and secondary school are taught media literacy as
a core part of their education and benefit from Finland's whole-of-society approach to the
disinformation problem. Finland's government launched an “anti-fake news” initiative in 2014 “aimed at
teaching residents, students, journalists and politicians how to counter false information designed to sow
division.” 47
Some studies have shown elders, who may have less facility with digital technology, to be more immune
to disinformation because they rely on other forms (books, education, experience) to assess the validity of information.
GENDER
In the Global South, women and men often experience digital technology very differently; however, they
use it almost equally in both advanced and emerging economies.50 Where resources are thin, families
often do not have the time or money for women to have access to the internet. Moreover, women often
have less access to the internet because of local gender norms.
Figure 8: Gender Dynamics and Online Violence
#JournalistsToo
© International Center for Journalists (ICFJ)
Source: https://www.icfj.org/news/online-attacks-women-journalists-leading-real-world-violence-new-research-shows
Furthermore, in the spread of disinformation, gender has often been exploited by autocrats and those in
power to discredit journalists and eliminate government critics. 51 Female journalists across the Middle
East have been repeatedly targeted with doctored photos of them in sexually compromising positions,
claims that they achieved their jobs by being sexually promiscuous, and online defamation campaigns that
utilize misogynistic language to discredit them and their news coverage. 52 For more information on this,
especially on how women politicians are disproportionately affected by false news, see the Council on
Foreign Relations report on Gendered Disinformation, Fake News, and Women in Politics. 53
Stories like the one below are examples of an overall climate that contributes to the discrediting of
female journalists, violence against them, and the danger of covering women’s issues in general. A
Reporters Without Borders report on violence toward journalists covering women's issues found that, of all
forms of violence and retaliation against these journalists, about 40 percent is cyber harassment
specifically. 54 It is also worth noting that online violence often accompanies physical violence. 55
The definition of hate speech is often contested, particularly because it is such a charged topic, and
legal and social organizations offer alternative definitions for the same act. This topic is particularly
controversial because fine lines tend to be drawn in democratic societies regarding what is
considered acceptable free speech and what is not. Some definitions consider hate speech a verbal
attack made on a group based on a shared identity, while others hold that an attack on an individual can
be considered hate speech. 58 Likewise, some definitions insist that specific identity markers must be
included in hate speech (such as membership in an ethnic or social grouping) while others address any
attack on identity. 59
PeaceTech Lab defines hate speech as a deliberate attack or expression that vilifies, humiliates, and
discriminates against others based on their ethnic, national origin, religious, racial, gender, sexual
orientation, age, disability, or other shared
identity. This can lead to a larger societal impact, influencing acts of violence. 60 Hate speech is
rooted in larger social grievances that are potentially incendiary and often lead to serious
violence and injustice, and new research indicates hate incidents online and offline peaking in
tandem. 61

Figure 9: The Dangerous Speech Five-Part Framework
Understanding how to recognize hate speech and dangerous speech is particularly important to
combatting their spread online through platforms like Facebook and Twitter, where hate speech and
disinformation are often tightly linked.
It is worth noting the concerns about hate speech in the context of conflict (pre- or post-conflict). A noted
example is Radio Télévision Libre des Mille Collines in Rwanda, which called for direct attacks against the Tutsi
minority and whose journalists were brought before the International Criminal Tribunal for Rwanda
and charged with incitement to genocide and crimes against humanity. Hate speech, particularly in fragile states
marked by conflict, can lead to violence and other harm, so it is essential to understand the challenges
of labeling hate speech as such.
While the Kremlin’s disinformation campaigns may appear focused on creating chaos for the United
States and other Western countries, the reality is that it hopes to utilize disinformation to weaken
perceived adversaries in order to achieve strategic goals, including restoring Russia to great power
status, preserving its sphere of influence, protecting the Putin regime, and enhancing its military
effectiveness. 64
The Kremlin’s Disinformation Methods
Kremlin-supported propaganda, disinformation, and information manipulation rely primarily on an
advanced network to spread easy-to-understand messages that exemplify a clear narrative of the United
States as the aggressor and Russia as the only country brave enough to stand up to U.S. hegemony. 65
Russia has been utilizing strategic disinformation tactics since the 1950s to try to influence the
perceptions of people worldwide. The Russian government has for years included disinformation and
misinformation as part of “active measures,” or covert influence operations. 66 In fact, the word
“disinformation” is derived from the Russian term dezinformatsiya (дезинформация). 67 Highly
coordinated disinformation campaigns are meant to influence the perceptions and actions of others and
make it highly difficult to discern between the real and the phony. These campaigns typically follow a recognizable sequence of steps:
1. Look for cracks and social divisions within the target society.
2. Create a big lie.
3. Wrap the lie around a piece of truth.
4. Conceal your hand (make the story seem like it came from
somewhere else).
6. Find a useful idiot (who will take the message and push it to a foreign audience).
6. Deny everything.
7. Play the long game, resulting in a major political impact years
from now. 69
Since 2015, the Kremlin has been expanding its media influence by creating media cooperation
agreements with over 50 local media organizations around the world. 70 The messages shared through
these networks play on strong emotions and tend to do very well on social media, where users have
been shown to interact with content that is emotional in nature. 71 Thus, in countries where both the
United States and Russia have been working to develop influence, the Kremlin tends to put forth
narratives that are easy to understand, play to the emotions, and disingenuously offer a clear good guy–
bad guy paradigm.72 Comparatively, the United States has often struggled to offer effective fact-based
alternatives to such narratives. This is particularly relevant in countries where USAID works to promote
democratic governance.
A key example that merits consideration is the massive disinformation campaign lodged against the
White Helmets, known as the Syria Civil Defense, during the Syrian civil war. 73 Russia in particular used
propaganda and other disinformation tactics to sow seeds of doubt about the work of the White
Helmets and to undermine their humanitarian mission. As The Guardian reported, “The aim was to flood
the media ecosystem with falsehoods that would
erode trust among supporters, especially donor governments.” 74 Notably, the challenge of countering
Kremlin influence is also felt in Ukraine, Moldova, and Georgia (and other post-Soviet states)—countries
that are looking to orient their cultures, politics, and economies away from Russia, but face the challenges
of ongoing Russian state efforts to exacerbate polarization and conflict.

Factory of Lies: The Russian Playbook

Factory of Lies: The Russian Playbook is an NBC explainer on how Cold War tactics have continued to be used by Russia as a way of subverting the media.

Available at: https://youtu.be/hZrfZU-uZqU
The internet and social media have given the Russian government an immediacy and reach that it never
had previously to continue spreading disinformation and lies while simultaneously slowly tearing away at
the fabric of democracy.
In August 2020, the Department of State’s Global Engagement Center published a report discussing how
Russia utilizes a variety of tactics and channels to create and amplify disinformation and propaganda.75
Russia’s disinformation and propaganda ecosystem is a collection of official, proxy, and unattributed
communication channels and platforms consisting of five main pillars: official government
communications, state-funded global messaging, cultivation of proxy sources, weaponization of social
media, and cyber-enabled disinformation. The Kremlin’s willingness to employ this approach provides it
with perceived advantages:
It allows for the introduction of numerous variations of the same false narratives. This allows for the
different pillars of the ecosystem to fine-tune their disinformation narratives to suit different target
audiences because there is no need for consistency, as there would be with attributed government
communications.
It provides plausible deniability for Kremlin officials when proxy sites peddle blatant and dangerous
disinformation, allowing them to deflect criticism while still introducing pernicious information.
It creates a media multiplier effect among the different pillars of the ecosystem that boost their
reach and resonance.
The Four Ds Approach
The Government of Russia's strategy for dealing with negative reporting on its actions revolves around
four tactics: dismiss, distort, distract, and dismay.
This strategy allows the Kremlin to maintain control over the information being spread by
discrediting the individual or organization sharing the information, distorting information to fit its
purpose and support state interests, distracting from situations where it may be at fault by
launching accusations elsewhere, and dismaying the audience by warning that moves that negate state
interests will have disastrous consequences for those planning them.
CHANGING TACTICS
There are increasing indications that Beijing is taking a more aggressive approach to information
manipulation, similar to Moscow's. The COVID-19 pandemic has demonstrated that Beijing is increasingly
promoting disinformation, pushed out by state media, its officials, and CCP-affiliated social media
accounts, bots, and trolls. Beijing also undertook concentrated efforts to push conflicting theories about
the pandemic which were intended to sow doubt, deflect blame, and create the idea that the PRC is
superior to the United States in responding to international health crises like COVID-19. Observers saw
an increasing confluence or convergence of Kremlin, CCP, and Iranian regime false narratives regarding
the pandemic. 80 These three adversaries’ state information ecosystems have often converged to spread
anti-U.S. disinformation, especially to include spurious claims that the United States caused or
exacerbated the COVID-19 pandemic. This convergence appears to be a result of “opportunity,” not
intentional coordination, but all three actors are more routinely leveraging the information tools of the
others in their campaigns. Also, the Kremlin and the CCP share a common agenda in discrediting
democracy and advancing non-democratic governance systems (especially as being more effective in
responding to the pandemic).
Beijing also has the advantage of platforms like TikTok and WeChat (Chinese-originated applications)
that are increasingly popular all over the world and have been used as tools to control and suppress
information, especially as it relates to CCP priorities. In particular, the Chinese government blocks
information on WeChat and even removes information from private chats. 83 In this way, Beijing controls
the narrative about the PRC and maintains a positive image as an alternative to a democratic society.
While Beijing has seemingly chosen to focus on the Chinese diaspora, Chinese mainland, Taiwan, and
Hong Kong as places to utilize their influencing strategy, the CCP is working to perfect this strategy in
order to utilize it in other contexts abroad to influence public opinion.84
Across sub-Saharan Africa, the Beijing-based StarTimes television operator provides a popular digital
television service, including CCP state propaganda, in the cheapest package available but does not offer
alternative international media outlets. 85 StarTimes Vice Chairman Guo Ziqi has stated that their aim is
“to enable every African household to afford digital TV, watch good digital TV, and enjoy the digital life.”
Ultimately, however, this highlights China’s strategy of showcasing a positive image toward the world
and providing services to developing countries at scale, albeit through the lens of the CCP. Beijing
presents itself as a developing country among equals in the developing world and encourages those
countries to replicate the CCP’s authoritarian governance if they want to pursue economic development
without democratization. 86 This view is explicitly intended to offer an alternative to U.S. international
leadership and democratic governance. It is also a central tenet of Beijing’s media strategy.
The CCP’s investment in media bureaus overseas, content-sharing agreements, and the distribution of
content in multiple languages clearly exemplifies Beijing’s strategy to influence positive attitudes toward
China globally. A 2015 Reuters investigation found that CCP-funded programming was available in 14
different countries. 87 By 2018, a Guardian report revealed that the number had grown considerably to 58
stations in 35 countries. 88
Disinformation thrives best in digital spaces when dissemination agents can construct an effective illusion
that changes the behaviors of many authentic users in ways that verify, elevate, and amplify false
narratives. Behavior change is sought around things like voting behavior, physical confrontations, conflict,
geopolitical orientation, and disruption of democratic deliberation. To better understand the way new
technology is used to manipulate social media users and disinform the public, it is imperative to
understand some key terms in this field and to recognize that this is an evolving space in which new types of manipulation
are likely to appear.
While there are a multitude of actors working to spread disinformation around the world, research and
evidence support the finding that state actors, and specifically political candidates and national leaders, are
increasingly utilizing social media platforms to spread disinformation about their opponents, manipulate
voters, and shape elections. Although negative campaigning has long been used in close races
and against opponents, the difference now can be seen in the use of artificial intelligence, sophisticated
data analytics, and political trolls and bots. Information pollution is increasingly used as a tool to
encourage skepticism and distrust, polarize voting constituencies, and undermine the democratic
process. 89
The spread of disinformation can occur through manual channels that require manpower, through automation,
or through a combination of both. The Oxford Internet Institute (OII) published a 2019 report that finds
“growing evidence of computational propaganda around the world.” 90 OII's Computational Propaganda Research
Project (COMPROP) investigates the interaction of algorithms, automation, and politics. Its work
includes analysis of how tools like social media bots are used to manipulate public opinion by amplifying particular messages and narratives.
Bots: Social media accounts that are operated entirely by computer programs and are designed to
generate posts and/or engage with content on a particular platform.
Clickbait: Something (such as a headline) designed to make readers want to click on a hyperlink
especially when the link leads to content of dubious value or interest. 91 This tactic involves creating a
misleading or inaccurate post using a provocative headline or image that lures the victim to click and
read the content, which is often unrelated to or less sensational than the headline itself.
Content Farm: A website or company that creates low-quality content aimed at improving its search
engine rankings. Also known as a content mill or factory, its main purpose is to maximize pageviews
and revenue generated by advertising on those pages while minimizing the costs and time needed to
create the content. 92
Cyber Troops: Government or political party actors tasked with the use of social media to
manipulate public opinion online. 93
Manufactured Amplification: Occurs when the reach or spread of information is boosted through
artificial means. 95
Sock Puppets: A sock puppet is an online account that uses a false identity designed specifically to
deceive. Sock puppets are used on social platforms to spread or amplify false information to a mass
audience. 97
Trolling: The act of deliberately posting offensive or inflammatory content to an online community
with the intent of provoking readers or disrupting conversation. The term “troll” is most often used to
refer to any person harassing or insulting others online. 98
Gaining a better understanding of the “who” is imperative to all facets of re-establishing information
integrity within an information ecosystem. Knowledge of these profiles, their motives, and favored
modes of digital intervention should inform both the information diagnostic process and tactical
solutions to suppress the circulation of false news.
An algorithm, as a sequence of instructions telling a computer what to do, is usually built to collect
information, recognize patterns, or reach people with particular profiles. When they can figure out how
an algorithm works, disinformation agents can craft dissemination strategies to piggyback on a
platform’s algorithm. In other words, players can game the algorithm to gain access to more digital real
estate than would otherwise be possible through simple manpower.
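To make this concrete, the toy sketch below shows how an engagement-driven ranking score could be gamed by manufactured amplification. The scoring weights, field names, and figures are illustrative assumptions, not any platform's actual formula.

```python
# Illustrative sketch only: a toy engagement-ranking formula showing how
# coordinated fake interactions can "game" an engagement-driven feed.
# Weights and fields are assumptions, not any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    hours_old: float

def engagement_score(post: Post) -> float:
    """Toy ranking: weighted engagement, decayed by age."""
    raw = post.likes + 3 * post.comments + 5 * post.shares
    return raw / (1 + post.hours_old)

organic = Post(likes=120, shares=4, comments=10, hours_old=6)
# A disinformation post boosted by a small network of sock puppets that
# immediately share and comment on it.
boosted = Post(likes=40, shares=60, comments=80, hours_old=1)

print(engagement_score(organic))   # ranks lower despite more genuine likes
print(engagement_score(boosted))   # ranks higher thanks to manufactured amplification
```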
While platform algorithms present users with a litany of navigational and engagement options, they are
generally built to elevate and amplify engagement to generate profits. Social media platforms amass
Coordinated inauthentic behavior 103 (CIB) is a term coined by Facebook in 2018 to describe the
operation of running fake accounts and pages on an online social platform to influence discourse among
users. Facebook releases monthly reports that track CIB on its platform and the actions it has taken to
address it. As a violation of its community standards, Facebook will remove posts deemed to be taking
part in CIB. 104 For the purpose of this primer, we will be discussing coordinated inauthentic behavior in
the context of political actors.
Advanced technologies, such as artificial intelligence, search engine optimization, and bots, have allowed
disinformation agents to manipulate information and influence the public sphere using coordinated
inauthentic activity. They contribute to a significantly less free internet, where certain voices are
amplified above others because of the resources at their disposal and the malicious practices they utilize.
Some have called this tactic “censorship by noise,” in which artificially amplified narratives and campaigns
drown out legitimate dissent.
Disinformation agents have a variety of advantages when utilizing the internet and social media platforms
to share content. Strategies used to mask the originator include the placement, layering, and integration
of media messages; this makes it difficult for fact-checking organizations to trace the source of the
disinformation. Disinformation agents often obscure their efforts in order to wipe away their fingerprints,
frequently working through proxies. While finding the origin of the information is possible, tracking it down requires
a level of investigative effort beyond the capacity of most users and journalists.
Additionally, tech companies’ advertising models, which are primarily focused on maximizing profit,
contribute to the spread. Finally, the organic information ecosystem also makes it easier for information
to spread, and the message to be amplified. In sum, the cumulative effect of disinformation has varying
degrees of impact, ranging from little to no effect to causing severe harm.
The “breakout scale” (Figure 10) provides useful indicators of the growth of disinformation across
online platforms. Each of its six categories demonstrates a growing impact as content travels across
multiple platforms (even including offline media and policy debates) and whether it will remain isolated
to a few communities or spread through many communities and become a larger or even global
phenomenon. 106 The work by Nimmo and others to map, measure, and understand disinformation's effect
on society underscores the urgent need to put preventative measures in place, including undertaking
research studies and engaging in ongoing media monitoring, in order to be prepared for the amplification
of disinformation that can lead to chaos or worse.
Figure 10: Ben Nimmo's Breakout Scale: Measuring the Impact of Influence Operations
Source: https://www.brookings.edu/research/the-breakout-scale-measuring-the-impact-of-influence-operations/
In a few decades, the online media industry has grown from a new frontier with privileged access for
tech-savvy groups and individuals to one of the world's most profitable and fastest-growing industries. The
internet provides a unique advantage for malign actors to spread false content and let it organically
amplify as it spreads across platforms. Some conspiracy theories and false claims originate on niche
platforms such as conspiracy forums on 4chan or Discord or gaming platforms like Twitch. Platforms
such as these enable users to coordinate to grow followers and spread content to large social media
sites such as Facebook and Twitter. 108
In this way, platforms that cater to very niche and small audiences have significant influence. A post
shared from one of these original sites can cross to intermediate sites such as Twitter, direct-messaging
groups, and Reddit, where it gains traction and is then amplified further on massive social media
platforms such as YouTube or Facebook. Journalists, politicians, and influencers find the content and
push it on to even larger audiences. This content now becomes part of the public record and credible
news outlets often feel obliged to cover or debunk it, providing it with even more traction. In this way,
disinformation goes viral, propagating itself from fringe groups to major news outlets. As
disinformation takes over, it is an enormous challenge to halt its momentum or inoculate people against
it.
The rise of big data analytics, “black box” algorithms (an invisible process through which machines learn
about social patterns), and computational propaganda (use of algorithms, automation, and human
curation to purposefully distribute misleading information over social media networks) have raised
concerns from policymakers. The social media business model that includes advertising and data sales
promotes controversial posts and information (even though Facebook and YouTube have worked on
addressing this lately). These promotion tools have been used in many countries to expand the reach of
divisive social media campaigns, to intensify political conflict, and to weaken public trust in the media,
democratic institutions, and electoral outcomes. The threats to democracy are further intensified by
micro-targeting of messages to specific users through sophisticated and proprietary algorithms.
“In the new age of lies, law, not tech, is the answer. From India to Indonesia to Brazil,
democracy is being compromised by online domestic disinformation campaigns from political
parties seeking to gain an advantage. Democratic institutions have not been able to keep up and
have instead deferred to tech firms, trusting them to referee online behavior. But this is a task
far beyond the limited capabilities and narrow motivations of companies such as Facebook and
Twitter. If the democratic recession is to end, democratic institutions need to create new rules
and hold the responsible parties to account.” 109
It is also important to recognize the role that chat applications such as Facebook Messenger, WhatsApp,
Signal, and Telegram play in the spread of online disinformation. Particularly in countries in Africa and
the Middle East and the Global South as a whole, chat applications are an important medium used
to disseminate key political information, coordinate activity, and share news and
current events. Additionally, many of these chat applications in the Global South host large groups in which information, and disinformation, can spread rapidly.
Chat applications are closed: metrics and data about how they work are not accessible. They are
therefore hard for researchers to study and difficult for companies to moderate. The applications use
end-to-end encryption and offer users security and privacy. Growing in usage globally, these applications
have the potential to spread messages to rather large groups. Because of this, some platforms have limited
the size of groups with which users can share content in order to address large-scale dissemination of mis/disinformation.
Another response to slow the viral spread of mis/disinformation on closed messaging platforms has been
to limit the number of times a user can share messages. More on WhatsApp and closed messaging
systems is found above in the section on Context and Social Factors of Disinformation (page 13).
D. TROLL FARMS AND SOCK PUPPETS: CASE STUDY FROM THE PHILIPPINES
A case study that illustrates many of these tactics is the current state of information disorder in the
Philippines, widely considered to be patient zero of the disinformation tactics that are now used
globally. 111 Because Facebook is the internet in the Philippines, content farms, troll farms, and sock
puppets have been effective in both the spread of disinformation and the suppression of opposition
voices.
Troll Farm: A clear example of a troll farm is Filipino President Rodrigo Duterte's use of
trolls to target and defame political opponent and vocal critic Leila de Lima. After a pornographic video
went viral in 2016, in which de Lima was falsely identified as the woman pictured, troll farms
coordinated by the Duterte election campaign pushed the false narrative within Facebook
communities. 112 Trolls used the false content to shame her, link her to other scandals, and attack
de Lima's character in efforts to delegitimize her as a viable political candidate in the then-upcoming
election. 113 Though the video was ultimately exposed as false, de Lima's reputation was stained, and she was
arrested six months later on drug charges, which she denies. 114 Since her detainment, critics have pointed to
trolls' spreading of conspiracy theories and misinformation on Facebook as helping lead to her
arrest, as well as distorting the public's understanding of the national issue of drugs and further damaging the
country's democratic processes. 115
A useful—although imprecise—distinction about the drivers of disinformation is that they can be passive
or active. Passive drivers are largely subconscious; they require no conscious motivation for individuals
to seek out and consume false claims. For example, a person may share information without even
reading if it comes from a trusted family member or friend (this is called “familiarity effect;” see Annex
4: Passive & Active Drivers of Disinformation). On the other hand, active drivers are informed by an
individual’s thought processes and efforts to understand ideas and reach conclusions through cognitive
processes. 118 In this way, a person may believe information that confirms or conforms to a deeply held
conviction (confirmation bias). 119
Some passive and active drivers of disinformation and the reasons disinformation can be psychologically
difficult to discern are described below.
Passive Drivers of Disinformation
In general, people are great passive consumers of information that is passed on to them. This tendency
is amplified online and can result in many individuals reading and reacting to often emotionally
provocative content. Coordinated inauthentic actors rely on this emotional content to reinforce beliefs or
encourage people to act. In evaluating what factors lead to accepting information without taking the time
to critically engage with it, Woolley and Joseff provide a useful list of cognitive drivers (see Annex 4,
Passive and Active Drivers of Disinformation, for more detailed definitions of passive drivers of
disinformation) that make it difficult for an individual to discern between truth and falsity and, in turn,
make it easier to manipulate them into believing false content.
People tend to overestimate the depth of their knowledge regarding topics they care about. This
provides them with an illusion of truth or explanatory depth in the information to which they
are exposed 120 and may reinforce their beliefs. This “belief perseverance” may help explain why
individuals often remain staunch in their beliefs even after reading contradictory claims. When individuals
are asked to think critically about their beliefs and shown information that contradicts them, they often
maintain that their beliefs are still correct. Such perseverance of beliefs even when they have been
debunked by fact-checking has been thought to create a “backfire effect” in which beliefs are reinforced
by the attempt to debunk them. However, some current research suggests that debunking efforts may
work well if the facts are relatively unambiguous. 121
Active Drivers of Disinformation
Active drivers are distinguished by the conscious pursuit of fact claims that serve the purpose of the
information consumer. Woolley and Joseff’s list illustrates some of the reasons that drive people to seek
out false information. (See Annex 4, Passive and Active Drivers of Disinformation, for more detailed
definitions of active drivers of disinformation.)
Even if an individual knows that information is false, she or he may be driven to believe it anyway. This
often relates to societal pressures that may influence which beliefs individuals publicly adhere to.
An individual might think it is important to believe even dangerous ideas if they are important to a group
in which she or he belongs (“bandwagon effect”). Similar to this phenomenon is “consensus bias,” in
which an individual may believe false claims because of the perception that everybody else believes them,
and “in-group favoritism,” in which specific group identities are at stake.
Often, active consumption of disinformation may require a person to ignore contradictory beliefs or
knowledge of facts. This effect, “preference falsification,” occurs when individuals suppress their true
opinion in favor of societal pressure to support another preference. 123
The rapid spread of and demand for disinformation may be attributed to laziness or lack of capacity to
exercise critical thinking when consuming information. Gordon Pennycook and David Rand,
psychologists who have studied and researched the psychological demand for disinformation, examined the
extent to which participants were able to think critically about disinformation in a series of cognitive
tests. They showed participants true and false headlines taken from social media from diverse sides of
the political spectrum. The results of the study showed that participants who on the first test had shown
Anyone with a keyboard can produce disinformation (the supply side) in the form of posts on social
media or on the profusion of “news” sites catering to every taste. Disinformation is produced by
governments, companies, and individuals to purposefully manipulate public perception and political
events.
On social media, fact-checking services, still-developing rules, and algorithms are starting to scrutinize
this information supply. But even while it is possible to take down or remove the worst posts, for every
post taken down, multiple mutations of the original disinformation will take their place. From July to
December 2019, Twitter received 27,500 legal requests to remove tweets from 98,000 accounts due to
suspicious activity. 125 However, this cut-off of supply can have a strangling effect on free expression: as
Twitter notes, 193 accounts subject to legal action by governments belonged to verified journalists and news
outlets, and much of the removed information continues to circulate in some form of retweets.
Outrage over “fake news” is overwhelmingly targeted at the creators and distributors of misinformation.
That is understandable since they are responsible for releasing half-truths and outright falsehoods into
the wild. However, there are indubitably as many drivers or reasons for producing disinformation as
there are human interests, although political destabilization, ideology, hate, illegal activity, theft, and other
criminal activity are age-old motivators. Many observers stress that while the supply of disinformation will
always be an issue, we must focus on the demand side as much as possible. 126
When we have evidence of disinformation messages and hate speech and believe they are spreading
rapidly, how do we monitor, through appropriate research, how and how broadly they are
circulating? Digital forensics and network analyses as well as traditional media monitoring have emerged
as some of the best approaches to track the flow of disinformation. Exposing disinformation is often the
first step in countering it.
Social Network Analysis
Social network analysis is a useful research method both as a diagnostic tool and as a means to develop
a strategy for countering and preventing disinformation. Exposing disinformation is the first intervention
that must happen, and this research can provide an evidence base to inform program design. The analysis
of social networks and their role in diffusing new ideas or deepening belief in existing ideas has been
advancing rapidly in recent decades. Detailing types of relationships, gaps, and strongly interconnected
communities helps explain both the capacity of false information to propagate and the difficulty of
correcting it in isolated communities.
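As an illustration of what such an analysis might look like in practice, the minimal sketch below, which assumes a researcher has already collected a "who shared whose post" edge list, uses the networkx library to surface central accounts and tightly interconnected communities. The account names and data are invented.

```python
# Minimal SNA sketch (illustrative data): given "who shared whose post" edges,
# find the most central accounts and tightly interconnected communities.
# Requires the networkx package; account names are invented for illustration.
import networkx as nx

edges = [
    ("acct_a", "acct_b"), ("acct_a", "acct_c"), ("acct_b", "acct_c"),
    ("acct_c", "acct_d"), ("acct_e", "acct_f"), ("acct_f", "acct_g"),
    ("acct_e", "acct_g"), ("acct_d", "acct_e"),
]
g = nx.Graph(edges)

# Accounts that bridge otherwise separate communities often act as key spreaders.
centrality = nx.betweenness_centrality(g)
top = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("Most central accounts:", top)

# Densely interconnected clusters can indicate isolated communities where
# false claims circulate and are hard to correct.
communities = nx.algorithms.community.greedy_modularity_communities(g)
for i, community in enumerate(communities, start=1):
    print(f"Community {i}:", sorted(community))
```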
Social network “diffusion models” developed originally for epidemiological purposes to track the spread
of disease are used regularly to provide graphic maps and algorithms tracking the spread of
disinformation. The diffusion model in Figure 11 (below) shows how a network of social media,
individuals, and domains enabled the spread of a Kremlin-orchestrated disinformation campaign in
Ukrainian elections. 131 Diffusion models have been particularly helpful in understanding the reach and
impact of social media platforms such as Facebook and Twitter by tracing the sources of information
through social media posts and reposts.
Figure 11: Valerij Zaborovskij’s Diffusion Model
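The sketch below is a simplified, illustrative version of such a diffusion model: an independent-cascade simulation on a synthetic network. The network, seed accounts, and share probability are assumptions chosen for demonstration only.

```python
# Simplified diffusion ("independent cascade") sketch: each account that adopts
# a false narrative gets one chance to pass it to each neighbor with a fixed
# probability. The network and probability are invented for illustration.
import random
import networkx as nx

random.seed(42)
g = nx.barabasi_albert_graph(n=200, m=2)   # scale-free network, like many social graphs
share_probability = 0.15
seeds = {0, 1}                             # accounts that first post the false claim

infected = set(seeds)
frontier = set(seeds)
while frontier:
    next_frontier = set()
    for node in frontier:
        for neighbor in g.neighbors(node):
            if neighbor not in infected and random.random() < share_probability:
                infected.add(neighbor)
                next_frontier.add(neighbor)
    frontier = next_frontier

print(f"{len(infected)} of {g.number_of_nodes()} accounts exposed to the narrative")
```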
Figure 12: Full SNA on Disinformation in West Papua
Figure 13: Zoomed-in SNA for Papua, Showing Central Nodes
Another social network analysis, conducted by Graphika, linked coordinated inauthentic behavior intended to
influence the 2020 elections in Myanmar to the military, which displaced the civilian leadership in a coup in
February 2021. The analysis tracked numbers of posts, numbers of shared posts, and even the time of day
in which posts were shared to reveal the coordination.134 Facebook removed over 70 accounts.
According to Graphika’s report that details the social network analysis research:
As it announced the takedown, Facebook said, “We identified clusters of connected activity that
relied on a combination of fake, duplicate and authentic accounts to post content, evade
enforcement and removal, manage Pages and drive people to off-platform websites including
military-controlled media domains. These accounts often used stock female profile photos and
photos of celebrities and social-media influencers. … We began our investigation after reviewing
local public reporting about some elements of this activity. Although the people behind this
activity attempted to conceal their identities and coordination, our investigation found links to
members of the Myanmar military.” 135
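The sketch below illustrates, in highly simplified form, one signal that this kind of analysis relies on: many distinct accounts posting identical text within a narrow time window. The data, threshold, and window are invented for illustration; real investigations combine many more signals.

```python
# Illustrative sketch: flag possible coordinated inauthentic behavior when many
# accounts post identical text within a short time window. Data, threshold,
# and window are invented; real analyses combine many more signals.
from collections import defaultdict
from datetime import datetime

posts = [
    ("acct_01", "2020-10-02 08:00:05", "Vote is rigged, share now!"),
    ("acct_02", "2020-10-02 08:00:09", "Vote is rigged, share now!"),
    ("acct_03", "2020-10-02 08:00:12", "Vote is rigged, share now!"),
    ("acct_04", "2020-10-02 13:41:00", "Lovely weather in Yangon today."),
]

WINDOW_SECONDS = 60     # identical posts within this window look coordinated
MIN_ACCOUNTS = 3        # minimum distinct accounts needed to raise a flag

by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text].append((datetime.fromisoformat(ts), account))

for text, items in by_text.items():
    items.sort()
    first, last = items[0][0], items[-1][0]
    accounts = {account for _, account in items}
    if len(accounts) >= MIN_ACCOUNTS and (last - first).total_seconds() <= WINDOW_SECONDS:
        print(f"Possible coordination: {sorted(accounts)} posted identical text "
              f"within {(last - first).total_seconds():.0f} seconds")
```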
Digital Forensics
Digital forensics take a deep look at the data about posts (e.g., number of shared posts). Two of the
most-followed fake accounts removed by Facebook this year were focused on Philippine news.
Source: DFRLab
Figure 14 uses digital metrics to quantify the spread of disinformation about coronavirus by South
African fringe parties, showing how significant a role Facebook played. Forensics such as these are very
helpful in identifying both where and how people are engaged in the digital media ecosystem.137
The Atlantic Council's Digital Forensics Research Lab (DFRLab) publishes research along these lines.
Looking at how Facebook identified and removed disinformation posts associated with the United Russia
party, the research considers the wide scope of data that can be gathered, from likes to followers, and
the types of online activity carried out by suspicious accounts. A look at how these digital networks are
connected provides a map that can inform policy targeting the most central players, represented by the
larger circles: domains, social media accounts, implementing partner addresses, personas, and Google
Analytics IDs. Based on this network map, for example, one might expect that a counter-campaign would
target and watch for information passing through these channels. Facebook, for example, removed 40
specific user accounts, 17 pages, one group, and six Instagram accounts for coordinated inauthentic
behavior. 138 Digital forensics (often available as open-source software) provide data on the backend of
internet use—unique users, page clicks, and visits, for example—as critical clues about the spread and
origins of mis/disinformation. The approach can help supply the data needed for social network analysis
(SNA) in the digital realm and quantitative information on the reach of websites or other platforms that
is needed to inform counter-disinformation programming.

Data Analytics for Social Media Monitoring: NDI's Guidance on Social Media Monitoring and Analysis
Techniques, Tools and Methodologies is a guide to help researchers, election observers, technologists, and
others understand the best practices, tools, and methodologies for developing online observation and
monitoring for social media networks. It presents an introduction to the relevant concepts when studying
these issues, as well as a review of how to build a complete picture of the socio-technical context in a
country or region, including the local parties' online presence, social media and internet penetration
rates, local media, ethnic and religious divisions, and a host of other factors that manifest in the online
space. Available at:
https://www.ndi.org/sites/default/files/NDI_Social%20Media%20Monitoring%20Guide%20ADJUSTED%20COVER.pdf
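One concrete forensic technique alluded to above is matching shared web-analytics identifiers across suspicious sites: domains that embed the same Google Analytics account ID are often run by the same operator. The sketch below illustrates the general idea with a regular expression over already-collected page source; the domains, HTML snippets, and IDs are hypothetical.

```python
# Illustrative sketch: link domains that share the same Google Analytics tracking ID
# (a common attribution clue in digital forensics). Pages and IDs are hypothetical.
import re
from collections import defaultdict

# In practice the HTML would be fetched and archived; here it is inlined for clarity.
pages = {
    "fake-local-news.example": '<script>ga("create", "UA-1234567-1");</script>',
    "another-outlet.example":  '<script>ga("create", "UA-1234567-2");</script>',
    "unrelated-site.example":  '<script>ga("create", "UA-7654321-1");</script>',
}

GA_ID = re.compile(r"UA-(\d{4,10})-\d+")

# Group domains by the account portion of the tracking ID (the digits before the last dash).
by_account = defaultdict(list)
for domain, html in pages.items():
    for account in GA_ID.findall(html):
        by_account[account].append(domain)

# Domains sharing an analytics account are candidates for common ownership.
for account, domains in by_account.items():
    if len(domains) > 1:
        print(f"Analytics account {account} appears on: {', '.join(domains)}")
```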
For more information, see the Media Monitoring section of Annex 5: Section-by-Section Resources.
Open-Source Intelligence (OSINT) is another strategy utilized for identifying and debunking
disinformation. This refers to the multi-method approach of collecting and analyzing free, publicly
available information and cross referencing it against other public sources. 142 Publicly available
information often includes material from satellite images, social media posts, YouTube videos, and online
databases, among other sources. 143 OSINT is noted for its accessibility as a free tool that anyone can use.
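As one small, hedged example of an OSINT building block: many image files carry embedded EXIF metadata (camera model, capture time, sometimes GPS coordinates) that can be cross-referenced against claims about where or when a photo was taken. The sketch below assumes the open-source Pillow library is installed; the filename is a placeholder, and in practice such metadata is often stripped by social media platforms.

```python
# Illustrative OSINT sketch: read EXIF metadata from an image to cross-check a claim
# about where/when it was taken. Requires the Pillow library; the file is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("photo_to_verify.jpg")
exif = image.getexif()

if not exif:
    print("No EXIF metadata found (it may have been stripped).")
else:
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, tag_id)  # translate numeric tag IDs to names
        print(f"{tag_name}: {value}")
```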
For more examples, see the OSINT section of Annex 6: Section-by-Section Resources.
Looking forward, combatting disinformation will remain a serious challenge for societies around the
world and a danger to democratic governance. As technology continues to outpace the development of
solutions, disinformation tactics will adapt and innovate to remain effective. This cycle, in which new
disinformation techniques are met by solutions that offer only temporary patches, is becoming more
sophisticated as the global competition over the control of information intensifies.
Future action should concentrate more on critical research and the expansion of knowledge on
technology innovation, programming, and information systems. There remains ample opportunity to
explore and develop more tech-based tools and approaches. However, to best address disinformation,
action and research cannot be left to technology experts alone: the general public and civil society
organizations need to gain a basic understanding and grasp of digital and information literacy.
The following sections present a sample of trending disinformation tactics, rising threats, and potential
opportunities. As technology continues to innovate and learn from its previous shortcomings, new
evolutions of tools and tactics present concern for future information disorder. From the expansion of
artificial intelligence capabilities to the exploitation of existing vulnerabilities, anti-disinformation
approaches will face new challenges from varying angles.
Although local newspapers have tried to transition to digital operations, the rise of Big Tech has
inhibited their success. With Facebook and Google sharing 80 percent of the digital ad market, smaller
organizations are left competing amongst themselves for the remainder of the market, limiting the
amount of ad revenue they can generate.146 Steve Cavendish explains that “print dollars that many news
chains have walked away from have been replaced by digital dimes or even digital pennies,” leaving them
to scale back or close. Ultimately, the rise in news deserts may result in more people turning to social
media as their primary sources for news and information. While social media platforms may be widely
accessible, they continue to be channels for disinformation, misinformation, and malinformation to
spread. These platforms are not a replacement for the institution of a democratic, free media.
Consequently, disinformation actors exploit local news deserts, which has led to a new and growing
phenomenon called “pink slime journalism,” 147 a low-cost way of distributing thousands of
algorithmically generated news stories, often with political bias. Designed to look like real, local news
sites, they are in fact low-cost, automated sites that often push partisan agendas. These pink slime sites
capitalize on news deserts left when regional newspapers go broke. While these stories can be factual,
they are not based on investigation and may parrot fake claims made in news releases or from opinion
leaders. Increasingly, pink slime operations are funded by political parties in the United States or by
foreign governments (e.g., Iran), highlighting a critical need for transparency.
Pink slime propagators own multiple newsletters and outlets, which enables them to be profitable and
makes the niche media essentially a pay-to-play proposition akin to advertising. 148 Because it forgoes
critical analysis, pink slime journalism is an effective megaphone for disinformation.
While content circulating in alternative media systems, such as conspiracy theories, may seem farcical or
preposterous to an outsider, these spaces enable users to collaborate on and validate claims and
interpretations of the world that differ from "mainstream" sources. 152 With this, individuals contribute
their own “research” to the larger discussion, collectively reviewing and validating each other to create a
populist expertise that justifies, shapes, and supports their alternative beliefs.153 As these discussions
become larger, “mainstream” institutions may pick up on the issue but because they do not understand
the platform or alternative media system more generally, they may unknowingly provide wide coverage
of misleading information.
In Nigeria, for example, after disinformation campaigns rattled the country’s 2019 elections, the Nigerian
senate took up a “social media bill” to criminalize the posting of false content on social media if the
content is deemed to destabilize public trust in the government, attempt to influence elections, or
compromise national security. 154 Critics say the bill will jeopardize digital freedoms of expression while
granting the government sweeping, unchecked authority over the country’s media environment.
Nigeria’s social media bill is almost identical to Singapore’s Protection from Online Falsehoods and
Manipulation Bill (POFMA). The October 2019 legislation established a nine-member committee to
preside over the prohibition of posting politically motivated false statements and the creation of
inauthentic online accounts (bots or sock puppet accounts) within digital platforms. The committee can
charge individuals or entire media outlets if content is deemed false or misleading or if implementation
of the law serves greater public interest. 155
Some governments believe that internet shutdowns or slowdowns are a solution to the problem; they
are not. According to AccessNow, shutdowns harm journalism, access to information, education, refugees,
healthcare, and business, and violate access to the internet as a fundamental right in the 21st century.
AccessNow’s #KeepItOn campaign reported 216 internet shutdowns worldwide in 2019. These
broadband and mobile network disruptions represent 1,706 total blackout days in 33 countries. 158
National and local governments that implement internet or platform blackouts often justify the action as
a measure of public safety or national security against the social harms of fake news. However,
international free speech and press freedom advocate organizations denounce blackouts as authoritarian
and hazardous during public crises where impediments to current and accurate information is life-
threatening. The issue of internet shutdowns is important to monitor because it opens a pandora’s box
that threatens several areas USAID programming addresses and the use of broadband and mobile
networks is often critical to program outreach. Moreover, the economic impact of internet shutdowns is
substantial: they cost an estimated $2.4 billion between July 1, 2015 and June 30, 2016, according to the
Brookings Institution. 159 And, just a few years later, the trend had worsened. Research firm Top10VPN
published a report analyzing the economic impact of internet shutdowns throughout the world in 2019.
Its research traced 18,225 hours of internet shutdowns worldwide that year and estimated a total
economic loss of $8.05 billion. 160
Source: https://www.accessnow.org/cms/assets/uploads/2020/02/KeepItOn-2019-report-1.pdf.
The governments of Cameroon and Venezuela have also engaged in internet shutdowns and platform
bans. However, in both countries, the governments have used network access as leverage within existing
political conflicts. In Cameroon, English-speaking regions lived without internet access for 240 days in
2017, amid continued civil unrest. Similarly, in Venezuela, President Nicolás Maduro has used internet
shutdowns frequently over the last seven years and, in the last year, access to Facebook, SnapChat,
Instagram, Google, Twitter, and YouTube vacillated during periods of heavy civilian protest. Both
governments have wielded network access to stifle dissent, expand pluralistic ignorance, and silence
oppositional voices in digital spaces.

Internet Blackout in Sri Lanka Harmful for Civic Space
In Southern Asia, where blackouts have become common, the Sri Lanka government disabled access to
multiple platforms (Facebook, WhatsApp, YouTube, Instagram, SnapChat, and Viber) after the 2019
Easter bombings, claiming the bans protected citizens from misleading, unverified, or speculative content
during the national crisis. The president also pointed at digital platforms as an enabling space for
terrorism and hate groups, such as the one responsible for the bombings. The ban had a significant
impact on the public in a country where Facebook is often a primary means of both news information
and communication with loved ones.

D. DEEPFAKES AND CHEAP FAKES
Deepfakes are videos, images, and audio generated using artificial intelligence to synthetically render
realistic depictions of speech and action. 161 Most notably, the technology has been used to manipulate
facial expressions and speech, as well as to swap the faces of individuals into videos. While altered and
manipulated content already circulates online, the development of deepfakes intended to misinform will
"significantly contribute to the continued erosion of faith in digital content," according to Brookings. 162
Cheap fakes are audio-video manipulations created with cheaper, more accessible software (or none
at all). 163 They may be subtle, such as slowing down the speed at which a video is played to make the
speaker's speech appear slurred, or altering the background or an otherwise insignificant aspect of a
picture.
Many artificial intelligence tools, including deepfake-specific technologies, are free and openly accessible,
enabling the creation of fake content to expand readily. 164 In the future, cheap fakes are likely to be
uploaded by amateurs with satirical or political motives, as well as by influence campaigns of foreign origin
or with advertising goals. Whatever their aim, the wider proliferation of cheap fakes in online spaces will
further blur the authenticity of the digital world. 165
To date, ongoing research and mitigation efforts have concentrated on automated deepfake detection,
developing algorithms to recognize modified media. Academic studies have also uncovered indicators of
deepfakes, including unnatural blinking patterns, distorted facial features, lighting inconsistencies, and
more. 166 However, as deepfake technology continues to rapidly improve, these efforts are likely to be
short lived. More funding and research are needed to support the discovery and development of longer-
term solutions and tools to detect deepfakes.
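As a toy illustration of the blink-rate indicator mentioned above (not a production detector), the sketch below assumes a separate face-analysis model has already produced a per-frame eye-openness score; the scores, thresholds, and "normal" blink range used here are placeholders.

```python
# Toy illustration of the blink-rate heuristic for spotting possible deepfakes.
# Assumes a face-analysis model has already produced a per-frame eye-openness score
# (e.g., an eye aspect ratio); values and thresholds here are hypothetical.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count frames where the eyes transition from open to closed."""
    blinks, previously_closed = 0, False
    for score in eye_openness:
        closed = score < closed_threshold
        if closed and not previously_closed:
            blinks += 1
        previously_closed = closed
    return blinks

def blink_rate_is_suspicious(eye_openness, fps=30, normal_range=(8, 30)):
    """Flag clips whose blinks-per-minute fall outside a typical human range."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    blinks_per_minute = count_blinks(eye_openness) / minutes
    return not (normal_range[0] <= blinks_per_minute <= normal_range[1])

# Example: a 60-second clip at 30 fps in which the subject never blinks.
scores = [0.35] * (30 * 60)
print(blink_rate_is_suspicious(scores))  # True: zero blinks per minute is atypical
```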
Wardle and Derakhshan in their disinformation report for the Council of Europe provide
recommendations targeted at technology companies, national governments, media organizations, civil
society, education ministries, and funding bodies, with each broken into its own set of suggestions. 172
(See table below to find the recommendations.)
FACT-CHECKING APPROACHES
Why does fact-checking matter, and is it effective? Fact-checking, just like journalism, is about informing
audiences about the truth. Even if it is a limited audience, or if people disagree, calling out disinformation
attempts to hold people and institutions accountable. Sometimes fact-checking efforts are picked up by
larger outlets and reach wider audiences, but even if they are, publishing fact-checking is only the first
step in an incremental process. When watchdogs expose untruths, this can become a resource for
public action, particularly when mobilized by political campaigns or social movements. They can also
identify trends and help trigger an institutional response by regulators, courts, or legislators.
In the last few years, fact-checking organizations have become more effective and grown tremendously.
The Reporter’s Lab hosts a database of reporting projects that regularly debunk political misinformation
and viral hoaxes. As of 2019, the database counts 210 active fact-checking organizations in 16 countries,
up from 59 sites tallied in 2014. 175
Bringing these organizations together, the International Fact Checking Network provides resources,
monitors trends, and promotes basic standards for fact-checking organizations through its code of
principles. 176 Many of these organizations work in USAID program countries; it is worth reaching out to
them when developing disinformation programming. A few examples of fact-checking organizations are
included in Annex 3: Emerging Solutions.
The Effectiveness of Fact-Checking
The effectiveness of fact-checking is the subject of numerous studies, which have often produced
contradictory results. In one key study, researchers from three universities teamed up to analyze the
existing evidence and determine the effectiveness of fact-checking across the research literature. The
resulting meta-analysis sheds light on the fact-checking landscape and what works in fact-checking. It
found that:
– Fact-checks that use graphical elements are less effective than those without them, 177 and simple
messages are more effective when informing the public about false information. 178
– Fact-checking an entire statement, rather than individual elements of it, is more effective.
– Debunking is more effective when the idea being refuted is not aligned with a person's own
ideology; fact-checks that challenge ideologically congenial claims are less persuasive. 179
While it is important to consider the limitations of fact-checking—it is not a panacea for all
disinformation—fact-checking is highly prized by many people and is especially important for media
outlets and others who are in the business of investigating and reporting facts.
The Redirect Method, originally used to reach individuals susceptible to ISIS recruitment, uses ads to
redirect users who seek out mis/disinformation online to curated YouTube videos, uploaded by individuals
around the world, that debunk those posts, videos, or website messages. It is now being adapted in several
countries to combat vaccine hesitancy and hate speech. 180 The method was developed through a
collaboration among private and civil society organizations and is documented on The Redirect Method
website, which lays out 44 steps for organizing an online redirect campaign. Redirection is also being
employed in other ways by groups around the world.
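The sketch below is a stripped-down, hypothetical illustration of the redirect idea, not the actual Redirect Method implementation: content matching known disinformation keywords is mapped to curated debunking resources. The keywords and URLs are placeholders.

```python
# Stripped-down illustration of the redirect idea (not the actual Redirect Method):
# match content against known disinformation keywords and offer a curated debunk.
# Keywords and URLs are hypothetical placeholders.

CURATED_RESOURCES = {
    "vaccine microchip": "https://example.org/debunks/vaccine-microchip",
    "5g spreads virus": "https://example.org/debunks/5g-virus",
}

def suggest_redirect(query):
    """Return a curated debunking resource if the query matches a known narrative."""
    normalized = query.lower()
    for keywords, url in CURATED_RESOURCES.items():
        if all(word in normalized for word in keywords.split()):
            return url
    return None

print(suggest_redirect("Is it true that the VACCINE contains a microchip?"))
# -> https://example.org/debunks/vaccine-microchip
```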
Other researchers have also pursued the prebunking track, including Dutch researchers who developed
a game called Bad News,183 which helps people spot misinformation and disinformation. According to
the developers, it helps people to talk about the truth and reduces their susceptibility to misinformation.
Based on initial research on users who have played Bad News, the approach has been effective, improving
players' psychological immunity against online disinformation. Another prebunking game (funded by
the Global Engagement Center) is called Harmony Square. This game also draws on inoculation theory;
learn more in the Harvard Misinformation Review article about it.
However, when democracies pass new laws to curb the spread of disinformation, authoritarians can
adopt these laws as yet another tool to criminalize free expression. So, there is great risk of making the
situation worse by regulating online speech. A very clear example is Germany's NetzDG law (requiring
social media to take down harmful content) that was later adopted by Russia, Venezuela, and other
countries as a means of silencing opposition.184
The Advisory Network to the Media Freedom Coalition, a group of 17 national, regional, and
international organizations, delivered a statement at the ministerial meeting of the 2020 Global
Conference for Media Freedom and put forward guidance to support legal and policymaking advocacy
that seeks to address the disinformation problem.
Because disinformation is considered a wicked problem, some have called for new ways to regulate the
free flow of information with co-regulatory models. Some ideas were outlined recently in the
Disinformation as a wicked problem: Why we need co-regulatory frameworks policy-paper from
Brookings Institution. 186
A few examples of resources for legal and policy approaches are included in Annex 3: Emerging
Solutions.
MESSAGING CAMPAIGNS
Messaging campaigns have provided a new front where disinformation can easily spread to large
numbers of people. Messaging campaigns refer to false information passed through networks like
Facebook Messenger, Twitter Direct Message, and WhatsApp private and group messaging. These
large-scale campaigns are difficult to track because of the encryption available on most messaging apps.
Despite this added difficulty, new top-down and bottom-up approaches are being used to curb
disinformation campaigns on messaging platforms.
As social media has become a primary way for many individuals to receive their news, funding for local
journalism has decreased and better support is needed. 191 This change is largely a result of advertising
revenue being redirected toward online media sources. Global newspaper advertising is projected to
lose about $23.8 billion in annual revenue between 2012 and 2021. More than 10 percent of this decline,
around $3 billion, is an estimated loss of annual revenue for local news media around the world. 192
news media around the world. 192 Television news is still holding its own in places such as the Philippines;
however, more than 80 percent of Filipinos say they now go online for their news and spend four of
their 10 hours online accessing social media.193
Internews builds lasting change by ensuring people have access to quality, local information. To do
so, Internews works with local partners to grow sustainable organizations and offers capacity-
building programs for media professionals, human rights activists, and information entrepreneurs. 194
IREX promotes “vibrant information and media systems.” IREX supports journalism and media
organizations through trainings on reporting, media law, media safety, and digital security. IREX
also provides additional support to consumers via media literacy programs, training citizen
journalists, and diversifying and distributing television content. 195
International Center for Journalists (ICFJ) seeks to build the expertise and storytelling skills of
journalists around the world. ICFJ focuses on five key areas: news innovation, investigative
reporting, global exchange programs, specialty journalism, and diversity promotion. 196
TRANSPARENCY IN JOURNALISM
As the spread of disinformation online grows and even reputable news agencies have mistakenly shared
false information, there is a need for better trust and transparency standards to hold media agencies
accountable. Some very basic standards such as bylined articles, public display of address and registration
information, and disclosure of funding sources and editorial board, for example, can assist readers to
understand the sources and reliability of the information they read. By developing standards for media
agencies, journalism will be held to greater account for what it publishes. Examples of ongoing initiatives
aimed at rebuilding trust and strengthening transparency in journalism are included in Annex 3: Emerging
Solutions.
ADVERTISER OUTREACH
In order to disrupt the funding and financial incentive to disinform, attention has also turned to the
advertising industry, particularly with online advertising. A good example of this is the concerted
response to the discovery of the websites traced to the village of Veles outside of Skopje, Macedonia,
which showed how easy it was to exploit the digital advertising model to flood the news ecosystem with
fabricated content. Wired Magazine profiled the Veles case study in 2017. 199
As most online advertisers are unaware of the disinformation risk of the domains featuring their ads due
to the automated process of ad placement, they inadvertently are funding and amplifying platforms that
disinform. 200 Thus, cutting this financial support found in the ad-tech space would obstruct
disinformation actors from spreading messaging online. Efforts have been made to inform advertisers of
their risks, such as the threat to brand safety by being placed next to objectionable content, through
conducting research and assessments of online media content. Additionally, with this data, organizations
aim to redirect funding to higher-quality news domains, improve regulatory and market
environments, and support innovative and sustainable models for increasing revenues and reach.
A few examples of media literacy education and training are included in Annex 3: Emerging Solutions.
To build societal resilience against disinformation, the public must be aware of and understand the issue.
Public awareness campaigns serve to inform and educate communities about disinformation, build public
recognition of it, and promote actions or other resources for combating it.
A few examples of civil society advocacy and public awareness campaigns are included in Annex 3: Emerging
Solutions.
An example is the Digital Society Project, which seeks to provide data on the intersection of politics and
social media. Topics include online censorship, polarization, misinformation campaigns, coordinated
information operations, and foreign influence in and monitoring of domestic politics. It uses the Varieties
of Democracy (V-Dem) framework, also used by USAID in Journey to Self-Reliance (JSR) metrics, to
assess various digital issues including misinformation.
Elections-focused Programming Efforts (both Supply- and Demand-side Solutions)
To defend the integrity of democratic elections worldwide, reducing disinformation has been central to
elections-focused programming, with common goals that include:
– Building the capacity of election management bodies to combat disinformation during electoral
periods.
As social media has changed how audiences consume and spread political information, focus has shifted
to monitoring the transformed information environment before and during election periods. Emphasis
has been placed on citizen election and campaign finance monitoring, focusing on exposing
disinformation on social media and other digital platforms during election campaigns. Election
management bodies (EMBs) tend to have significant power to regulate speech and can set some ground
rules. Media platforms have been compelled by EMBs to be transparent about political advertising during
an election season.
A report produced by the Kofi Annan Commission on Elections and Democracy in the Digital Age puts
forward recommendations to governments and internet platforms to safeguard the legitimacy of
elections. As the report notes, “For the foreseeable future, elections in the democracies of the Global
South will be focal points for networked hate speech, disinformation, external interference, and
domestic manipulation.” 202
Disinformation assessments have also been incorporated within international election observation and
management missions. With this, organizations have offered trainings and guidance for election bodies
and other stakeholders in regulating speech, campaign violations, and the prosecution of violations.
There is also advocacy working to motivate social networking platforms, advertisers, governments, and
other parties to improve the information environment with pertinence to the integrity of elections. In
particular, advocacy has aimed to achieve greater transparency in political advertising, close fake accounts
and bots, and de-monetize suppliers of disinformation. An example of this is the European Commission's
“Code of Practice on Disinformation,” a self-regulatory Code of Practice to fight disinformation and
ensure transparent, fair and trustworthy online campaign activities ahead of the 2019 European
elections. Signatories included Facebook, Google, Twitter, Mozilla, Microsoft, and TikTok. 203
In more closed environments, with governments seeking to suppress opposition, trainings for political
candidates, parties, and activists have become valuable resources for learning how to disseminate
messages in adverse and media-saturated communities. An example of this work in more closed
environments is NDI’s “Raising Voices in Closing Spaces,” a workbook offering a step-by-step approach
to strategic communications planning for citizen election observation groups and other civil society
organizations to break through in closing or closed political environments. The guide offers strategies,
tactics, and hypothetical and real-world examples for overcoming challenges in repressive
environments. 204
Though it can be well-intentioned, criminalizing the spread of disinformation in digital spaces presents
even more negative ramifications for democratic governance. In countries where disinformation
legislation has been introduced, creating or sharing digital content and information becomes risky and
punishable. The resulting chilling effect ultimately hampers democratic protections for a free press and
the freedom of expression. Likewise, wholesale internet or platform shutdowns pose the same
anti-democratic results while also creating information vacuums within communities amid public health
or safety crises. As USAID negotiates the entangled, disinformation landscapes that intersect global and
local programming, programs should be wary both of legislative efforts to limit expression online and
internet/platform blackouts for the dangers they pose to democracy.
Whole-of-Government Approaches
There are very few examples of whole-of-government approaches (integrated activities of government
agencies increasing coherence in policy development on an issue) in the disinformation space.
Policymaking is likely to take time and will need to be significantly informed by the private sector and
civil society. Two examples in Scandinavia are discussed to illustrate this approach.
In Sweden, for example, the whole-of-government approach (a comprehensive approach that involves
government, private and civil society sectors and media) attempts to be as multi-faceted as the issue
itself. Focused on maintaining the integrity of its electoral processes, the Swedish government has
poured resources into shoring up its electoral infrastructure by safeguarding electronic voting machines
and providing new training for civil servants and voting officials. The government, electoral bureaus and
cybersecurity agencies have committed to forming an inter-agency cooperative to combat the issue. 205
Sweden has also focused on bettering public media literacy through enhancing high school curriculums
as a long-term strategy against information disorder. Lastly, five of the country’s most prominent media
news outlets have created a scrupulous and highly visible fact-checking collaboration to maintain the
integrity of Swedish media and spread mutual oversight over multiple media stakeholders. 206 Sweden’s
approach has required weaving together resources to defend the many susceptible ports of entry that
misinformation and disinformation penetrate in media landscapes and societies.
In Finland, response to the global information crisis has included a sweeping media literacy education
effort. In junior high and high schools, Finnish students use a digital literacy toolkit in order to
understand how to implement best practices in their own social media engagement. In the world of
electoral and media professions, Finland has dedicated resources to disinformation trainings and
workshops for civil servants and journalists. 207 This approach uses public pedagogy and a more robust
secondary school media literacy curriculum to intervene at every level of society, in hopes of building
broad resilience to information disorder.
EUvsDisinfo is a flagship strategic communications project of the European External Action Service's
East StratCom Task Force, established in 2015 to "increase public awareness and understanding of the
Kremlin's disinformation operations, and to help citizens in Europe and beyond develop resistance to
digital disinformation and media manipulation." 210 EUvsDisinfo works in three major areas to stymie
global disinformation. The project conducts data analytics of media spaces to identify and publicize
disinformation campaigns started by the Kremlin or pro-Kremlin media outlets; the data is archived in an
open-source, dedicated database that tracks disinformation around the world. The project also publishes
summaries, articles, and reports both within the research community and for wider general readership
in order to make a technical and complex issue accessible to the broader global communities it impacts.
Lastly, EUvsDisinfo engages with governments, media outlets, universities, and civil society organizations
by offering training resources to better prepare and problem-solve around issues of information
disorder.
Another example of strategic communications is The Atlantic Council, a “nonpartisan organization that
galvanizes U.S. leadership and engagement in the world, with allies and partners, to shape solutions to
global challenges.” 211 Among the numerous capacities through which Atlantic Council conducts
international work, their recently published Democratic Defense against Disinformation 2.0 provides a
current snapshot of global disinformation campaigns at work, including proposed solutions, progress
made, and criticisms and recommendations made to the U.S. executive and judiciary branches regarding
urgent steps needed in the path toward a healthier information ecosystem.
Legal and Regulatory Measures
While there are examples of legal and regulatory measures intended to address disinformation, the
difficulty is that many are controversial because they also restrict freedom of expression.
To achieve good legal and regulatory approaches to disinformation will require civil society and
independent media organizations to have a significant voice in policymaking through multi-stakeholder
collaboration. Governments face very real tradeoffs in developing regulations on digital rights,
disinformation and dangerous speech, while balancing protections for freedom of speech, transparency,
and privacy. Regime type is an important variable to consider.
Guidance on good practices regarding regulation and policymaking in this area was put forward by the
UN Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-
operation in Europe (OSCE) Representative on Freedom of the Media, the OAS Special Rapporteur on
Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special
Rapporteur on Freedom of Expression and Access to Information, with Article 19 and the Center for
Law and Democracy. These standards can be useful in evaluating legal and policymaking processes and
include the following:
b. Criminal defamation laws are unduly restrictive and should be abolished. Civil law rules on
liability for false and defamatory statements are legitimate only if defendants are given a full
opportunity and fail to prove the truth of those statements and benefit from other defenses,
such as fair comment.
c. State actors should not make, sponsor, encourage, or further disseminate statements which
they know or reasonably should know to be false (disinformation) or which demonstrate a
reckless disregard for verifiable information (propaganda).
d. State actors should, in accordance with their domestic and international legal obligations and
their public duties, take care to ensure that they disseminate reliable and trustworthy
information, including about matters of public interest, such as the economy, public health,
security, and the environment. 212
In considering how and by whom platforms should be held accountable, discussion extends to platform
governance. Robert Gorwa, a scholar from the University of Oxford, explains platform governance as an
“approach necessitating an understanding of technical systems (platforms) and an appreciation for the
inherently global arena within which these platform companies function." 217 In essence, the approach
necessitates more knowledge about how platforms govern or influence publics via their platform
practices, policies, and affordances while also acknowledging that their conduct is informed by local,
national, or international forms of governance. 218
Currently, there are three governance “lenses” being utilized for thinking about social media platform
regulation and policy: (1) self-regulation, (2) external governance, and (3) co-governance. The first lens
represents the presently dominant mode of governance, a rather laissez-faire relationship between
governing institutions and social media companies, in which the companies dictate their own platform
improvements and transparency efforts. The second lens encapsulates governments enacting legislation
regulating social media platforms. Lastly, the third lens seeks to provide greater democratic
accountability without severe disruption to the status quo. As noted earlier, the challenge with this
approach is striking a balance between combatting the spread of disinformation and harmful content
while also preserving free expression online. Proponents of this lens have proposed the creation of
specialty organizations, third-party adjudication systems, or ombudsman programs that would act to
address and investigate platform complaints as well as set ethical frameworks and standards, among
other tasks. 219 Some examples are ICANN, a not-for-profit, public benefit organization that regulates
domain names, and the Internet Governance Forum, which includes all stakeholders on governance
questions in open and inclusive fora.
Models for Platform Governance: An essay series on global platforms’ emerging economic and social power,
Centre for International Governance Innovation
Initiative (Social Media Platform) – Notes
– Fact-Checking Program (Facebook): With a three-step approach of "identify, review and act," Facebook
has independent fact-checkers assess and rate the accuracy of stories through original research. Facebook
may act by reducing distribution, including sharing warnings, notifying previous sharers of misinformation,
applying misinformation labels, and removing incentives for repeat offenders. 220
– Oversight Board (Facebook): This internationally diverse group formed to protect freedom of expression
by making independent decisions on content issues and providing policy advisory opinions to Facebook. 221
Note: Facebook's Oversight Board has been the subject of much criticism, as highlighted in an article in the
Harvard Business Review. 222
– Expansion of Hate Speech Policy to Include Holocaust Denial (Facebook): In response to the rise in
anti-Semitism globally, Facebook expanded its hate speech policy to include Holocaust denial. Any content
that "denies or distorts the Holocaust" is now banned. Note: Facebook's decision to undertake action on
hate speech related to Holocaust denial is also perceived with skepticism and critique.
– New Warning Labels about COVID-19 Information (Twitter): Twitter added new labels on tweets about
COVID-19 that link to a Twitter-curated page or external trusted source for additional information on the
claims of the tweet. Tweets conflicting with public health experts would be covered by a warning from
Twitter. Misleading information with a severe propensity for harm would be removed from the site. 223
– Birdwatch (Twitter): Twitter is potentially developing a new product called "Birdwatch," which will likely
involve crowdsourcing to address disinformation and provide more context for tweets in the form of notes.
– Encouraging Users to Read Articles Before Sharing (Twitter): Twitter tested a feature on Android that
will prompt a message suggesting the user should read an article before sharing it. 224
* Regarding Facebook, numerous experts from academia, journalism, civil society, government, and other sectors point out the
outsize problem that the company creates in terms of the spread of disinformation. These critiques make clear that the social
media giant struggles to strike a balance between allowing free expression and enabling the spread of potentially dangerous hate
speech, extremism and disinformation. 225 These concerns should be taken very seriously, as Facebook is the biggest social
network in the world, with more than 2.7 billion users. 226
VII. PART SEVEN: TEN THINGS USAID AND ITS PARTNERS CAN
DO TO COUNTER AND PREVENT DISINFORMATION
As noted in the primer’s introduction, information disorder is a wicked problem. Like other seemingly
insurmountable challenges, there is a need to tackle the problem in a number of ways. Any solutions to
the root causes of information disorder will need to work in concert with other issues, such as better
governance, cleaner elections, stronger accountability—the whole spectrum of DRG issues. Simply put,
disinformation is the big spoiler of all the DRG issues. It’s not just about doing one little thing, however;
it’s about a lot of little things, done in an interactive, adaptive way and over a period of time.
While it may not be possible to eradicate information disorder, doing nothing is not an option.
There are steps that must be taken to help alleviate the problem and ensure that the progress and
successes USAID has made around the world are not jeopardized.
Note: you will want to make your work as context specific as possible and do original, diagnostic
research to determine the best course of action for implementing more effective strategies.
(i) What are likely disinformation themes, aims, and main sources of disinformation?
(ii) Who is most vulnerable to the disinformation being spread?
(iii) What are the common disinformation narratives?
(iv) Who are the main disinformation purveyors?
(v) How does it spread across communities, and who are the most vulnerable?
(i) What is the makeup of mainstream media, including state, public service, and private
media as well as community media and alternative media?
(ii) What is the social media landscape like in your country, and what are the most popular
platforms (i.e., Facebook, Twitter, WhatsApp, etc.)?
(iii) What are the most common ways that different segments of the population consume
information?
(iv) Where do different people get their information? Is it online or through TV, radio, print
or peer-to-peer sources?
(v) How do people access the internet? (mobile/desktop)
(c) What is the media literacy landscape? What are the media literacy levels?
(e) What does current research say about the state and nature of the media and social media,
including levels of press freedom, internet freedom and any threats or vulnerabilities that exist
regarding access to information? 227
2. Carry out actor mapping. Assess the key stakeholders who contribute in positive and negative
ways to the media and information ecosystem in your country.
(b) Are there civil society organizations, associations, or institutions that seek to root out
disinformation, increase public awareness, engage in fact checking, or contribute in a positive
way to countering the disinformation problem?
(c) Who are the allies, and who are the foes? Are there media, civil society, academics, think tanks,
businesses, or government agencies that you can work with?
(d) Are there gaps in the actors (fact-checking or media development organizations, for example),
and could these gaps be addressed by partnering with international CSOs or companies?
3. Support media literacy initiatives. Given the changing nature of mass media and news and
information consumption habits, youth as well as other segments of the population benefit from
opportunities to learn critical thinking skills and to acquire the knowledge to spot disinformation
and other aspects of information disorder. Participants of media literacy programming can also serve
as good boosters or spreaders of critical thinking. However, media literacy, like other aspects of
countering disinformation, requires sustained, long-term investment rather than one-off interventions.
4. Fund independent media and local public interest journalism. High-quality, independent,
local news is under threat. Endangered by changing business models, big tech’s domination of the
digital advertising market, and pressures stemming from media capture, concerns over press
freedom, and the overall safety of journalists, the future of news will require substantial investment
and partnership with donors as well as experimentation with different models and approaches. As
advocated by the Knight Foundation, one of the best forms of resilience against mis- and
disinformation is journalism. 228
6. Stay up to speed. Disinformation is a growing field and threat, and there is a need to stay current
on new research and tools for identifying disinformation trends and threats and to stay abreast of
the different disinformation narratives that could undermine USAID’s work. While this requires
sustained effort, the cost of inattention is too high.
7. Support internet governance and digital rights initiatives. Digitalization and the spread of
internet technologies has already led to the next frontier of information access. It has, unfortunately,
also provided another way to prevent citizens from accessing information. Criminals and politicians
are known to muddy the waters and engage in malign activities, often using the internet and the
disinformation playbook to gain money and power. Just as human rights advocates have argued that
internet access is a human right and that there is a fundamental right of access to information, there
is also a right not to be disinformed. The internet freedom and digital rights sector that USAID has
supported and partnered with around the world is a vital source of expertise and collaboration in
the fight against disinformation. Consider potential partnerships with digital security trainers, digital
forensic investigators, digital literacy specialists, internet governance experts, and others who can
work with your local USAID programs as part of democracy and governance programs. For
example, working with the Internews-led Internet Freedom Consortium, a program funded by
USAID, can help to address these issues in countries where USAID works. 230
9. Collaborate and engage with other international partners (civil society, media, social media
platforms, internet governance forums, and other donors). Countering and preventing
disinformation programming is a growing field and one that is likely to be a major component of
programs for some time to come. Given the rising influence of both Chinese- and Russian-backed
disinformation efforts that put USAID’s democracy and governance programs at risk, look for ways
to form synergies with others working to support and strengthen democracy in the country where
you operate. Collaborative partnership will be stronger, and funding can be better leveraged to
support smarter program designs, with shared goals and objectives to help local civil society, human
rights organizations, and independent media firmly establish their presence as part of efforts to
combat the information disorder problem. As part of a push for collaboration, support
multidisciplinary approaches. Some of the best examples in countering and preventing disinformation
involve collaborative approaches that include data scientists, social scientists, and journalists.
10. Measure the impact of your efforts to counter and prevent disinformation. Monitoring
and evaluation (M&E) are vital to helping USAID colleagues and others understand the relevance,
effectiveness, efficiency, and impact of countering disinformation programs. Because there are so
many unknowns in terms of what works and what does not in the field of countering disinformation,
research and learning opportunities are important, and data collected should be shared with both
USAID and its implementing partners. To support robust M&E, partner with local research firms,
institutes or universities. There is a growing number of specialists with the right research skills and
social science and computer science training to support your M&E programs in this space. Key skills
include: social network analysis, content analysis and media monitoring, discourse and narrative
analysis, and social media analysis. Before you begin new programs, make sure to capture baseline
data, and include appropriate funds to also carry out an endline study. The “before and after”
aspects of what you can capture through M&E are essential to program reporting.
Term Definition
Algorithm An algorithm is a fixed series of steps that a computer performs in order to solve a
problem or complete a task. For instance, social media platforms use algorithms to
compile the content that users see. These algorithms are designed to show users
material that they will be interested in, based on each user’s history of engagement
on that platform. (Shorenstein Center, 2018)
Backfire Effect This effect occurs when beliefs are reinforced by the very attempt to debunk them.
Black box algorithms/Black hat SEO (search engine optimization) Describes aggressive and illicit strategies
used to artificially increase a website's position within a search engine's results: for example, changing the
content of a website after it has been ranked. These practices generally violate the given search engine's
terms of service as they drive traffic to a website at the expense of the user's experience. (First Draft)
Bot Bots are social media accounts that are operated entirely by computer programs and
are designed to generate posts and/or engage with content on a particular platform.
In disinformation campaigns, bots can be used to draw attention to misleading
narratives, to hijack platforms’ trending lists, and to create the illusion of public
discussion and support. (Shorenstein Center, 2018) Also see Sock puppet.
Cheap Fakes Audio or video manipulations created with cheaper, more accessible software (or none at
all). They may be subtle, such as slowing down the speed at which a video is played, making it appear
that the speaker's speech is slurred, or altering the background or an otherwise insignificant aspect of a
picture.
Clickbait Web content with misleading or sensationalist headlines that entices readers to click
through to the full story, which is generally a disappointment. Clickbait’s goal is
usually to generate page views and advertising revenue. (Hootsuite, 2019)
Content Farm A website or company that creates low-quality content aimed at improving its search
engine rankings. Also known as a content mill or factory, its main purpose is to
maximize page views and revenue generated by advertising on those pages while
minimizing the costs and time needed to create the content. 232
Content Moderation The process by which content is moderated on digital media platforms and users are
warned/blocked, according to public terms of service agreements and platform
“community standards.” (Data & Society, 2019)
Coordinated Inauthentic Behavior A term coined by Facebook in 2018 to describe the operation of running
fake accounts and pages on an online social platform in order to influence discourse among users.
Cyber Troops Government or political party actors tasked with the use of social media to
manipulate public opinion online. 233
Dangerous Speech Any form of expression (speech, text, or images) that can increase the risk that its
audience will condone or participate in violence against members of another group.
Deep Fakes Deepfakes are videos, images, and audio generated using artificial intelligence to
synthetically render realistic depictions of speech and action. 235
Digital Literacy The ability to “access, manage, understand, integrate, communicate, evaluate and
create information safely and appropriately through digital devices and networked
technologies for participation in economic and social life. This may include
competencies that are variously referred to as computer literacy, information and
communication technology (ICT) literacy, information literacy, and media literacy.”
Digital literacy includes both hard skills related to the use of hardware or software
and digital soft skills related to the use of digital media and information. (USAID,
2019)
Digital Media Digital media includes the user-generated content and underlying software
applications that make up internet-based communication tools, such as websites,
mobile applications, news aggregators, social media platforms, search engines, and
messaging/chat services. (USAID, 2019)
Digital Security The practice of understanding one’s digital footprint, identifying localized risks to
information systems and taking reasonable steps to protect one’s owned assets from
loss or capture. (USAID, 2019)
Echo Chamber An environment or social space where individuals are interacting with ideas and
beliefs similar to their own. Existing ideas are thus reinforced, and they avoid being
challenged by alternative ideas. 236
Fact Checking Fact-checking (in the context of information disorder) is the process of determining
the truthfulness and accuracy of official, published information such as politicians’
statements and news reports. (Shorenstein Center, 2018)
Filter Bubble A situation that arises by virtue of an algorithmic filter, where internet and social
media users interact with material that supports their own beliefs and ideas, and in
turn algorithms continue to provide the user with similar content. 237
Flooding Flooding is spamming with unsolicited or misguided junk mail, chat, or social media
messages such as advertising, brochures, or fake solicitations.
Hate Speech The use of speech to make direct attacks against an individual or a group of people
based on a series of protected characteristics, such as race, ethnicity, nationality,
religion, sex, sexual orientation, gender identity, and physical or mental ability.
(USAID, 2019)
Inauthentic Actors Individuals or organizations working to mislead others about who they are and what
they are doing. 239
Information Disorder A condition in which truth and facts coexist in a milieu of misinformation and
disinformation—conspiracy theories, lies, propaganda, and half-truths.
Internet Freedom The U.S. Government conceptualizes internet freedom as the online exercise of
human rights and fundamental freedoms regardless of frontiers or medium. The same
rights that people have offline must also be protected online—in particular, freedom
of expression, which is applicable regardless of frontiers and through any media of
one’s choice. (USAID, 2019)
Internet Governance The development and application by governments, the private sector, and civil
society, in their respective roles, of shared principles, norms, rules, decision-making
procedures, and programs that shape the evolution and use of the internet. (United
Nations, 2017)
Machine Learning A set of methods for using computers to recognize patterns in data and make future
predictions based on these patterns. Machine learning can be “supervised” or
“unsupervised,” depending on the level of human oversight. (USAID, 2019) Also see
Artificial intelligence.
Malinformation Deliberate publication of private information for personal or private interest, as well
as the deliberate manipulation of genuine content. Note that this information is
based on reality but is used and disseminated to cause harm. (Wardle &
Derakhshan, 2017)
Manufactured Amplification Boosting the reach or spread of information through artificial means. 240
Media literacy The ability to methodically consider and reflect on the meaning and source of a post
or news article.
Meme An idea or behavior that spreads from person to person throughout a culture by
propagating rapidly and changing over time. The term is now used most frequently to
describe captioned photos or GIFs that spread online; the most effective are
humorous or critical of society. (Shorenstein Center, 2018)
Message Monitoring Message monitoring analyzes the tropes, narratives, or specific messages that a bad
actor is putting forward. In this way, it monitors platforms to look at the key
messages that extremist or conspiracy groups are putting out to see if there are
specific messages that they are repeating in talking points.
Microtargeting Directing tailored advertisements, political messages, etc. at people based on detailed
information about them (such as what they buy, watch, or respond to on a website);
targeting small groups of people for highly specific advertisements or messages. 241
Misinformation Misinformation is information that is false, but not intended to cause harm. For
example, individuals who do not know a piece of information is false may spread it on
social media in an attempt to be helpful. (Shorenstein Center, 2018)
Natural Language Processing The field concerned with the relationship between computers and human
language content. It refers to the analysis of both audible speech and written text. NLP systems capture
meaning from an input of words (sentences, paragraphs, pages, etc.).
Open-Source Intelligence (OSINT) The multi-method approach of collecting and analyzing free, publicly
available information and cross-referencing it against other public sources. 242
Persona The creation of a representative user based on available data and user interviews.
Though the personal details of the persona may be fictional, the information used to
create the user type is not. Personas can help bring a target audience to life.
(Usability.gov, 2019)
Pink Slime Journalism A low-cost way of distributing thousands of algorithmically generated news stories,
often with political bias.
Platform Related A social platform is a web-based technology that enables the development, deployment, and management of social media solutions and services. It provides the ability to create social media websites and services with complete social media network functionality. (Examples include Facebook, LinkedIn, Twitter, Snapchat, Pinterest, Instagram, WhatsApp, TikTok, and others.) Some disinformation is purposefully spread through algorithms that target specific audiences or by inauthentic actors who gain a following on social media. Other disinformation spreads as an unintended consequence of the way we share information or gain followers on social media platforms, as with the Echo chamber and the Filter bubble.
Propaganda True or false information spread to persuade an audience; it often has a political connotation and is often connected to information produced by governments.
Redirecting Sending a user to a different site or reference that can serve to debunk or offer
context to information presented in social media or on a website.
Satire Satire is writing that uses literary devices such as ridicule and irony to criticize
elements of society. Satire can become misinformation if audiences misinterpret it as
fact. (Shorenstein Center, 2018)
Social Media Listening A means of attaining interpersonal information and social intelligence from social
media to understand how relationships are formed and influence the way we listen to
and communicate with one another. 244
Social Media Monitoring The process of identifying and determining what is being said about an issue, an individual, or a group through different social and online channels. It is also used by businesses to protect and enhance the reputation of their brands and products. The method uses bots to crawl the internet and index messages based on a set of keywords and phrases. 245
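A minimal sketch of the keyword-indexing step described above, assuming the messages have already been collected; the watch list and sample messages are invented for illustration.

```python
# Minimal sketch of keyword-based indexing; watch list and messages are invented.
from collections import defaultdict

watch_list = ["miracle cure", "stolen election", "5g"]

messages = [
    "This miracle cure is being hidden from the public",
    "Great weather for the festival today",
    "They claim 5G towers are making people sick",
]

index = defaultdict(list)  # keyword or phrase -> matching messages
for message in messages:
    text = message.lower()
    for keyword in watch_list:
        if keyword in text:
            index[keyword].append(message)

for keyword, hits in index.items():
    print(f"{keyword}: {len(hits)} matching message(s)")
```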
Sock Puppet A sock puppet is an online account that uses a false identity designed specifically to deceive. Sock puppets are used on social platforms to inflate an account's follower numbers and to spread or amplify false information to a mass audience. They are often confused with meat puppets, which are actual individuals using false identities for the same purpose. (Shorenstein Center, 2018) Also see Bot.
Troll Farm A troll farm is a group of individuals engaging in trolling or bot-like promotion of
narratives in a coordinated fashion. (Shorenstein Center, 2018)
Web Analytics The measurement, collection, analysis and reporting of internet data for purposes of
understanding and optimizing web usage. (Usability.gov, 2019)
Fabricated content
https://www.vice.com/en_in/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp
Indian politician uses deepfake to show himself giving a speech in a different language.
False connection
https://www.youtube.com/watch?v=QilIhGzhW7o
The title of the video plays on anti-Chinese sentiment that is prevalent in Latin America to get people to
click on the video and share. A textbook example of False Connection.
False context
https://twitter.com/AFPFactCheck/status/1221732075885101061
Video allegedly from Wuhan, where the coronavirus originated, is actually from Indonesia.
Imposter content
https://twitter.com/elisethoma5/status/1215604086780743680?s=20
Fake screenshot that shows Newsweek article about Iran air strikes. Shows side-by-side comparisons of
real/fake screens.
Satire or parody
https://factcheck.afp.com/australian-couple-quarantined-onboard-diamond-princess-cruise-reveal-wine-
drone-delivery-story-was
The “news” about an Australian couple on a cruise ship ordering wine via drone was debunked. The couple admitted that it was a joke post on Facebook for their friends.
Manipulated content
https://twitter.com/jamescracknell/status/1254395457033379843?s=21
Daily Mail edited a photo to make two people in a garden appear closer than they really are.
Misleading content
https://www.bbc.com/news/blogs-trending-51020564
BBC Trending investigates cases of disinformation on Australian bushfires maps on social media.
Africa Check is a nonprofit organization established in 2012 to promote accuracy in public debate
and the media in Africa. Devised by the nonprofit media development arm of the international news
agency AFP, Africa Check is an independent organization with offices in Johannesburg, Nairobi, Lagos,
and Dakar. Africa Check produces reports in English and French, testing claims made by public
figures, institutions and the media against the best available evidence. They have fact-checked more
than 1,500 claims on topics from crime and race in South Africa to population numbers in Nigeria and
fake health cures in various African countries.
Chequeado is the main project of the La Voz Pública Foundation. It is a nonpartisan and nonprofit digital outlet dedicated to the verification of public discourse, the fight against disinformation, the promotion of access to information, and open data. Chequeado has been
online since October 2010 and was the first site in Latin America dedicated to speech verification; it
is among the top 10 fact-checking organizations in the world.
iRasKRIKavanje (Serbia) is a leading source for debunking disinformation in Serbia. With more than
half a million monthly readers, it has been fact-checking media coverage of the COVID-19 pandemic,
as well as covering the government’s response to the crisis. Its posts have been regularly republished
by the leading Serbian daily newspapers and portals.
PopUp Newsroom (global—some work has been in Mexico and Sweden) is a concept focused on
collaboration between competing newsrooms in order to drive innovation, especially as it relates to
the online disinformation sphere. 246 In Mexico, the organization helped launch Verificado, with more
than 90 partners working to address disinformation around the 2018 elections.
TruthBuzz (global—with fellows and partners in Brazil, India, Indonesia, Nigeria, and U.S.) is an
initiative sponsored by the International Center for Journalists to help reporters utilize compelling
storytelling methods that improve the reach and impact of fact-checking and help audiences learn to
identify the true from the false. 247 TruthBuzz fellows work together with a newsroom and receive
training from First Draft News. One fellow's project helped Tempo.co, an Indonesian investigative reporting outlet, reach a younger audience and begin introducing fact-checking
approaches. 248
Ukraine has developed some significant fact-checking initiatives. VoxUkraine uses a scientific analysis
method to assess major economic and political processes and decisions in Ukraine. Among others, a
key project by the Vox team is VoxCheck, a fact-checking service that identifies disinformation
narratives being spread online. TEXTY.org is a Ukrainian nonprofit data analysis group that fact-checks and combats disinformation.
ISFED’s Fact-a-lyzer was created by the International Society for Fair Elections and Democracy for the 2018 presidential election in Georgia. Fact-a-lyzer is a pilot social media monitoring tool that was used during the election. The tool’s software aggregated and monitored Facebook content on the public pages of political actors and groups, assigning tags to the posts. This data was then used to identify and analyze content trends. 250
IFES’s Social Media, Disinformation and Electoral Integrity examines the challenges disinformation
presents to electoral integrity and the responses electoral management bodies and international
nongovernmental organizations can take to mitigate threats. 251
NDI’s Disinformation and Electoral Integrity: A Guidance Document for NDI Elections Programs
describes the National Democratic Institute’s programmatic approaches to mitigating, exposing, and
countering disinformation in the electoral context. The document stresses the importance of using
open election data to deter disinformation and advocacy to counter it.252
Supporting Democracy’s Guide for Civil Society on Monitoring Social Media During Elections
provides an in-depth explanation of social media’s role in elections, its impact on the electoral
process, methodological approaches to monitoring it, data collection and analysis tool, and
approaches for making an impact with social media monitoring.253
Examples of Coalitions
The Atlantic Council is a “nonpartisan organization that galvanizes US leadership and engagement
in the world, with allies and partners, to shape solutions to global challenges.” 255 Among the
numerous capacities through which Atlantic Council conducts international work, their recently
published Democratic Defense Against Disinformation 2.0 provides a current snapshot of global
disinformation campaigns at work, proposed solutions, progress made, and criticisms and
recommendations made to the U.S. executive and judiciary branches regarding urgent steps needed in
the path toward a healthier information ecosystem.
A Counter-Disinformation System That Works details the formula Debunk.eu and its
partners are using to counter disinformation in Lithuania. Using a combination of “geeks,” or IT and
AI experts developing algorithms to detect false claims; “elves,” or volunteers who research and
debunk false stories; and journalists, who publish finished stories about the debunked claims,
Lithuanian anti-disinformation activists claim to have established a successful, fully integrated
system. 258
Russia’s Disinformation Activities and Counter-Measures: Lessons from Georgia is a report from the think tank European Values about the main lessons learned from the fight against pro-Kremlin disinformation in Georgia. 259
Internews aims to build lasting change by ensuring people have access to quality, local information. To do so, Internews works with local partners to grow sustainable organizations and offers capacity-building programs for media professionals, human rights activists, and information entrepreneurs. 260
IREX promotes “vibrant information and media systems.” IREX supports journalism and media
organizations through trainings on reporting, media law, media safety, and digital security. IREX also
provides additional support to consumers via media literacy programs, training citizen journalists, and
diversifying and distributing television content. 261
International Center for Journalists (ICFJ) seeks to build the expertise and storytelling skills of
journalists around the world. ICFJ focuses on five key areas: news innovation, investigative reporting,
global exchange programs, specialty journalism, and diversity promotion.262
Digital Society Project seeks to provide data on the intersection of politics and social media. Its
indicators cover topics including online censorship, polarization, misinformation campaigns, coordinated
information operations and foreign influence in and monitoring of domestic politics. It uses the Varieties
of Democracy (V-Dem) framework, also used by USAID in Journey to Self-Reliance (JSR) metrics, to
assess various digital issues including misinformation.
IREX’s Learn to Discern (L2D) is a worldwide media literacy training project for all ages that focuses
on developing healthy information engagement habits and increasing the local demand for quality
information. Its approach and curriculum are designed to meet the current needs of media consumers,
adapting to the local context. L2D has been used in Ukraine, Serbia, Tunisia, Jordan, Indonesia, and the
United States to address challenges stemming from misinformation, disinformation, propaganda, and
influence campaigns. 263
NewsWise is a free cross-curricular news literacy project for 9- to 11-year-old children across the
United Kingdom, supported by the Guardian Foundation, National Literacy Trust, the PSHE
Association, and Google. It features resources—including guides, webinars, and activities—for teachers
and families. 264
#ThinkB4UClick (Think Before You Click) is a campaign by #defyhatenow to raise awareness about the
dangers of misinformation, fake news, and hate speech in South Sudan. It seeks to educate the public on
these terms and explain how their individual actions can mitigate the issues to create safe online and
offline spaces for healthy and informed discussions.
Elves vs. Trolls is an informal internet army of Lithuanians trying to counter what they describe as hate
speech and pro-Russia propaganda.
The War on Pineapple, promoted by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), uses the
concept of pineapple on pizza to promote understanding of foreign interference in five steps: targeting
divisive issues, moving accounts into place, amplifying and distorting the conversation, making the
mainstream, and taking the conversation into the real world.
Passive Drivers
Misinformation Effect False information suggested to individuals after the fact can influence
their perception, especially as time passes and the memory weakens.
Repeat Exposure Individuals may respond more positively to stimuli that they have seen
frequently than to stimuli they have seen only a few times; persists even
when exposure is subliminal, and individuals are unaware that they have
seen a stimulus.
Virality and Heightened Emotion Information which evokes fear, disgust, awe, anger, or anxiety may be much more likely to be spread by individuals over social media.
Bandwagon Effect The tendency of individuals to be more likely to adopt beliefs that they
believe are common among others.
Confirmation Bias Suggests that individuals seek out information that agrees with their
preexisting beliefs.
Disconfirmation Bias Suggests that people actively reason against information which conflicts
with preexisting beliefs.
Directionally Motivated Reasoning The desire to reach a specific conclusion, and thus to lend more credibility to information favoring that conclusion.
In-group favoritism The tendency to favor one’s “in-group” (e.g., race, gender, sexual
orientation, religious preference, partisan affiliation, geographic location,
etc.) over one’s out-group.
Prior Attitude Effect Suggests that people regard information that supports their beliefs
(“pro-attitudinal information”) as more legitimate than counter-
attitudinal information (sometimes called the prior attitude effect).
Source: Woolley, S, & Joseff, K. Demand for deceit: How the way we think drives disinformation. Working paper. National
Endowment for Democracy. https://www.ned.org/wp-content/uploads/2020/01/Demand-for-Deceit.pdf
ComProp Navigator
InterAction's Disinformation Toolkit for civil society
USG Resources for Analytics
Global Engagement Center’s DisinfoCloud
Global Engagement Center’s Counter-Disinformation Dispatches
4. USG Policy/Strategy
National Security Strategy, December 2017, Pillar III: “Preserve Peace Through Strength” –
Information Statecraft – “activate local network: local voices are most compelling and effective
in ideological competitions”
DoS/USAID Joint Strategic Plan FY 2018-2022 Strategic Objective 1.4: Increase capacity and
strengthen resilience of our partners and allies to deter aggression, coercion and malign
influence by state and non-state actors
USAID Countering Malign Kremlin Influence Objective 2: Resist the manipulation of information
National Defense Authorization Act 2016, 2017, 2020
USG Organizations engaged on disinformation:
– USG Organizations: Historical—The U.S. Information Agency
– DoS/Global Engagement Center (GEC)
– DoS/GPA Social Media Presence
– U.S. Agency for Global Media (USAGM)
A. INTRODUCTION—SUPPLEMENTARY RESOURCES
The Wilson Center, the Carnegie Corporation of New York, and the University of Washington hosted
a discussion on disinformation campaigns and potential ways to combat them. This video captures key
aspects of the issues presented in the Primer. Panelists provided a historical overview of disinformation
campaigns, assessed current patterns in disinformation, and discussed the potential challenges that lie
ahead. Key concepts covered included: Disinformation Defined, Goal of Disinformation, Russian
Disinformation, RT, Sputnik and Bots, and Identifying Disinformation.
https://www.c-span.org/video/?463432-1/panelists-discuss-combatting-disinformation-campaigns
ComProp Navigator is an online resource guide for CSOs to learn more about digital disinformation
topics and address their concerns; it is curated by civil society practitioners and the Project on
Computational Propaganda. 265
https://navigator.oii.ox.ac.uk
– Misinformation has created a new world disorder: Our willingness to share content without
thinking is exploited to spread disinformation by Claire Wardle in Scientific American, September
1, 2019.
– How misinformation spreads on social media—And what to do about it by Chris Meserole,
Brookings Institution, May 9, 2018.
On how disinformation spreads across online platforms/applications:
– WhatsApp as a tool for fear and intimidation in Lebanon’s protests, by Emily Lewis in Coda,
November 12, 2019.
– Disinformation from China floods Taiwan’s most popular messaging app, by Nithin Coca in
Coda, October 7, 2020
– Bakamo.Social is a strategic social listening consultancy that uses technology and human
understanding to find meaning in the millions of conversations that take place online every day.
https://www.bakamosocial.com/
– East Stratcom Task Force (EU) was set up to address Russia's ongoing disinformation campaigns.
In March 2015, the European Council tasked the High Representative in cooperation with EU
institutions and Member States to submit an action plan on strategic communication.
https://eeas.europa.eu/headquarters/headquarters-homepage/2116/-questions-and-answers-
about-the-east-stratcom-task-force_en
– KremlinWatch is a strategic program of the European Values Center for Security Policy, which
aims to expose and confront instruments of Russian influence and disinformation operations
focused against Western democracies. https://www.kremlinwatch.eu/
– Moonshot Consulting seeks to find audiences vulnerable to violent extremist and false
messaging, works to better understand them, and then builds an evidence base to deliver
campaigns and interventions to make information safer. http://moonshotcve.com/work/
– Researchers from the University of Notre Dame are using artificial intelligence to develop an
early warning system that will identify manipulated images, deepfake videos, and disinformation
online. The project is an effort to combat the rise of coordinated social media campaigns to
incite violence, sow discord, and threaten the integrity of democratic elections.
https://news.nd.edu/news/researchers-develop-early-warning-system-to-fight-disinformation-
online/
Examples of open-source intelligence (OSINT):
– Bellingcat conducts workshops to teach journalists, NGOs, government agencies, and other interested parties how to use OSINT for their investigations. Bellingcat provides an online investigation toolkit that is updated regularly and provides open-source and free software. 266
– First Draft Basic Toolkit also provides links to open-source and free software of use to
newsrooms for newsgathering, verification, and responsible reporting.
– Atlantic Council’s Digital Forensic Research Lab (DFRLab) uses open-source research to expose
and explain disinformation; the DFRLab seeks to build “the world’s leading hub of digital
forensics analysts (#DigitalSherlocks),” promoting objective truth, protecting democratic
institutions and norms, and forging greater digital resilience worldwide. 267
– “Civil Society Tracks Trolls and Fakes, Prompts Facebook Action in Moldova” is an article
describing Trolless, a platform that enabled Moldovan users to report fake profiles, troll
accounts, and suspicious activity and material. Once reported, the Trolless team would
investigate and publish their findings, providing verification or more information about the
suspected accounts and content.268
– The Hamilton 2.0 dashboard, a project of the Alliance for Securing Democracy at the German
Marshall Fund of the United States, provides a summary analysis of the narratives and topics
promoted by Russian, Chinese, and Iranian government officials and state-funded media on
Twitter, YouTube, state-sponsored news websites, and via official diplomatic statements at the
United Nations. https://securingdemocracy.gmfus.org/hamilton-dashboard/
– Artificial Intelligence (AI)-Generated Propaganda: The research lab OpenAI has already released a beta version of GPT-3, a long-form text generator that works by taking text input and predicting what should follow (a brief sketch of this idea appears after this list). https://openai.com/blog/openai-api/
– The Global Engagement Center (GEC) at the U.S. Department of State recommends a
combined debunking and discrediting approach, which is explained in GEC Counter-
Disinformation Dispatches #2: Three Ways to Counter Disinformation and GEC Counter-
Disinformation Dispatches #4: What Works in Debunking.
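As a rough illustration of the predict-what-follows idea behind long-form text generators (not OpenAI's API, which requires an access key), the sketch below uses the openly available GPT-2 model through the Hugging Face transformers library as a stand-in; the prompt is invented for illustration.

```python
# Minimal sketch, assuming the `transformers` library and the public GPT-2
# model are available; GPT-2 is used here as a stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Officials announced today that"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])  # the model continues the prompt
```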
Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble
Antisocial Media: How Facebook Disconnects Us and Undermines Democracy by Siva Vaidhyanathan
Communication Power by Manuel Castells
Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media by Samuel C.
Woolley and Philip N. Howard (Editors)
How Propaganda Works by Jason Stanley
How to Lose the Information War: Russia, Fake News, and the Future of Conflict by Nina Jankowicz
Lie Machines: How to Save Democracy From Troll Armies, Deceitful Robots, Junk News Operations, and Political
Operatives by Philip Howard
Mind Over Media: Propaganda Education for a Digital Age by Renee Hobbs and Douglas Rushkoff
Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia by Peter Pomerantsev
This Is Not Propaganda: Adventures in the War Against Reality by Peter Pomerantsev
Twitter and Tear Gas: The Power and Fragility of Networked Protest by Zeynep Tufekci
ENDNOTES
1
Vosoughi, S., et al. (2018). The spread of true and false news online. Science (359), 1146-1151.
https://science.sciencemag.org/content/sci/359/6380/1146.full.pdf
2
See: Prevency. What is disinformation? https://prevency.com/en/what-is-disinformation/
3
Gunitsky, S. (2020, April 21). Democracies can’t blame Putin for their disinformation problem. Foreign Policy. Opinion.
https://foreignpolicy.com/2020/04/21/democracies-disinformation-russia-china-homegrown/
4
Oxford Internet Institute Report. (2019). Use of social media to manipulate public opinion now a global problem.
https://www.oii.ox.ac.uk/news/releases/use-of-social-media-to-manipulate-public-opinion-now-a-global-problem-
says-new-report/
5
Gunitsky, S. (2020, April 21). Democracies can’t blame Putin for their disinformation problem. Foreign Policy. Opinion.
https://foreignpolicy.com/2020/04/21/democracies-disinformation-russia-china-homegrown/
6
See the EU’s 2019 concept note “How to spot when news is fake”:
https://epthinktank.eu/2018/04/24/online-disinformation-and-the-eus-response/how-to-spot-when-news-is-fake-
blog/
7
See: Storyful Intelligence. (2018, September 24). Misinformation and Disinformation. White paper. Storyful.
https://storyful.com/thought-leadership/misinformation-and-disinformation/
8
Pennycook, G., & Rand, D. G. (2018). Lazy, not biased: Susceptibility to partisan fake news is better explained by
lack of reasoning than by motivated reasoning. Cognition (188): 39-50.
https://doi.org/10.1016/j.cognition.2018.06.011
9
Johnston, J. (2020). Disinformation poses ‘existential threat’ to democracy, parliamentary committee warns.
PublicTechnology.net. https://www.publictechnology.net/articles/news/disinformation-poses-%E2%80%98existential-
threat%E2%80%99-democracy-parliamentary-committee-warns
10
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. New York:
Oxford University Press.
11
“Autocratic rulers … benefit from the distrust, cynicism, and social atomization produced by disinformation,
precisely because it inhibits political engagement and paralyzes organized social movements. Social media echo
chambers, deepfakes, and automated surveillance have all served to lower the costs of autocracy while
undermining the benefits of open democratic deliberation. Far from being overwhelmed by free information,
dictators are increasingly learning to take advantage of it.” From: Gunitsky, S. (2020, April 21). Democracies can’t
blame Putin for their disinformation problem. Foreign Policy. Opinion.
https://foreignpolicy.com/2020/04/21/democracies-disinformation-russia-china-homegrown/
12
See, for example: Fremis, A. G. (2020, November 1). Combating disinformation: Policy analysis and recommendations
for the 21st century information environment. Atlantic Forum. https://atlantic-forum.com/content/combating-
disinformation-policy-analysis-and-recommendations-21st-century-information
13
Policy Department for Citizens’ Rights and Constitutional Affairs. (2019). Disinformation and propaganda: Impact
on the functioning of the rule of law in the EU and its Member States.
14
Doxing and trolling are forms of online harassment. See: PEN America’s Online Harassment Field Manual for a
very helpful overview of the key terms and breakdown of different categories related to cyber-harassment or
cyber-abuse: https://onlineharassmentfieldmanual.pen.org/defining-online-harassment-a-glossary-of-terms/
15
Echo chamber is where a group chooses to preferentially connect with each other, to the exclusion of outsiders.
Filter bubble is where a group chooses to preferentially communicate with each other, to the exclusion of
outsiders. See Annex I for a glossary of useful disinformation terms.
16
Clare Wardle is the cofounder of First Draft, a nonprofit organization formed in 2016 to protect communities
from harmful misinformation. Wardle is considered a leading expert on information disorder. See:
https://firstdraftnews.org
17
Wardle, C. (2020). Training: Understand the landscape of information disorder. First Draft.
https://firstdraftnews.org/training/information-disorder/
18
Habgood-Coote, J. (2018). The term ‘fake news’ is doing great harm. The Conversation.
https://theconversation.com/the-term-fake-news-is-doing-great-harm-100406
19
Dema, T. (2017). Media literacy vital to prebunk and debunk fake news. Kuensel.
https://lkyspp.nus.edu.sg/docs/default-source/ips/kuensel_media-literacy-vital-to-prebunk-and-debunk-fake-
news_190817-pdf.pdf?sfvrsn=c41b9f0b_0
20
National Endowment for Democracy. (2020). From democratic regression to ‘third reverse wave.’ Democracy Digest.
https://www.demdigest.org/from-democratic-regression-to-third-reverse-wave/
21
Freedom House. (2019). Countries and territories. https://freedomhouse.org/countries/freedom-world/scores
22
Freedom House. (2012). https://freedomhouse.org/sites/default/files/2020-02/FIW_2010_Overview_Essay.pdf
23
The topic of zero-rated content such as Facebook’s Free Basics is of considerable debate in the internet freedom
and digital rights community and increasingly for independent media support. According to a 2017 report by
Berkman Klein Center, “Zero rating, which allows users to access select Internet services and content without
incurring mobile data charges, is not a new concept. But it has become an object of debate as mobile carriers and
major app providers have used it in the developing world to attract customers, with the goal of increasing Internet
access and adoption.” For the full report, see: https://cyber.harvard.edu/publications/2017/10/zerorating. A
significant number of USAID partner countries receive Internet access through the zero-rating scheme:
https://en.wikipedia.org/wiki/Zero-rating
24
Reuters. (2020, August 11). UN investigator says Facebook has not shared ‘evidence’ of Myanmar crime. Malay Mail.
https://www.malaymail.com/news/world/2020/08/11/un-investigator-says-facebook-has-not-shared-evidence-of-
myanmar-crime/1892905
25
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook J. (2012, December). Misinformation and its
correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13(3):106-31.
https://journals.sagepub.com/doi/full/10.1177/1529100612451018
26
2020 Edelman Trust Barometer. (2020, January 19). https://www.edelman.com/trustbarometer
27
Associated Press. (2020, February 7). Cyborgs, trolls and bots: A guide to online misinformation. Snopes.
https://www.snopes.com/ap/2020/02/07/cyborgs-trolls-and-bots-a-guide-to-online-misinformation/
28
See: Bennett, L, & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of
democratic institutions. European Journal of Communication.
29
Edelman Trust Barometer 2020 Global Report.
https://cdn2.hubspot.net/hubfs/440941/Trust%20Barometer%202020/2020%20Edelman%20Trust%20Barometer%20
Global%20Report.pdf?utm_campaign=Global:%20Trust%20Barometer%202020&utm_source=Website
30
Proquest. How to identify fake news in 10 steps. Worksheet. https://blogs.proquest.com/wp-
content/uploads/2017/01/Fake-News1.pdf
31
Statista. (2020, January.) Global internet penetration rate as of January 2020, by region.
https://www.statista.com/statistics/269329/penetration-rate-of-the-internet-by-region/
32
Silver, L. (2019). Smartphone ownership is growing rapidly around the world, but not always equally. Pew Research
Center's Global Attitudes Project. https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-
growing-rapidly-around-the-world-but-not-always-equally/
33
Buchholz, K. (2020). Where do people spend the most time on social media. Statista.
https://www.statista.com/chart/18983/time-spent-on-social-media/
34
Lamb, K. (2019). Philippines tops world internet usage index with an average of 10 hours a day. The Guardian.
https://www.theguardian.com/technology/2019/feb/01/world-internet-usage-index-philippines-10-hours-a-day
35
Smith, K. 60 incredible and interesting Twitter stats and statistics. Brandwatch.
https://www.brandwatch.com/blog/twitter-stats-and-statistics/
36
Statista. (2020, July). Leading countries based on Facebook audience size as of July 2020.
https://www.statista.com/statistics/268136/top-15-countries-based-on-number-of-facebook-users/
37
Statista. (2020, June). Share of adults who use social media as a source of news in selected countries. Statista.
https://www.statista.com/statistics/718019/social-media-news-source/
38
Farrell, M. (2019). The Internet must be more than Facebook. OneZero. https://onezero.medium.com/the-internet-
must-be-more-than-facebook-4ba1d86403fb
39
Farrell, M. (2019). The Internet must be more than Facebook. OneZero. https://onezero.medium.com/the-internet-
must-be-more-than-facebook-4ba1d86403fb
40
Moon, M. (2017, June 25). WhatsApp is becoming a top news source in some countries. Engadget.
https://www.engadget.com/2017-06-25-whatsapp-news-source-reuters-study.html
41
Sahir. (2019, February 11). WhatsApp usage, revenue, market share and other statistics (2019). Digital Information
World. https://www.digitalinformationworld.com/2019/02/whatsapp-facts-stats.html
42
Silver, L. (2019). Smartphone ownership is growing rapidly around the world, but not always equally. Pew Research
Center's Global Attitudes Project. https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-
growing-rapidly-around-the-world-but-not-always-equally/
43
Lardieri, A. (2019). Older people more susceptible to fake news, more likely to share it. U.S. News & World Report.
https://www.usnews.com/news/politics/articles/2019-01-09/study-older-people-are-more-susceptible-to-fake-news-
more-likely-to-share-it
44
Loos, E. & Nijenhuis, J. (2020). Consuming Fake News: A Matter of Age? The perception of political fake news
stories in Facebook ads.
https://www.researchgate.net/publication/336944922_Consuming_Fake_News_A_Matter_of_Age_The_perceptio
n_of_political_fake_news_stories_in_Facebook_ads
45
Kight, S. W. (2020). Gen Z is eroding the power of misinformation. Axios. https://www.axios.com/gen-z-is-eroding-
the-power-of-misinformation-5940e3cd-e3d0-44a1-b66c-93be45fe1d2c.html
46
Kovacs, K. (2020). Gen Z has a misinformation problem. Digital Content Next.
https://digitalcontentnext.org/blog/2020/06/04/gen-z-has-a-misinformation-problem
47
Mackintosh, E. (2019). Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy. CNN. https://edition.cnn.com/interactive/2019/05/europe/finland-fake-news-intl/
48
Brashier, N. M., et al. (2017). Competing cues: Older adults rely on knowledge in the face of fluency. Psychology
and aging (32)4: 331-337. https://doi.apa.org/doiLanding?doi=10.1037%2Fpag0000156
49
Jacoby, L .L., & Rhodes, M. G. (2006). False remembering in the aged. Current Directions in Psychological Science
(15)2: 49-53. https://journals.sagepub.com/doi/10.1111/j.0963-7214.2006.00405.x
50
Silver, L. (2019). Smartphone ownership is growing rapidly around the world, but not always equally. Pew Research
Center's Global Attitudes Project. https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-
growing-rapidly-around-the-world-but-not-always-equally/
51
Vered, E. (2020). Middle Eastern journalists targeted by misogynistic smear campaigns. Newsroom. International
Press Institute. https://ipi.media/middle-eastern-journalists-targeted-by-misogynistic-smear-campaigns/
52
Vered, E. (2020). Middle Eastern journalists targeted by misogynistic smear campaigns. Newsroom. International
Press Institute. https://ipi.media/middle-eastern-journalists-targeted-by-misogynistic-smear-campaigns/
53
Gendered Disinformation, Fake News, and Women in Politics: https://www.cfr.org/blog/gendered-
disinformation-fake-news-and-women-politics
54
Reporters Without Borders. (2018). Women’s rights: Forbidden subject. Reporters Without Borders.
https://rsf.org/sites/default/files/womens_rights-forbidden_subject.pdf
55
Posetti, J., Harrison, J., & Waisbord, S. Online attacks on women journalists leading to ‘real world’ violence, new
research shows. ICFJ. https://www.icfj.org/news/online-attacks-women-journalists-leading-real-world-violence-new-
research-shows
56
Ayyub, R. (2018). I was the victim of a deepfake porn plot intended to silence me: Rana Ayyub. HuffPost India.
https://www.huffpost.com/archive/in/entry/deepfake-porn_a_23595592
57
Ayyub, R. (2018). I was the victim of a deepfake porn plot intended to silence me: Rana Ayyub. HuffPost India.
https://www.huffpost.com/archive/in/entry/deepfake-porn_a_23595592
58
Macavaney, S, et al. (2019). Hate speech detection: Challenges and solutions. Plos One (14)8.
https://doi.org/10.1371/journal.pone.0221152
59
Macavaney, S, et al. (2019). Hate speech detection: Challenges and solutions. Plos One (14)8.
https://doi.org/10.1371/journal.pone.0221152
60
PeaceTech Lab. Combating hate speech: Identifying, monitoring and combating hate speech on social media.
https://www.peacetechlab.org/hate-speech
61
Williams, M. L., Burnap, P., Javed, A., Liu, H., & Ozalp, S. (2020). Hate in the machine: Anti-Black and anti-Muslim
social media posts as predictors of offline racially and religiously aggravated crime. British Journal of Criminology.
https://academic.oup.com/bjc/article/60/1/93/5537169
62
Dangerous Speech Project. https://dangerousspeech.org/.
63
Dangerous Speech Project. https://dangerousspeech.org/.
64
Stricklin, K. (2020). Why does Russia use disinformation? Lawfare. https://www.lawfareblog.com/why-does-russia-
use-disinformation
65
Nimmo, B. (2015, May 19). Anatomy of an info-war: How Russia’s propaganda machine works, and how to counter it.
StopFake.org. https://www.stopfake.org/en/anatomy-of-an-info-war-how-russia-s-propaganda-machine-works-and-
how-to-counter-it/
66
Ellick, A.B., & Westbrook, A. (2018). Operation Infektion: A three-part video series on Russian disinformation. New
York Times. https://www.nytimes.com/2018/11/12/opinion/russia-meddling-disinformation-fake-news-elections.html
67
Young, C.W. (1987). Congressional Record: Soviet active measures in the United States—An updated report by the FBI.
https://www.cia.gov/library/readingroom/document/cia-rdp11m01338r000400470089-2
68
Ellick, AB, & Westbrook, A. (2018). Operation Infektion: A three-part video series on Russian disinformation. New
York Times. https://www.nytimes.com/2018/11/12/opinion/russia-meddling-disinformation-fake-news-elections.html
69
Ellick, A. B., & Westbrook, A. (2018). Operation Infektion: A three-part video series on Russian disinformation. New
York Times. https://www.nytimes.com/2018/11/12/opinion/russia-meddling-disinformation-fake-news-elections.html
70
Bugayova, N, & Barros, G. (2020). The Kremlin’s expanding media conglomerate. Institute for the Study of War.
http://www.understandingwar.org/backgrounder/kremlin%E2%80%99s-expanding-media-conglomerate
71
Wardle, C., & Derakhshan, H. (2017). Information disorder: Towards an interdisciplinary framework for
research and policy making. Council of Europe report. https://rm.coe.int/information-disorder-report-version-
august-2018/16808c9c77
72
Nimmo, B. (2015, May 19). Anatomy of an info-war: How Russia’s propaganda machine works, and how to
counter it. StopFake.org. https://www.stopfake.org/en/anatomy-of-an-info-war-how-russia-s-propaganda-machine-
works-and-how-to-counter-it/
73
The White Helmets are a volunteer organization that operates in parts of opposition-controlled Syria and in
Turkey. Formed in 2014 during the Syrian Civil War, the majority of the volunteers' activity in Syria consists of
medical evacuation, urban search and rescue in response to bombing, evacuation of civilians from danger areas, and
essential service delivery.
74
Chulov, M, (2020, October 27). How Syria's disinformation wars destroyed the co-founder of the White Helmets.
Guardian. https://www.theguardian.com/news/2020/oct/27/syria-disinformation-war-white-helmets-mayday-rescue-
james-le-mesurier
75
GEC. (August 2020). Pillars of Russia’s disinformation and propaganda ecosystem. Special report.
https://www.state.gov/wp-content/uploads/2020/08/Pillars-of-Russia%E2%80%99s-Disinformation-and-Propaganda-
Ecosystem_08-04-20.pdf
76
Nimmo, B. (2015, May 19). Anatomy of an info-war: How Russia’s propaganda machine works, and how to counter it.
StopFake.org. https://www.stopfake.org/en/anatomy-of-an-info-war-how-russia-s-propaganda-machine-works-and-
how-to-counter-it/
77
Ellick, A. B., & Westbrook, A. (2018). Operation Infektion: A three-part video series on Russian disinformation. New
York Times. https://www.nytimes.com/2018/11/12/opinion/russia-meddling-disinformation-fake-news-elections.html
78
Statement of Lea Gabrielle, Special Envoy & Coordinator for the Global Engagement Center, U.S. Department of
State, Before the Senate Foreign Relations Subcommittee on State Department and USAID Management,
International Operations, and Bilateral International Development, Thursday, March 5, 2020,
https://www.foreign.senate.gov/imo/media/doc/030520_Gabrielle_Testimony.pdf
79
Statement of Lea Gabrielle, Special Envoy & Coordinator for the Global Engagement Center, U.S. Department of
State, Before the Senate Foreign Relations Subcommittee on State Department and USAID Management,
International Operations, and Bilateral International Development, Thursday, March 5, 2020,
https://www.foreign.senate.gov/imo/media/doc/030520_Gabrielle_Testimony.pdf
80
See: Swan, B. W. (2020, April 21). State report: Russian, Chinese and Iranian disinformation narratives echo one
another. Politico. https://www.politico.com/news/2020/04/21/russia-china-iran-disinformation-coronavirus-state-
department-193107
81
Vilmer, J. J., & Charon, P. (2020, January 21). Russia as a hurricane, China as climate change: Different ways of
information warfare. War on the Rocks. https://warontherocks.com/2020/01/russia-as-a-hurricane-china-as-climate-
change-different-ways-of-information-warfare/
82
Vilmer, J. J., & Charon, P. (2020, January 21). Russia as a hurricane, China as climate change: Different ways of
information warfare. War on the Rocks. https://warontherocks.com/2020/01/russia-as-a-hurricane-china-as-climate-
change-different-ways-of-information-warfare/
83
Vilmer, J. J., & Charon, P. (2020, January 21). Russia as a hurricane, China as climate change: Different ways of
information warfare. War on the Rocks. https://warontherocks.com/2020/01/russia-as-a-hurricane-china-as-climate-
change-different-ways-of-information-warfare/
84
Vilmer, J. J., & Charon, P. (2020, January 21). Russia as a hurricane, China as climate change: Different ways of
information warfare. War on the Rocks. https://warontherocks.com/2020/01/russia-as-a-hurricane-china-as-climate-
change-different-ways-of-information-warfare/
85
Cook, S. (2020). Beijing’s global megaphone. Freedom House. Special Report 2020.
https://freedomhouse.org/report/special-report/2020/beijings-global-megaphone
86
Cook, S. (2020). Beijing’s global megaphone. Freedom House. Special Report 2020.
https://freedomhouse.org/report/special-report/2020/beijings-global-megaphone
87
Cook, S. (2020). Beijing’s global megaphone. Freedom House. Special Report 2020.
https://freedomhouse.org/report/special-report/2020/beijings-global-megaphone; Qing, K. G., & Shiffman, J. (2015,
November 2). Exposed: China’s Covert Global Radio Network. Reuters. https://www.reuters.com/investigates/special-
report/china-radio/
88
Cook, S. (2020). Beijing’s global megaphone. Freedom House. Special Report 2020.
https://freedomhouse.org/report/special-report/2020/beijings-global-megaphone; Lim, L, & Bergin, J. (2018,
December 7). Inside China’s audacious global propaganda campaign. The Guardian.
https://www.theguardian.com/news/2018/dec/07/china-plan-for-global-media-dominance-propaganda-xi-jinping
89
Bradshaw, S., & Howard, P. N. (2018). Challenging truth and trust: A global inventory of organized social media
manipulation. Computational Propaganda Research Project. http://comprop.oii.ox.ac.uk/wp-
content/uploads/sites/93/2018/07/ct2018.pdf
90
Bradshaw, S., & Howard, P. N. (2018). Challenging truth and trust: A global inventory of organized social media
manipulation. Computational Propaganda Research Project. http://comprop.oii.ox.ac.uk/wp-
content/uploads/sites/93/2018/07/ct2018.pdf
91
https://www.merriam-webster.com/dictionary/clickbait
92
https://www.techopedia.com/definition/16897/content-farm
93
Bradshaw, S., & Howard, P. N. (2018). Challenging truth and trust: A global inventory of organized social media
manipulation. Computational Propaganda Research Project. http://comprop.oii.ox.ac.uk/wp-
content/uploads/sites/93/2018/07/ct2018.pdf
94
Duignan, B. Gaslighting. Encyclopedia entry. Britannica. https://www.britannica.com/topic/gaslighting
95
Hartz, J. (2018). Supporting information integrity and civil political discourse. NDI.
96
https://www.merriam-webster.com/dictionary/microtarget
97
Wardle, C. (2018, July). Information Disorder: The essential glossary. Shorenstein Center on Media, Politics & Public
Policy. https://firstdraftnews.org/wp-content/uploads/2018/07/infoDisorder_glossary.pdf
98
Wardle, C. (2018, July). Information Disorder: The essential glossary. Shorenstein Center on Media, Politics & Public
Policy. https://firstdraftnews.org/wp-content/uploads/2018/07/infoDisorder_glossary.pdf ; Hartz, J. (2018).
Supporting information integrity and civil political discourse. NDI.
99
Wardle, C. (2018, July). Information Disorder: The essential glossary. Shorenstein Center on Media, Politics & Public
Policy. https://firstdraftnews.org/wp-content/uploads/2018/07/infoDisorder_glossary.pdf; Hartz, J. (2018). Supporting
information integrity and civil political discourse. NDI.
100
For more on computational propaganda, see: Frank, A. Computational propaganda: Bots, targeting and the future.
Opinion. NPR. https://www.npr.org/sections/13.7/2018/02/09/584514805/computational-propaganda-yeah-that-s-a-
thing-now
101
Glenski, M., Stoddard, G., Resnick, P., & Weninger, T. Guess the karma: A game to assess social rating systems.
Proc. of the ACM Conference on Computer Supported Collaborative Work (CSCW) 2018.
102
For more insight into the influence of algorithms, how they can bias what we see online, and how they can tip the
scales of democracy, see the recent coverage of Facebook’s newsfeed changes in an October 21, 2020, article by
Monika Bauerlein and Clara Jeffery for Mother Jones: https://www.motherjones.com/media/2020/10/facebook-
mother-jones/. While the article is very United States focused, it shows how changes made to accommodate U.S.
priorities can have huge and unknown impacts elsewhere. Mother Jones says their revenue was decimated as a
result of the practice described in the article, leading the reader to question: how are independent media
elsewhere in the world being affected?
103
Please note that coordinated inauthentic behavior is a term coined by Facebook that lacks a clear definition. The definition provided by Facebook is itself a bit misleading, as they do not enforce CIB takedowns on everything that matches this definition and instead talk about vague “thresholds.” Other companies use different internal policies. Evelyn Douek, in her Slate article “What Does ‘Coordinated Inauthentic Behavior’ Actually Mean?”, sums up the situation by saying, “Most commonly used when talking about foreign influence operations, the phrase sounds technical and objective, as if there’s an obvious category of online behavior that crosses a clearly demarcated line between OK and not OK.” But, as Douek explains, there is no clear definition, and this is the problem. See: https://slate.com/technology/2020/07/coordinated-inauthentic-behavior-facebook-twitter.html
104
Facebook. Community standards. https://www.facebook.com/communitystandards/inauthentic_behavior
105
Ghosh, D., & Scott, B. (2018). Digital deceit: The technologies behind precision propaganda on the Internet. New
America. https://d1y8sb8igg2f8e.cloudfront.net/documents/digital-deceit-final-v3.pdf
106
Nimmo, B. (2020). Video: Ben Nimmo on influence operations and the Breakout Scale. TechStream Staff, Brookings
107
Ly, O., & Wardle, C. (2020). The breakdown: Claire Wardle on journalism and disinformation. Video.
https://www.youtube.com/watch?v=aZdoARAOxks.
108
Note: Discord, Twitch, and 4Chan were not set up for far-right extremists—the extremists use these platforms as places to communicate, share information, and build communities. Twitch and Discord are used by many gamers, DJs, Dungeons & Dragons enthusiasts, and others. It is important to note that these are unmoderated forums with a free-speech absolutist bent. They therefore host content that may not appear on a mainstream platform such as Twitter or Facebook, which have policies and community standards that put limits on the nature of content they host; de facto, these sites are known to host many far-right extremist groups.
109
Bisen, A. (2019, April 24). Disinformation is drowning democracy: In the new age of lies, law, not tech, is the answer.
Foreign Policy. https://foreignpolicy.com/2019/04/24/disinformation-is-drowning-democracy/
110
Bradshaw, S., & Howard, P. N. (2018). Challenging truth and trust: A global inventory of organized social media
manipulation. Computational Propaganda Research Project. http://comprop.oii.ox.ac.uk/wp-
content/uploads/sites/93/2018/07/ct2018.pdf
111
Silverman, C. (2019). The Philippines was a test of Facebook’s new approach to countering disinformation. Things got
worse. Buzzfeed News. https://www.buzzfeednews.com/article/craigsilverman/2020-philippines-disinformation
112
Alba, D. (2018). How Duterte used Facebook to fuel the Philippine drug war. BuzzFeed News.
https://www.buzzfeednews.com/article/daveyalba/facebook-philippines-dutertes-drug-war
113
Alba, D. (2018). How Duterte used Facebook to fuel the Philippine drug war. BuzzFeed News.
https://www.buzzfeednews.com/article/daveyalba/facebook-philippines-dutertes-drug-war
114
Rappler. (2017). Senator Leila de Lima arrested. https://www.rappler.com/nation/leila-de-lima-surrender-drug-
charges
115
Alba, D. (2018). How Duterte used Facebook to fuel the Philippine drug war. BuzzFeed News.
https://www.buzzfeednews.com/article/daveyalba/facebook-philippines-dutertes-drug-war
116
Hofileña, C. F. (2016). Fake accounts, manufactured reality on social media. Rappler.
https://www.rappler.com/newsbreak/investigative/fake-accounts-manufactured-reality-social-media
117
Yaffa, J. (2020). Is Russian meddling as dangerous as we think? New Yorker.
https://www.newyorker.com/magazine/2020/09/14/is-russian-meddling-as-dangerous-as-we-think
118
Woolley, S, & Joseff, K. Demand for deceit: How the way we think drives disinformation. Working paper.
National Endowment for Democracy. https://www.ned.org/wp-content/uploads/2020/01/Demand-for-Deceit.pdf
119
Woolley, S, & Joseff, K. Demand for deceit: How the way we think drives disinformation. Working paper.
National Endowment for Democracy. https://www.ned.org/wp-content/uploads/2020/01/Demand-for-Deceit.pdf.
Woolley and Joseff say that confirmation bias suggests that individuals seek out information that is in agreement
with their preexisting beliefs.
120
Begg, I. M., Anas, A., & Farinacci, S. Dissociation of processes in belief: Source recollection, statement familiarity,
and the illusion of truth. Journal of Experimental Psychology 121, no. 4 (1992): 446–458.
121
Owen, L. H. (2019, March 22). The “backfire effect” is mostly a myth, a broad look at the research suggests. Nieman
Lab. https://www.niemanlab.org/2019/03/the-backfire-effect-is-mostly-a-myth-a-broad-look-at-the-research-
suggests/
122
Owen, L. H. (2019, March 22). The “backfire effect” is mostly a myth, a broad look at the research suggests. Nieman
Lab. https://www.niemanlab.org/2019/03/the-backfire-effect-is-mostly-a-myth-a-broad-look-at-the-research-
suggests/
123
Owen, L. H. (2019, March 22). The “backfire effect” is mostly a myth, a broad look at the research suggests. Nieman
Lab. https://www.niemanlab.org/2019/03/the-backfire-effect-is-mostly-a-myth-a-broad-look-at-the-research-
suggests/
124
Pennycook, G, & Rand, D. (2019). Why do people fall for fake news? New York Times. Opinion.
https://www.nytimes.com/2019/01/19/opinion/sunday/fake-news.html
125
Pennycook, G, & Rand, D. (2019). Why do people fall for fake news? New York Times. Opinion.
https://www.nytimes.com/2019/01/19/opinion/sunday/fake-news.html
126
Hind, M. (2018). “Disinformation is a demand-side—not a supply-side—problem.” Medium.
https://medium.com/@mikehind/the-debunking-solution-is-fake-news-59565f60ca71
127
Pennycook, G, & Rand, D. (2019). “Why do people fall for fake news?.” New York Times. Opinion.
https://www.nytimes.com/2019/01/19/opinion/sunday/fake-news.html
128
See Glossary (Shorenstein Center, 2018).
129
Donovan, J. (2019). How memes got weaponized: A short history. Memes come off as a joke, but some people see them as the serious threat they are. MIT Technology Review. https://www.technologyreview.com/2019/10/24/132228/political-war-memes-disinformation/
130
Van Tatenhove. (2020), “How Memes Can Spread Disinformation,” Utah Public Radio
131
DFR Lab. (2020). Facebook removed inauthentic network connected to United Russia party.
https://medium.com/dfrlab/facebook-removed-inauthentic-network-connected-to-united-russia-party-6b9cfd2332de
132
See: Monaco, N. & Arnaudo, D. (2020). Guidance on social media monitoring and analysis techniques, tools and
methodologies. NDI.
133
See: Strick, B. (2019, October 11). Investigating information operations in West Papua: A digital forensic case
study of cross-platform network analysis. https://www.bellingcat.com/news/rest-of-world/2019/10/11/investigating-
information-operations-in-west-papua-a-digital-forensic-case-study-of-cross-platform-network-analysis/
134
Nimmo, B., Ronzaud, L., Eib, C. S., & Ferreira, R. (2020). Myanmar Military Network. Graphika report. https://graphika.com/reports/myanmar-military-network/
135
Nimmo, B., Ronzaud, L., Eib, C. S., & Ferreira, R. (2020). Myanmar Military Network. Graphika report. https://graphika.com/reports/myanmar-military-network/
136
See: Lopez, E. (2020, September 23.) Philippines: fake accounts shut down by Facebook promoted Duterte and
China. This Week in Asia. https://www.scmp.com/week-asia/politics/article/3102765/philippines-fake-accounts-shut-
down-facebook-promoted-duterte)
137
DFRLab. (2020). Coronavirus conspiracies amplified on social media by fringe South African political party.
Medium .https://medium.com/dfrlab/coronavirus-conspiracies-amplified-on-social-media-by-fringe-south-african-
political-party-fe3e3157bf97
138
Nimmo, B., Ronzaud, L., Eib, C. S., & Ferreira, R. Myanmar Military Network: Coordinated inauthentic behavior traced
to members of the Myanmar military before elections. Graphika. https://public-
assets.graphika.com/reports/graphika_report_myanmar_military_network.pdf
139
See, for example, Monaco, N. & Arnaudo, D. (2020). Guidance on social media monitoring and analysis techniques,
tools and methodologies. NDI.
https://www.ndi.org/sites/default/files/NDI_Social%20Media%20Monitoring%20Guide%20ADJUSTED%20COVER.pd
f
140
Donovan, J. (2020, September 28). How civil society can combat misinformation and hate speech without making it
worse. Medium. https://medium.com/political-pandemonium-2020/how-civil-society-can-combat-misinformation-and-
hate-speech-without-making-it-worse-887a16b8b9b6
141
Stewart, M. C., & Arnold, C. L. (2018). Defining social listening: Recognizing an emerging dimension of listening.
International Journal of Listening, 32.
https://www.tandfonline.com/doi/abs/10.1080/10904018.2017.1330656?scroll=top&needAccess=true&journalCode
=hijl20
142
Stewart, P. (2019, June 20). Interview: Behind the Internet’s quiet revolution. Human Rights Watch.
https://www.hrw.org/news/2019/06/20/interview-behind-internets-quiet-revolution#
143
Beauman, N. (2018, August 30). How to conduct an open-source investigation, according to the founder of Bellingcat.
New Yorker. https://www.newyorker.com/culture/culture-desk/how-to-conduct-an-open-source-investigation-
according-to-the-founder-of-bellingcat
144
PBS. (2019, January 31). How the decline of newspapers creates ‘news deserts’ around the country. Interview of Steve
Cavendish by Judy Woodruff. https://www.pbs.org/newshour/show/how-the-decline-of-local-newspapers-
exacerbates-polarization
145
PBS. (2019, January 31). How the decline of newspapers creates ‘news deserts’ around the country. Interview of Steve
Cavendish by Judy Woodruff. https://www.pbs.org/newshour/show/how-the-decline-of-local-newspapers-
exacerbates-polarization
146
PBS. (2019, January 31). How the decline of newspapers creates ‘news deserts’ around the country. Interview of Steve
Cavendish by Judy Woodruff. https://www.pbs.org/newshour/show/how-the-decline-of-local-newspapers-
exacerbates-polarization
147
Pink slime journalism is named for the meat byproduct used as a food additive. This report uses the term to
refer to a low-cost way of distributing news stories; other societies, however, may use the term “pink slime
journalism” to refer to other issues.
148
Bengani, P. (2019, December 18). Hundreds of “pink slime” local news outlets are distributing algorithmic stories and
conservative talking points. CJR. https://www.cjr.org/tow_center_reports/hundreds-of-pink-slime-local-news-outlets-
are-distributing-algorithmic-stories-conservative-talking-
points.php?utm_source=share&utm_medium=ios_app&utm_name=iossmf
149
First Draft. (2019). Year in review: “The biggest threat is failing to address the reality of online alternative media
ecosystems.” https://firstdraftnews.org/latest/year-in-review-the-biggest-threat-is-failing-to-address-the-reality-of-
online-alternative-media-ecosystems/
150
First Draft. (2019). Year in review: “The biggest threat is failing to address the reality of online alternative media
ecosystems.” https://firstdraftnews.org/latest/year-in-review-the-biggest-threat-is-failing-to-address-the-reality-of-
online-alternative-media-ecosystems/
151
First Draft. (2019). Year in review: “The biggest threat is failing to address the reality of online alternative media
ecosystems.” https://firstdraftnews.org/latest/year-in-review-the-biggest-threat-is-failing-to-address-the-reality-of-
online-alternative-media-ecosystems/
152
Marwick, A., & Partin, W. (2020). QAnon shows that the age of alternative facts will not end with Trump.
Columbia Journalism Review. https://www.cjr.org/opinion/qanon-trump-alternative-facts.php
153
Marwick, A., & Partin, W. (2020). QAnon shows that the age of alternative facts will not end with Trump.
Columbia Journalism Review. https://www.cjr.org/opinion/qanon-trump-alternative-facts.php
154
Paquette, D. (2019). Nigeria’s ‘fake news’ bill could jail people for lying on social media. Critics call it censorship.
Washington Post. https://www.washingtonpost.com/world/africa/nigerias-fake-news-bill-could-jail-people-for-lying-
on-social-media-critics-call-it-censorship/2019/11/25/ccf33c54-0f81-11ea-a533-90a7becf7713_story.html
155
Westerman, A. (2019). ‘Fake news” law goes into effect in Singapore, worrying free speech advocates. NPR.
https://www.npr.org/2019/10/02/766399689/fake-news-law-goes-into-effect-in-singapore-worrying-free-speech-
advocates.
156
Robinson, O., et al. (2019, August). A report on anti-disinformation initiatives. Oxford Technology & Elections
Commission. https://oxtec.oii.ox.ac.uk/wp-content/uploads/sites/115/2019/08/OxTEC-Anti-Disinformation-
Initiatives-1.pdf
157
RadioFreeEurope/RadioLiberty. (2020, January 23). Media freedom groups express unease over Ukrainian
disinformation bill. https://www.rferl.org/a/media-freedom-groups-express-unease-over-ukrainian-disinformation-bill/30393814.html; The Wire Staff. (2020, April 9). Ukraine’s disinformation law threatens press freedom,
WAN-IFRA cautions. https://thewire.in/media/ukraine-disinformation-law-press-freedom
158
#KeepItOn. Retrieved October 14, 2020, from https://www.accessnow.org/keepiton/
159
West, D. (2016). Internet shutdowns cost countries $2.4 billion last year. Brookings. https://www.brookings.edu/wp-
content/uploads/2016/10/intenet-shutdowns-v-3.pdf
160
Taylor, C. (2020). Government-led internet shutdowns cost the global economy $8 billion in 2019, research
says. CNBC. https://www.cnbc.com/2020/01/08/government-led-internet-shutdowns-cost-8-billion-in-2019-study-says.html
161
Engler, A. (2019, November 14). Fighting deepfakes when detection fails. Brookings.
https://www.brookings.edu/research/fighting-deepfakes-when-detection-fails/
162
Engler, A. (2019, November 14). Fighting deepfakes when detection fails. Brookings.
https://www.brookings.edu/research/fighting-deepfakes-when-detection-fails/
163
Paris, B., & Donovan, J. (2019, September 18). Deepfakes and cheap fakes: The manipulation of audio and visual
evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/
164
Paris, B., & Donovan, J. (2019, September 18). Deepfakes and cheap fakes: The manipulation of audio and visual
evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/
165
Paris, B., & Donovan, J. (2019, September 18). Deepfakes and cheap fakes: The manipulation of audio and visual
evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/
166
Paris, B., & Donovan, J. (2019, September 18). Deepfakes and cheap fakes: The manipulation of audio and visual
evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/
167
DiResta, R. (2020, September 20). The supply of disinformation will soon be infinite. The Atlantic.
https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/
168
DiResta, R. (2020, September 20). The supply of disinformation will soon be infinite. The Atlantic.
https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/
169
DiResta, R. (2020, September 20). The supply of disinformation will soon be infinite. The Atlantic.
https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/
170
DiResta, R. (2020, September 20). The supply of disinformation will soon be infinite. The Atlantic.
https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/
171
DiResta, R. (2020, September 20). The supply of disinformation will soon be infinite. The Atlantic.
https://www.theatlantic.com/ideas/archive/2020/09/future-propaganda-will-be-computer-generated/616400/
172
Wardle, C., & Derakhshan, H. (2017). Information Disorder: Towards an Interdisciplinary framework for research and
policy making. Council of Europe report. https://rm.coe.int/information-disorder-report-version-august-
2018/16808c9c77
173
Stencel, M., & Luther, J. (2019, November 21). Reporters’ Lab fact-checking tally tops 200. Duke Reporters’ Lab.
https://reporterslab.org/reporters-lab-fact-checking-tally-tops-200/
174
Fried, D., & Polyakova, A. (2019). Democratic defense against disinformation 2.0. Atlantic Council.
https://www.atlanticcouncil.org/in-depth-research-reports/report/democratic-defense-against-disinformation-2-0/
175
Stencel, M., & Luther, J. (2019, November 21). Reporters’ Lab fact-checking tally tops 200. Duke Reporters’ Lab.
https://reporterslab.org/reporters-lab-fact-checking-tally-tops-200/
176
The International Fact-Checking Network. Retrieved September 24, 2020, from https://www.poynter.org/ifcn/
177
Walter, N., et al. (2019). Fact-checking: A meta-analysis of what works and for whom. Political Communication,
37(3), 350–375. https://doi.apa.org/doiLanding?doi=10.1037%2Fpag0000156
178
Walter, N., et al. (2019). Fact-checking: A meta-analysis of what works and for whom. Political Communication,
37(3), 350–375. https://doi.apa.org/doiLanding?doi=10.1037%2Fpag0000156
179
Walter, N., et al. (2019). Fact-checking: A meta-analysis of what works and for whom. Political Communication,
37(3), 350–375. https://doi.apa.org/doiLanding?doi=10.1037%2Fpag0000156
180
The Redirect Method. Retrieved September 15, 2020, from https://redirectmethod.org/
181
See Bad News: https://www.getbadnews.com. Also see the study on the effectiveness of the game Bad News in
the study featured in “The new science of prebunking: how to inoculate against the spread of misinformation,”
available at http://blogs.biomedcentral.com/on-society/2019/10/07/the-new-science-of-prebunking-how-to-
inoculate-against-the-spread-of-misinformation/
182
McGuire, W. J. (1961). Resistance to persuasion conferred by active and passive prior refutation of same and
alternative counterarguments. Journal of Abnormal Psychology 63(2): 326–332.
183
See Bad News: https://www.getbadnews.com. Also see the study on the effectiveness of the game Bad News in
the study featured in “The new science of prebunking: how to inoculate against the spread of misinformation,”
available at http://blogs.biomedcentral.com/on-society/2019/10/07/the-new-science-of-prebunking-how-to-
inoculate-against-the-spread-of-misinformation/
184
Reporters Without Borders. (2017, July 19). Russian bill is copy-and-paste of Germany’s hate speech law.
https://rsf.org/en/news/russian-bill-copy-and-paste-germanys-hate-speech-law; McHangama, J., & Fiss, J. (2019,
November 6). Germany’s online crackdowns inspire the world’s dictators. Foreign Policy.
https://foreignpolicy.com/2019/11/06/germany-online-crackdowns-inspired-the-worlds-dictators-russia-venezuela-
india/
185
See: Global Conference for Media Freedom 2020. CSOs call on states for concrete actions.
https://www.article19.org/resources/global-conference-for-media-freedom-2020-csos-call-on-states-for-concrete-
actions/
186
Montgomery, M. (2020, August). Disinformation as a wicked problem: Why we need co-regulatory frameworks.
Brookings. https://www.brookings.edu/research/disinformation-as-a-wicked-problem-why-we-need-co-regulatory-
frameworks/
187
Gao, P., Lee, C., & Murphy, D. (2020). Financing dies in darkness? The impact of newspaper closures on public
finance. Journal of Financial Economics, 135(2), 445–467. Available at SSRN: https://ssrn.com/abstract=3175555 or
http://dx.doi.org/10.2139/ssrn.3175555
188
Gentzkow, M., Shapiro, J. M., & Sinkinson, M. (2011). The effect of newspaper entry and exit on electoral
politics. American Economic Review, 101(7), 2980–3018.
https://www.aeaweb.org/articles?id=10.1257/aer.101.7.2980
189
Posetti, J., & Bontcheva, K. (2020). UN-ICFJ research highlights journalism’s critical role in fighting COVID-19
disinformation. ICFJ. https://www.icfj.org/news/un-icfj-research-highlights-journalisms-critical-role-fighting-covid-19-disinformation
190
Bell, E. (2019). We can’t fight fake news without saving local journalism. The Guardian.
https://www.theguardian.com/media/2019/dec/15/we-cant-fight-fake-news-without-saving-local-journalism; Rusbridger, A. (2019). The election in the media: Against evasion and lies, good journalism is all we have. The Guardian.
https://www.theguardian.com/politics/2019/dec/14/election-in-the-media-evasion-lies-good-journalism-is-all-we-have
191
Shearer, E. (2018, December 10). Social media outpaces print newspapers in the U.S. as a news source. Pew
Research Center. https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-
the-u-s-as-a-news-source/
192
Pennington, E. (2018). Review: An introduction to funding journalism and media by the Ariadne Network, 2018. Global
Forum for Media Development. https://gfmd.info/review-an-introduction-to-funding-journalism-and-media-by-the-
ariadne-network/
193
Chua, Y.T. (2020). Philippines. Digital News Report. http://www.digitalnewsreport.org/survey/2020/philippines-
2020/
194
Internews. Our work. Retrieved October 13, 2020, from https://internews.org/work
195
IREX. Media. Retrieved October 13, 2020, from https://www.irex.org/programming-area/media
196
International Center for Journalists. Our work. ICFJ. Retrieved October 13, 2020, from https://www.icfj.org/our-
work
197
Hendrickson, C. (2019, November 12). Local journalism in crisis: Why America must revive its local newsrooms.
Brookings. https://www.brookings.edu/research/local-journalism-in-crisis-why-america-must-revive-its-local-newsrooms/
198
Hendrickson, C. (2019, November 12). Local journalism in crisis: Why America must revive its local newsrooms.
Brookings. https://www.brookings.edu/research/local-journalism-in-crisis-why-america-must-revive-its-local-newsrooms/
199
See: Subramanian, S. (2017, February 15). Welcome to Veles, Macedonia, Fake news factory to the world.
Wired. https://www.wired.com/2017/02/veles-macedonia-fake-news/
200
Melford, C., & Fagan, C. (2019, May). Cutting the funding of disinformation: The ad-tech solution. GDI.
https://disinformationindex.org/wp-content/uploads/2019/05/GDI_Report_Screen_AW2.pdf
201
Fried, D., & Polyakova, A. (2019). Democratic defense against disinformation 2.0. Atlantic Council.
https://www.atlanticcouncil.org/in-depth-research-reports/report/democratic-defense-against-disinformation-2-0/
202
Kofi Annan Commission on Elections and Democracy in the Digital Age. (2020, January). Protecting electoral
integrity in the digital age. https://storage.googleapis.com/kofiannanfoundation.org/2020/05/85ef4e5d-kaf-kacedda-report_2020_english.pdf
203
European Commission. (2018). Code of practice on disinformation. European Commission.
https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation
204
NDI. Raising voices in closing spaces. https://www.raiseavoice.net/introduction/
205
Funke, D., & Flamini, D. A guide to anti-misinformation actions around the world. Poynter Institute.
https://www.poynter.org/ifcn/anti-misinformation-actions/#sweden; La Cour, C. Governments countering
disinformation: The case of Sweden. Disinfo Portal. https://disinfoportal.org/governments-countering-disinformation-the-case-of-sweden/; Cederberg, G. Catching Swedish phish. Defending Digital Democracy Project.
206
Cederberg, G. Catching Swedish phish. Defending Digital Democracy Project.
https://www.belfercenter.org/sites/default/files/files/publication/Swedish%20Phish%20-%20final2.pdf
207
Mackintosh, E. (2019). Finland is winning the war on fake news. What it’s learned may be crucial to Western
democracy. CNN. https://www.cnn.com/interactive/2019/05/europe/finland-fake-news-intl/.
208
Robinson, O., et al. (2019, August). A report on anti-disinformation initiatives. Oxford Technology & Elections
Commission. https://oxtec.oii.ox.ac.uk/wp-content/uploads/sites/115/2019/08/OxTEC-Anti-Disinformation-
Initiatives-1.pdf
209
For Lithuania, see: https://www.dw.com/en/lithuania-hits-back-at-russian-disinformation/a-45644080 For Taiwan,
see: https://www.dw.com/en/lithuania-hits-back-at-russian-disinformation/a-45644080
210
EUvsDisinfo. About. Retrieved October 13, 2020, from https://euvsdisinfo.eu/about/
211
Atlantic Council. About. Retrieved October 13, 2020, from https://www.atlanticcouncil.org/about/#leadership
212
OHCHR. (2017, March 3). Joint declaration on freedom of expression and “fake news,” disinformation and
propaganda. https://www.ohchr.org/Documents/Issues/Expression/JointDeclaration3March2017.doc
213
Lowcock, J. (2020, June 5). Free reach, fact-checking and platform responsibility. AdExchanger.
https://www.adexchanger.com/data-driven-thinking/free-reach-fact-checking-and-platform-responsibility/
214
Lowcock, J. (2020, June 5). Free reach, fact-checking and platform responsibility. AdExchanger.
https://www.adexchanger.com/data-driven-thinking/free-reach-fact-checking-and-platform-responsibility/
215
Tylt. (2017, May 6). Is it Facebook and Google’s responsibility to stop fake news? https://thetylt.com/culture/is-it-
facebook-and-google-s-responsibility-to-stop-fake-news
216
Gorwa, R. (2019). What is platform governance? Information, Communication & Society 22(6), 854-871.
https://www.tandfonline.com/doi/full/10.1080/1369118X.2019.1573914
217
Gorwa, R. (2019). What is platform governance? Information, Communication & Society 22(6), 854-871.
https://www.tandfonline.com/doi/full/10.1080/1369118X.2019.1573914
218
Gorwa, R. (2019). What is platform governance? Information, Communication & Society 22(6), 854-871.
https://www.tandfonline.com/doi/full/10.1080/1369118X.2019.1573914
219
Gorwa, R. (2019). What is platform governance? Information, Communication & Society 22(6), 854-871.
https://www.tandfonline.com/doi/full/10.1080/1369118X.2019.1573914
220
How Facebook’s fact-checking program works. Facebook. Retrieved October 13, 2020, from
https://www.facebook.com/journalismproject/programs/third-party-fact-checking/how-it-works
221
Clegg, N. (2020, May 6). Welcoming the Oversight Board. Facebook.
https://about.fb.com/news/2020/05/welcoming-the-oversight-board/
222
See: Ghosh, D. (2019, October 16). Facebook’s Oversight Board is not enough. Harvard Business Review,
https://hbr.org/2019/10/facebooks-oversight-board-is-not-enough. In summary, the article asserts that Facebook
exerts enormous influence over the creation and spread of media content in the United States and other locations around
the world. The company’s recent creation of an Oversight Board to independently adjudicate content disputes is an attempt
to solve real problems: the spread of misinformation, voter suppression, and so on. The author argues that the board itself is
not enough, however, because the company’s business model is predicated on the very techniques—audience segmentation,
targeting, algorithmic optimization of feeds for more eyeballs—that create the problems to begin with.
223
Peters, J. (2020, May 11). Twitter introducing new labels for tweets with misleading COVID-19 information. The Verge.
https://www.theverge.com/2020/5/11/21254733/twitter-covid-19-misleading-information-label-warnings-
misinformation
224
Fernandez, E. (2020, June 11). Twitter now encourages users to read articles before retweeting. Forbes.
https://www.forbes.com/sites/fernandezelizabeth/2020/06/11/twitter-now-encourages-users-to-read-articles-before-retweeting/
225
O’Kane, C. (2020, October 12). Facebook reverses policy and bans Holocaust denial on its platforms. CBS
News. https://www.cbsnews.com/news/facebook-bans-holocaust-denial-platforms/
226
Clement, J. (2020, November 24). Facebook: number of monthly active users worldwide 2008-2020. Statista.
https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/
227
For a resource on current media research and indices, see http://akademie.dw.de/navigator
228
Cheung, P., & Sands, J. (2020, October 29). More research and resilient journalists are needed to combat the
misinformation disorder. Knight Foundation. https://knightfoundation.org/articles/more-research-and-resilient-
journalists-are-needed-to-combat-the-misinformation-disorder/
229
For example, see Ortega, A. (2017, December 9). Disinformation campaigns and public mistrust. The Globalist.
https://www.theglobalist.com/internet-social-media-fake-news-journalism/
230
See Internews. (2020, September 10). Global consortium launches three-year effort to strengthen Internet freedom in
50 countries. https://internews.org/updates/global-consortium-launches-three-year-effort-strengthen-internet-
freedom-50-countries
231
Woolley, S., & Howard, P. (2017). Computational propaganda worldwide: Executive summary. Computational
Propaganda Research Project. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/06/Casestudies-
ExecutiveSummary.pdf
232
Techopedia. Content farm. https://www.techopedia.com/definition/16897/content-farm
233
Bradshaw, S., & Howard, P. N. (2018). Challenging truth and trust: A global inventory of organized social media
manipulation. Computational Propaganda Research Project. http://comprop.oii.ox.ac.uk/wp-
content/uploads/sites/93/2018/07/ct2018.pdf
234
See: https://www.intechopen.com/online-first/debunking-as-a-method-of-uncovering-disinformation-and-fake-
news
235
Engler, A. (2019, November 14). Fighting deepfakes when detection fails. Brookings.
https://www.brookings.edu/research/fighting-deepfakes-when-detection-fails/
236
Wardle, C., & Derakhshan, H. (2017). Information Disorder: Towards an Interdisciplinary framework for research and
policy making. Council of Europe report. https://rm.coe.int/information-disorder-report-version-august-
2018/16808c9c77
237
Wardle, C., & Derakhshan, H. (2017). Information Disorder: Towards an Interdisciplinary framework for research and
policy making. Council of Europe report. https://rm.coe.int/information-disorder-report-version-august-
2018/16808c9c77
238
Duignan, B. Gaslighting. Encyclopedia entry. Britannica. https://www.britannica.com/topic/gaslighting
239
Reppel, L., & Shein, E. (2019). Disinformation campaigns and hate speech: Exploring the relationship and
programming interventions. International Foundation for Electoral Systems.
240
Hartz, J. (2018). Supporting information integrity and civil political discourse. NDI.
241
Merriam-Webster. Microtarget. https://www.merriam-webster.com/dictionary/microtarget
242
Stewart, P. (2019, June 20). Interview: Behind the Internet’s quiet revolution. Human Rights Watch.
https://www.hrw.org/news/2019/06/20/interview-behind-internets-quiet-revolution#
243
See Bad News: https://www.getbadnews.com. Also see the study on the effectiveness of the game Bad News in
the study featured in “The new science of prebunking: how to inoculate against the spread of misinformation,”
available at http://blogs.biomedcentral.com/on-society/2019/10/07/the-new-science-of-prebunking-how-to-
inoculate-against-the-spread-of-misinformation/
244
Fried, D., & Polyakova, A. (2019). Democratic defense against disinformation 2.0. Atlantic Council.
https://www.atlanticcouncil.org/in-depth-research-reports/report/democratic-defense-against-disinformation-2-0/
245
Donovan, J. (2020, September 28). How civil society can combat misinformation and hate speech without making it
worse. Medium. https://medium.com/political-pandemonium-2020/how-civil-society-can-combat-misinformation-and-
hate-speech-without-making-it-worse-887a16b8b9b6
246
Flueckiger, S. (2018, October 10). How pop-up newsroom brings competitors together. World News Publishing
Focus by WAN-IFRA. https://blog.wan-ifra.org/2018/10/19/how-pop-up-newsroom-brings-competitors-together
247
International Center for Journalists. TruthBuzz: Making the truth go viral. Retrieved September 15, 2020, from
https://www.icfj.org/our-work/truthbuzz-making-truth-go-viral
248
International Center for Journalists. TruthBuzz: Making the truth go viral. Retrieved September 15, 2020, from
https://www.icfj.org/our-work/truthbuzz-making-truth-go-viral
249
What’s Crap on WhatsApp? Retrieved September 15, 2020, from https://www.whatscrap.africa/
250
GNDEM. (2018, December 27). ISFED monitors social media around 2018 presidential election.
https://gndem.org/stories/isfed-monitors-social-media-around-2018-presidential-election/
251
Martin-Rozumilowicz, B., & Kuzel, R. (2019). Social media, disinformation and electoral integrity. IFES.
https://www.ifes.org/sites/default/files/ifes_working_paper_social_media_disinformation_and_electoral_integrity_a
ugust_2019_0.pdf
252
NDI. Disinformation and electoral integrity: A guidance document for NDI elections programs.
https://www.ndi.org/publications/disinformation-and-electoral-integrity-guidance-document-ndi-elections-programs
253
European Instrument for Democracy and Human Rights. Guide for civil society on monitoring social media
during elections. https://democracy-reporting.org/wp-content/uploads/2019/10/social-media-DEF.pdf
254
EUvsDisinfo. About. Retrieved October 13, 2020, from https://euvsdisinfo.eu/about/
255
Atlantic Council. About. Retrieved October 13, 2020, from https://www.atlanticcouncil.org/about/#leadership
256
Funke, D., & Flamini, D. A guide to anti-misinformation actions around the world. Poynter Institute.
https://www.poynter.org/ifcn/anti-misinformation-actions/#sweden
257
Robinson, O., Coleman, A., & Sardarizadeh, S. (2019, August 22). A report of anti-disinformation initiatives.
OxTEC. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/08/OxTEC-Anti-Disinformation-
Initiatives.pdf
258
GEC Counter-Disinformation Dispatches. (2020). A counter-disinformation system that works. Content Commons.
Retrieved October 14, 2020, from https://commons.america.gov/article?id=44&site=content.america.gov
259
Zurabashvili, T. (2018, September 18). Russia’s disinformation activities and counter-measures: Lessons from Georgia.
European Values. https://www.kremlinwatch.eu/userfiles/russia-s-disinformation-activities-and-counter-measures-
lessons-from-georgia.pdf
260
Internews. Our work. Retrieved October 13, 2020, from https://internews.org/work
261
IREX. Media. Retrieved October 13, 2020, from https://www.irex.org/programming-area/media
262
International Center for Journalists. Our work. ICFJ. Retrieved October 13, 2020, from https://www.icfj.org/our-
work
263
IREX. Learn to discern (L2D). Retrieved September 29, 2020, from https://www.irex.org/project/learn-discern-
l2d-media-literacy-training
264
NewsWise. The Guardian. Retrieved September 29, 2020, from https://www.theguardian.com/newswise
265
The ComProp Navigator. Oxford Internet Institute. Retrieved September 15, 2020, from
https://navigator.oii.ox.ac.uk/
266
The ComProp Navigator. Oxford Internet Institute. Retrieved September 15, 2020, from
https://navigator.oii.ox.ac.uk/
267
Digital Forensic Research Lab. Retrieved September 27, 2020, from
https://www.atlanticcouncil.org/programs/digital-forensic-research-lab/
268
Internews. (2019, February 21). Civil society tracks trolls and fakes, prompts Facebook action in Moldova.
https://internews.org/story/civil-society-tracks-trolls-and-fakes-prompts-facebook-action-moldova