Impact of Political Deepfakes in the Context of Elections:
This article will examine two main risks deepfakes pose to elections: their
potential to manipulate voter opinions and their effect on overall trust in
elections and democratic institutions.1 Firstly, deepfakes might influence
voter preferences by misrepresenting or undermining a politician’s (or
political party’s) position on specific issues or by targeting their
credibility. With the increasing length of pre-election periods in Australia
(and other democracies),2 the release of a deepfake during this time or
just before election day can make it extremely difficult for politicians to
counter it before votes are cast. For instance, a deepfake showing a
politician known for a strong anti-drug stance using illegal drugs could
be very impactful and hard to refute.3 Such deepfakes might be created
as part of a candidate’s official campaign, by foreign actors seeking to
influence the election, or even by individuals unrelated to the political
sphere.
Although there is no evidence that deepfakes have yet influenced an
Australian election, there have been instances where genuine but
compromising video footage led federal candidates to withdraw from
races.4 In the United States, however, manipulated footage has been used
politically, such as the Republican Party's alteration of real video clips of
House Speaker Nancy Pelosi to make her appear inebriated by slowing
down her speech.5 Similar tactics were used against President Joe Biden
during the 2020 Presidential election, with experts warning that
deepfakes and similar technologies might be employed during periods of
public uncertainty.6 They anticipated that such a moment would arise
after the election, especially with Trump suggesting he might not accept
a loss.7 This scenario partially played out as Trump denounced the
election results as ‘fake news’ and his supporters stormed the Capitol, an
act condemned as terrorism by US security agencies.8 However, no
deepfake videos were detected during this event, though the possibility
of a fake video showing then President-elect Biden conceding defeat had
been raised. Despite public institutions, inquiries, and courts declaring
the fraud claims false, Trump and the Republican Party continue to
promote these claims.
[1] Secondary threats could include undermining diplomacy and jeopardising national security. These threats can be viewed as subsidiary to the primary threats identified above in that they rely on either convincing a particular actor a fake video is real or on eroding public trust in video content; for example, fake news about nuclear attacks could cause general panic and reduce trust in future warnings.
[2] Stephen Mills and Martin Drum, ‘Surge in Pre-poll Numbers at 2019 Federal Election Changes the Relationship between Voters and Parties’, The Conversation (online, 19 August 2019). This trend has increased in recent elections: Damon Muller, ‘Trends in Early Voting in Federal Elections’, Parliament of Australia (Web Page, 8 May 2019).
[3] Further possibilities could include footage of candidates withdrawing from a race and endorsing another candidate, a politician committing an offence, accepting a bribe, or outlining a fake policy position. Given the ease of use of the technology, users are limited only by their creativity.
[4] Josh Bavas, ‘One Nation Election Candidate Steve Dickson Resigns over Strip Club Videos’, ABC News (online, 30 April 2019).
[5] Hannah Denham, ‘Another Fake Video of Pelosi Goes Viral on Facebook’, The Washington Post (online, 3 August 2020).
I. Altering Voter Preference:
The exact number of voters who could be misled by a deepfake is not
clear. However, targeting marginal seats during an election could make a
significant impact, as influencing even 100 voters in such seats could be
consequential.9 A 2020 study indicated that about 15% of participants in
a controlled experiment believed a deepfake of Obama was genuine.10
Although not every viewer who believes a deepfake will necessarily
change their vote (due in part to strong party allegiance),11 the potential
impact should not be dismissed. Deepfake videos could also affect
elections indirectly by forcing candidates to withdraw or damaging a
candidate’s or party’s fundraising efforts. Additionally, while some
research suggests disinformation has minimal direct impact on
elections,12 there is evidence that disinformation, including deepfakes,
has affected Australian elections. For instance, the Australian Labor
Party acknowledged the impact of the misleading ‘death tax’ ads during
its 2019 campaign, although it conceded that this alone did not
determine the election outcome.13
[6] Clint Watts and Tim Hwang, ‘Deepfakes Are Coming for American Democracy: Here’s How We Can Prepare’, The Washington Post (online, 10 September 2020).
[7] ‘Donald Trump Refuses to Commit to Peaceful Transfer of Power if He Loses US Election’, ABC News (online, 24 September 2020).
[8] ‘FBI Chief Calls Capitol Attack Domestic Terrorism and Rejects Trump’s Fraud Claims’, The Guardian (online, 11 June 2021).
[9] For example, in the 2020 Northern Territory election, 11 of 25 seats would have changed hands if 100 voters had been swayed by a deepfake: ‘NT Summary of Two Candidate Preferred Votes by Division’, Northern Territory Electoral Commission (Web Page, 2020). The average turnout for each division was 4,235 voters, so swaying roughly 2.5% of voters could have altered 11 of 25 contests.
[10] Cristian Vaccari and Andrew Chadwick, ‘Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News’ (2020) 6(1) Social Media + Society 1, 6.
[11] Spencer McKay and Chris Tenove, ‘Disinformation as a Threat to Deliberative Democracy’ (2020) Political Research Quarterly 1, 1.
Moreover, while deepfakes might not directly change which party
secures a majority, they could influence individual electoral contests.
Deepfakes have the potential to target specific politicians, such as by
creating videos showing them engaging in illegal activities. At the federal
level, Australia is particularly vulnerable to such targeted attacks, with
36 lower house seats held by margins of less than 5%, 84 by less than
10%, and 129 by less than 15%.14 The greater concern may be the
secondary threat of deepfakes, as the 2020 study revealed that only
50.8% of participants were not deceived by the deepfake video.15 The rest
could not discern if the video was real or fake. This uncertainty
underscores the broader threat deepfakes pose: diminishing trust in
video evidence and news, which can erode public confidence in
democratic processes.
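The arithmetic behind this vulnerability is simple to verify. The short sketch below (Python, illustrative only) reproduces the figures cited above and in notes 9 and 14; the 151-seat total is the size of the House of Representatives at the 2019 election.

```python
# Reproducing the marginal-seat arithmetic cited in the text (notes 9 and 14).

# NT 2020 (note 9): average division turnout was 4,235 voters, and 11 of 25
# seats could have changed hands if roughly 100 voters had been swayed.
nt_turnout = 4235
swayed = 100
print(f"Swing per NT division: {swayed / nt_turnout:.1%}")  # ~2.4%, i.e. the ~2.5% in note 9

# Federal picture (note 14): seats held by slim margins, out of the
# 151 lower house seats at the 2019 election.
total_seats = 151
for margin, seats in [("<5%", 36), ("<10%", 84), ("<15%", 129)]:
    print(f"{seats} seats ({seats / total_seats:.0%}) held by margins {margin}")
# Prints 24%, 56% and 85%, matching note 14.
```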
II. Decreasing Trust in Democracy and Democratic Institutions:
Increasingly, Australians are relying on digital platforms like Facebook
for news consumption.16 This trend aligns with a global shift toward
easily accessible and shareable content,17 which facilitates the
widespread distribution of fake news.
[12] Id.
[13] See, eg, Craig Emerson and Jay Weatherill, ‘Review of Labor’s 2019 Federal Election Campaign’ (Report, 7 November 2019) 79–80.
[14] Corresponding to 24%, 56% and 85% of lower house seats respectively. Analysis conducted on AEC data from the 2019 federal election and the 2020 Eden-Monaro by-election: Australian Electoral Commission, ‘Seat Summary’, Tally Room 2019 Federal Election (Web Page, 2019) (results on file with author).
[15] Cristian Vaccari and Andrew Chadwick, ‘Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News’ (2020) 6(1) Social Media + Society 1, 6.
[16] Australian Competition and Consumer Commission, ‘Digital Platforms Inquiry’ (Final Report, June 2019) ch 1; see also Christopher Hughes, ‘News Sources in Australia in 2021’, Statista (online, 12 July 2021).
[17] Katie Elson Anderson, ‘Getting Acquainted with Social Networks and Apps: Combating Fake News on Social Media’ (2018) 35(3) Library Hi Tech News 1.
This move to digital media has been accompanied by a decline in trust in politicians and politics in
general.18 Political deepfakes could further undermine this trust by
allowing candidates to dismiss genuine footage as fake news, thus fueling
claims of being set up.19 This potential to degrade trust in the political
system is a significant concern for political scientists, as threats to
individual elections can ultimately threaten democratic integrity.20 More
broadly, the rise of disinformation could fundamentally damage the
concept of 'truth' in elections, leading to severe consequences. For
instance, disproven claims of electoral fraud in the US have led to a wave
of restrictive electoral reforms aimed at 'safeguarding' future elections,21
which have been deemed constitutional by the US Supreme Court.22
These changes, combined with gerrymandering, may influence future
election outcomes regardless of voters' choices on election day.
Deepfakes could worsen these issues by fostering distrust among voters,
who may struggle to discern whom or what to trust, thereby enabling
lawmakers to enact anti-democratic laws under the guise of election
protection. The accessibility of deepfake technology is particularly
alarming, as it allows anyone with a computer or smartphone to create
and disseminate these videos.23 Once uploaded, deepfakes can be quickly
re-shared, making them nearly impossible to remove, even when proven
false. For example, the widely discredited video "Plandemic" was
persistently re-uploaded to alternative sites after being removed from
Facebook and YouTube, with some commentators suggesting that the
effort to suppress it only increased its visibility.24
[18] Simon Tormey, ‘The Contemporary Crisis of Representative Democracy’ (Papers on Parliament No 66, Parliament of Australia, October 2016) 90; Russell J Dalton, Democratic Challenges, Democratic Choices: The Erosion of Political Support in Advanced Industrial Democracies (Oxford University Press, 2004).
[19] See, eg, comments made by then President Donald Trump during the 2020 election: David Smith, ‘Wounded by Media Scrutiny, Trump Turned a Briefing into a Presidential Tantrum’, The Guardian (online, 14 April 2020).
[20] Spencer McKay and Chris Tenove, ‘Disinformation as a Threat to Deliberative Democracy’ (2020) Political Research Quarterly 1, 1.
[21] Sam Levine, ‘The Republicans’ Staggering Effort to Attack Voting Rights in Biden’s First 100 Days’, The Guardian (online, 28 April 2021).
[22] Brnovich v Democratic National Committee, 594 US ___ (2021). For commentary see, eg, Lauren Fedor, ‘US Supreme Court Upholds Arizona Law in Voting Rights Challenge’, Financial Times (online, 2 July 2021).
[23] Best results require a mid-to-high-end graphics card: Timothy B Lee, ‘I Created My Own Deepfake: It Took Two Weeks and Cost $552’, Ars Technica (online, 16 December 2019).
[24] Andrea Bellemare, Katie Nicholson and Jason Ho, ‘How a Debunked COVID-19 Video Kept Spreading after Facebook and YouTube Took It Down’, CBC News (online, 21 May 2020).
Three main categories of harm are associated with deepfakes: (1) Harms
to viewers/listeners (i.e., those who watch or listen to deepfakes); (2)
Harms to subjects (i.e., the individuals targeted by deepfakes and those
connected to them); and (3) Harms to social institutions (i.e., the broader
system in which deepfakes operate). In the context of elections, voters
are the primary viewers/listeners, candidates and campaigns are the
main subjects, and the democratic election system is the social institution
at risk. Amplification plays a critical role in the extent of these harms,
although it is not a harm in itself but rather a factor that exacerbates
other harms. In elections, broad distribution of deepfakes compounds the
harm to voters, candidates, campaigns, and the overall integrity of
elections. Consequently, amplification (and the role of platforms in
facilitating it) is a central reason deepfakes have attracted significant
public concern and pose a potential threat to electoral integrity.
We can illustrate the harm caused by political deepfakes in the context
of elections through eight scenarios, grouped below by the principal
harms they exemplify:
1. Deception
Scenario 1. A deepfake video is used to exaggerate the military record of a candidate.
Scenario 2. A deepfake video of a candidate falsely shows the candidate making disparaging and hateful comments about the opponent’s base.
Scenario 3. A video of deepfaked testimonials is used to show individuals falsely claiming they had an affair with the candidate.
Scenario 4. A targeted email campaign deceives recipients about their correct polling location and uses a deepfake video of a respected public figure to lend credibility to the message.
Scenario 5. A deepfake audio clip of a candidate falsely represents the candidate saying disparaging things about voters.
Scenario 6. A deepfake image is used to threaten to deceive an individual voter’s friends and family about their participation in pornography.
Scenario 7. A deepfake audio clip of an exchange between campaign staff and a debate moderator falsely suggests a candidate cheated in a public debate.
Scenario 8. A deepfake parody video of a candidate is taken out of context on social media and some viewers no longer know it is a parody.

2. Intimidation
Scenario 6. Individuals are threatened via a spoofed text message that a deepfake pornographic image of them will be shared publicly and with friends and family if they vote on election day.

3. Reputation
Scenario 2. A deepfake video of a candidate shows them making disparaging and hateful comments about the opponent’s base, negatively impacting their reputation amongst that group.
Scenario 3. A video is released showing deepfaked testimonials of individuals claiming they had an affair with the candidate, tarnishing the candidate’s reputation amongst those who are morally opposed.
Scenario 5. A deepfake audio clip of a candidate saying disparaging things about a politically important group of voters hurts the candidate’s reputation amongst that group.
Scenario 6. A deepfake image is used to threaten to deceive an individual voter’s friends and family about their participation in pornography, thus tarnishing the voter’s reputation.
Scenario 7. A deepfake audio clip of an exchange between campaign staff and a debate moderator suggests a candidate cheated in a public debate.

4. Misattribution
Scenario 8. A deepfake parody video exaggerates the views of a candidate and, when taken out of context, creates misattributed negative credit for those exaggerated views.

5. Undermining Trust
Scenario 6. Individuals threatened with a deepfake pornographic image are deterred from voting, but it is unclear how many people were affected and therefore how it may have impacted the outcome.
Scenario 7. A fabricated exchange between campaign staff and a debate moderator suggests a candidate cheated in a public debate, calling into question the legitimacy of the political process.
Harms to Voters
Deception
Deception is the primary harm evident across the scenarios. In many of
the deepfakes described in our eight scenarios, the aim is to mislead
voters into believing false information about a candidate. For instance, in
scenario 2, a deepfake falsely portrays a candidate making offensive and
hateful remarks, while in scenario 3, a video features fabricated
testimonials claiming to have witnessed the candidate engaging in
inappropriate behavior. However, deception is not limited to targeting
candidates alone. In scenario 4, the deception involves misleading
information about polling locations provided by a trusted leader, and in
scenario 6, a compromising deepfake targets an individual voter by
falsely threatening to reveal damaging information about their behavior
to their friends, family, and the public. Crucially, most of these scenarios
involve deliberate deception, as creating a deepfake is a purposeful act.
The concept of intent is significant here because it differentiates between
misinformation (which lacks intent to deceive) and disinformation (which
involves deliberate intent to deceive).25
In scenarios 2 through 7, the deepfakes involve disinformation, as they
were deliberately created to deceive voters or their family and friends (as
in scenario 6). In contrast, scenario 1 features a synthesized depiction of
a battle scene where the creators might have genuinely believed they
were accurately reproducing events. Consequently, this video may not
straightforwardly be classified as disinformation. Moreover, if the video
accurately reflects historical events, it might not qualify as
misinformation either. The issue with the video in scenario 1 is that
viewers were not informed that some footage was synthesized, leading
them to potentially assume the entire video was real. Awareness of the
interpretative nature of such depictions can alter viewers' perceptions.
Intent is also crucial in the context of deepfake parodies, as shown in
scenario 8. Parodies are generally created for comedic purposes, and
their creators do not intend to deceive viewers.26 However, in scenario 8,
the parody is removed from its original context and shared on social
media without any indication that it is a parody. As a result, some viewers
may perceive it as real. Whether this constitutes misinformation or
disinformation depends on the intent of the person who shared the
deepfake in the new context. If the sharer did not intend to deceive but
viewers were misled, it would be classified as misinformation. If the
sharer intended to deceive viewers, it would be considered
disinformation.
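This intent test can be stated almost mechanically. The sketch below (Python; the class and field names are our own illustration, not a formal taxonomy) encodes the rule the text describes: intent to deceive, by the creator or by a re-sharer, makes the content disinformation; deception without such intent is misinformation.

```python
from dataclasses import dataclass

@dataclass
class SharedDeepfake:
    creator_intended_deception: bool  # was the deepfake made to deceive?
    sharer_intended_deception: bool   # did the (re-)sharer mean to deceive?
    viewers_misled: bool              # were viewers actually misled?

def classify(d: SharedDeepfake) -> str:
    # Deliberate intent to deceive, at creation or at sharing, makes it
    # disinformation (scenarios 2-7, or scenario 8 shared in bad faith).
    if d.creator_intended_deception or d.sharer_intended_deception:
        return "disinformation"
    # No intent to deceive, but viewers misled anyway: misinformation
    # (scenario 8 re-shared carelessly, stripped of its parody context).
    if d.viewers_misled:
        return "misinformation"
    return "neither (e.g., a parody recognized as such)"

# Scenario 8, shared without intent to deceive but misleading some viewers:
print(classify(SharedDeepfake(False, False, True)))  # -> misinformation
# The same parody re-shared by someone who wants viewers fooled:
print(classify(SharedDeepfake(False, True, True)))   # -> disinformation
```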
Deception is particularly harmful because it undermines individuals'
ability to make informed decisions based on accurate information. In an
election, intentionally spreading false information about a candidate
manipulates voters and influences their choices based on misinformation.
[25] Wardle C and Derakhshan H (2017) Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe.
[26] Tandoc EC Jr, Zheng WL and Ling R (2018) Defining ‘Fake News’: A Typology of Scholarly Definitions. Digital Journalism 6(2): 137–53.
Even if a voter would have selected the same candidate regardless of the
deception, their decision-making autonomy is compromised because it
was made based on inaccurate information. While false information
generally affects elections, deepfakes are especially concerning because
they enhance the perceived authenticity and credibility of their
falsehoods.
When deepfakes are widely disseminated, the harm is magnified. The
difference between individual and widespread deception lies in the
number of people affected. The rapid and extensive amplification enabled
by digital and social media platforms makes it more challenging to
mitigate the damage.
Intimidation
To intimidate means to instil fear in a person, often through a threat of
injury or other harm. The ethical concerns surrounding deepfakes have already
surfaced, particularly in relation to deepfake pornography, which initially
brought deepfakes into the public eye. Research has shown that deepfake
pornography is predominantly gendered and sexualized, with much of the
actual harm being inflicted on women.27
Scenario 6 demonstrates how deepfakes can be used to intimidate voters
in an election context. In this scenario, a threat is made to release a
defamatory pornographic deepfake of the voter if they cast their vote.
The intent is to generate fear that the voter’s family, friends, and others
might see and believe the deepfake. Even if the deepfake is not believed
by those who view it, the voter may still suffer a significant loss of
dignity, and experience feelings of humiliation or shame.
The broader the spread of the deepfake, the greater the potential for
humiliation. Intimidation is harmful on its own, and in the election
context, if successful, it undermines a fundamental democratic right: the
right to vote.
[27] Paris B and Donovan J (2019) Deepfakes and Cheap Fakes. Data & Society.
Although in scenario 6 the amplification of the deepfake is somewhat
limited by the use of text messaging and its personalized
nature, future advancements may make it easier to personalize
deepfakes. This could result in a different form of amplification, where
personalized deepfakes are distributed through narrow, nonpublic
channels (e.g., private chat) rather than broad public or social media
channels, leading to widespread, repeated intimidation (i.e., one-to-few
amplification occurring many times).
Harms to Subjects
In the context of elections, deepfakes primarily target campaigns and
candidates. However, these videos may also feature auxiliary subjects—
individuals who are not the main targets but are included in the deepfake
for added realism or to support the intended narrative. For example, in
scenario 1, soldiers depicted alongside a candidate are auxiliary subjects
included to enhance the reenactment's authenticity. Similarly, in scenario
7, the debate moderator (a journalist) and a campaign staff member are
auxiliary subjects who are depicted as being involved in or complicit with
alleged cheating by the candidate.
Auxiliary subjects can suffer harm similar to that experienced by primary
subjects. The primary harms include damage to their reputation and
misattribution of actions or intentions.
Reputational Harm
In scenario 6, deepfakes are employed to intimidate voters by
threatening damage to their reputations. Similarly, reputational harm is a
key issue in four other scenarios, but here the focus is on candidates. In
scenarios 2 and 5, deepfake videos show a candidate making statements
that undermine their perceived authenticity; in scenario 3, fake
testimonials suggest that a candidate is hypocritical and unsavory; and in
scenario 7, a fabricated incident falsely implies that a candidate cheated
in a public debate. In these cases, the deepfake is not used to intimidate
through a threat but rather to directly cause harm through publication.
The harm is inherent in the release of the video or audio itself.
Legally, such deepfakes might be considered defamation, which involves
damaging someone's reputation. However, U.S. courts have been hesitant
to intervene in campaign speech, and there is significant debate about
the constitutionality of laws prohibiting false campaign speech.28 This
legal ambiguity has made the regulation of false campaign speech
complex, with false claims generally not being prohibited. Despite this,
reputation is a crucial asset in campaigns, where shaping public
perceptions of a candidate’s character and actions is key. Competitors
aim to undermine their rivals' reputations, making the issue of false
campaign speech particularly contentious.
While legal restrictions on false campaign speech are complex and often
not enforced, the ethical implications of deepfakes are clear. The harm
caused by false speech in deepfakes is undeniable, amplified by the broad
reach and rapid dissemination of such content, making it challenging to
address. Deepfakes released in the final stages of an election can be
especially difficult to counteract effectively, while those released earlier
in the campaign might still be challenging to address but allow some
time to debunk their falsehoods.
Misattribution
Creating a deepfake involves using a subject’s image and/or voice, either
by altering existing materials or by synthesizing new ones. This process
raises concerns about violating ownership rights. Beyond the
reputational damage caused by such use, there is an additional harm
associated with using and distributing someone’s likeness without
permission. The legal framework surrounding this issue is complex and
varies by jurisdiction.29 Public figures have rights to their image,
primarily for commercial purposes, allowing them to benefit financially
from their likeness.30
[28] Hasen RL (2013) A Constitutional Right to Lie in Campaigns and Elections? Montana Law Review 74.
[29] Rothman JE (2018) The Right of Publicity: Privacy Reimagined for a Public World. Harvard University Press.
[30] Berkman H (1976) The Right of Publicity: Protection for Public Figures and Celebrities. Brooklyn Law Review 42(3): 527–557.
Although candidates for public office also have
publicity rights, it remains uncertain how a legal claim for harm due to
deepfake usage would be handled, especially in the context of non-
commercial harm.31 Courts are generally reluctant to regulate false
campaign speech.
The additional harm of using someone’s image or likeness in a deepfake,
beyond reputational damage, can be understood as “persona plagiarism.”
This concept is akin to plagiarism but focuses on the source rather than
the content. Plagiarism involves using someone else's language or ideas
without permission and presenting them as one's own. Legal scholars
define plagiarism as “non-consensual fraudulent copying”.32 In
plagiarism, one misattributes another’s text to oneself, whereas in
deepfakes, one manipulates another’s image or voice to create something
new and falsely attributes that creation to the original subject.
Essentially, plagiarism involves misattributing the authorship of words,
while persona plagiarism involves misattributing the expression created
using someone’s persona.
Credit plays a role here: in plagiarism, the original author is denied
credit; in persona plagiarism, credit is either falsely given to the subject
of the deepfake or misattributed. The typical response to plagiarism is
social sanction against the plagiarist once detected.33 In deepfakes, the
lack of attribution to the creator hinders similar normative sanctioning.
Parodies also involve misattribution, but in a different way. The creator of
a deepfake parody uses a candidate’s likeness and voice to add
exaggerated or embarrassing features. If viewers recognize it as a
parody, no deceptive attribution occurs, and thus no persona plagiarism.
However, if the parody is moved to a context where it is not identified as
such, and viewers believe the exaggerated characteristics are real, it
becomes a form of persona plagiarism.
[31] Spivak R (2019) ‘Deepfakes’: The Newest Way to Commit One of the Oldest Crimes. Georgetown Law Technology Review 339.
[32] Posner RA (2007) The Little Book of Plagiarism. Pantheon Books.
[33] Posner RA (2007) The Little Book of Plagiarism. Pantheon Books.
Identifying the responsible party is challenging since the parody creator did not intend to deceive, and
those who moved the parody might not have fully understood its
implications.
The amplification of misattribution in deepfakes can be compared to
plagiarism. If a plagiarized document is seen by only a few (e.g., a
student’s paper for a teacher), the harm is contained. However, if the
plagiarized work is published widely (e.g., in a major newspaper), the
harm increases due to the larger audience. Similarly, a deepfake shared
privately might cause limited harm, but a deepfake widely distributed on
social media can significantly amplify misattribution, causing extensive
reputational damage.
Harms to Social Institutions
Decreased Faith in Elections
In their comprehensive analysis of deepfakes, Chesney and Citron (2019)
identify nine potential societal harms, including the manipulation of
elections. They also highlight the risks of distorting democratic discourse
and undermining trust in institutions—issues of significant concern in the
context of democratic elections. The National Democratic Institute (NDI)
explicitly addresses these concerns, stating: “Deliberately blurred lines
between truth and fiction amplify voter confusion and devalue fact-based
political debate … Manipulation of voter and civic information dampens
participation and degrades trust in election management bodies.”
As previously noted, the legal framework for campaign speech is largely
inadequate in tackling the deepfake threat due to its emphasis on
protecting First Amendment rights. This is particularly true for deepfake
parodies, which are generally defended as free expression. The law often
prioritizes freedom of speech over preventing false campaign speech to
avoid politicizing regulation.34 Despite these legal protections, deepfakes
still pose a significant threat by eroding trust in electoral outcomes.
[34] Rowbottom J (2012) Lies, Manipulation and Elections: Controlling False Campaign Statements. Oxford Journal of Legal Studies 32(3): 508–535.
They
contribute to distorting campaign results and undermining public
confidence in those results.
Scenarios 6 and 7 demonstrate specific ways in which deepfakes can
manipulate voters and reduce participation, thus damaging trust in the
electoral process. Scenario 6 involves using deepfakes to coerce voters
into abstaining by threatening to release fabricated pornographic
images. Even if the exact number of deterred voters is unknown, the
awareness of such tactics can erode trust in election integrity.35 Scenario
7 features a deepfake suggesting that a candidate cheated during a
public debate, thereby questioning the legitimacy of the political process.
These scenarios highlight how deepfakes can disrupt electoral processes.
However, all the harms illustrated in other scenarios can also lead to
diminished trust in electoral processes. For example, deception
undermines individual decision-making autonomy and, when revealed,
can erode trust in the ability of voters to make informed choices.
Reputational harm and misattribution distort voters' understanding of
candidates, and even if an individual recognizes the manipulations, they
might believe others do not, further degrading trust in the collective
decision-making process. This introduction of uncertainty about the well-
informedness of peers contributes to a broader erosion of trust in
democratic processes.
[35] Daniels GR (2009) Voter Deception. Indiana Law Review 43.
PENALIZING DEEPFAKES IS A REASONABLE RESTRICTION UNDER ARTICLE 19(2):
I. Interests of Sovereignty and Integrity of India:
Foreign electoral interventions involve both covert and overt
efforts by powerful nations to influence elections in other
countries. Although electoral intervention is just one method
among many for achieving regime change abroad, it has
garnered significant attention in recent years. Research on the
impact of these interventions has expanded since 2011, leading
to a deeper understanding of their effects. Historically,
interventions in foreign elections are not new; from 1946 to
2000, the United States and the USSR/Russia engaged in 117
such interventions, impacting approximately one in nine
competitive national elections during this period. Of these
interventions, the United States was responsible for 81, while
Russia (including the former Soviet Union) accounted for 36 of them.36
These interventions included tactics such as funding preferred
campaigns and threatening to withhold foreign aid if the
opposing side won. Notably, 68% of these interventions were
covert rather than overt.
A 2019 study by Lührmann et al. from the Varieties of
Democracy Institute in Sweden found that in 2018, the most
significant interventions involving false information were carried
out by China in Taiwan and by Russia in Latvia. Other notable
interventions were in Bahrain, Qatar, and Hungary, while
Trinidad and Tobago, Switzerland, and Uruguay experienced the
lowest levels of interference.37
Research categorizes foreign electoral interventions into two
types: ‘partisan intervention,’ where a foreign power supports a
specific side, and ‘process intervention,’ where the focus is on
supporting the democratic processes regardless of the outcome.
These interventions can be motivated by global interests or self-
serving objectives and may involve nation-states, international
organizations, non-governmental organizations, or individuals.38
[36] Levin, Dov H (June 2016) “When the Great Power Gets a Vote: The Effects of Great Power Electoral Interventions on Election Results”, International Studies Quarterly 60(2): 189–202, doi:10.1093/isq/sqv016. For example, the US and the USSR/Russia intervened in one of every nine competitive national-level executive elections between 1946 and 2000. https://academic.oup.com/isq/article/60/2/189/1750842 Retrieved on January 21.
[37] Democracy Facing Global Challenges: V-Dem Annual Democracy Report 2019 (Report, 14 May 2019) 36, accessed on January 24, 2020.
[38] Id.
In his study “When the Great Power Gets a Vote: The Effects of Great
Power Electoral Interventions on Election Results”, Dov H. Levin argues
that partisan electoral interventions by great powers tend to
significantly benefit the
candidate or party receiving aid. The study identifies two critical
conditions for such interventions:
1. Motive: A great power must perceive its interests as being
endangered by a particular candidate or party, making
conventional methods of incentives and disincentives seem
ineffective or too costly.
2. Opportunity: There must be a significant domestic actor
within the target country willing to cooperate with the
intervention. This cooperation is crucial for providing local
knowledge and information on how best to influence the
electorate.
Without these conditions, a great power is unlikely to engage in
electoral interventions. The study also suggests that
interventions are unlikely if the client is expected to lose the
election regardless of the intervention. Despite these insights,
there is a consensus among scholars that partisan electoral
interventions will likely remain a tool for great powers seeking
to influence the leadership of other states.39
Recent technological advancements have transformed the
methods used to influence elections. Techniques developed in
marketing, such as analyzing vast amounts of data to predict
consumer behavior, have been adapted for electoral purposes.
This involves the use of:
- Big Data Analytics: To profile voters and understand their
behavior and preferences.
- Artificial Intelligence (AI): For processing and analyzing data
on a massive scale.
- Machine Learning (ML): To predict voter behavior and
optimize strategies.
- Intelligent Algorithms and Autonomous Bots: To engage with
voters and influence their preferences.
[39] Dov H Levin, ‘When the Great Power Gets a Vote: The Effects of Great Power Electoral Interventions on Election Results’, https://academic.oup.com/isq/article/60/2/189/1750842, accessed on January 24, 2020.
Cambridge Analytica is a notable example of an organization
that utilized these technologies. It created approximately 220
million personality profiles in the U.S. alone, using up to 5,000
data points per individual. This extensive profiling allowed for
highly targeted advertising campaigns, which impacted voter
behavior and turnout, as seen in the UK’s 2017 general elections
and the 2018 US midterms.40
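As a rough, deliberately simplified illustration of how such profiling operates mechanically (the attributes, weights, and scores below are invented for the example; real operations reportedly drew on thousands of data points per person):

```python
# Toy sketch of data-driven voter targeting: score each voter's likely
# persuadability from a few profile attributes, then rank voters so that
# tailored messaging can be aimed at the most persuadable first.

voters = [
    {"id": "A", "swing_history": 0.8, "issue_salience": 0.6, "turnout_prob": 0.9},
    {"id": "B", "swing_history": 0.1, "issue_salience": 0.9, "turnout_prob": 0.7},
    {"id": "C", "swing_history": 0.7, "issue_salience": 0.7, "turnout_prob": 0.4},
]

def persuadability(v: dict) -> float:
    # Hypothetical weighting: past willingness to switch votes matters most,
    # then salience of the targeted issue; discounted by turnout likelihood.
    return (0.6 * v["swing_history"] + 0.4 * v["issue_salience"]) * v["turnout_prob"]

for v in sorted(voters, key=persuadability, reverse=True):
    print(v["id"], round(persuadability(v), 2))
```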
The Brookings report, “Malevolent Soft Power, AI, and the
Threat to Democracy” highlights how advancements in artificial
intelligence (AI) and other technologies pose new challenges to
democratic processes. The key findings emphasize that while the
internet can mobilize political action, it also has the potential to
disseminate false information, suppress votes, and interfere with
election machinery. By 2016, social media had shifted from
being a democratic tool to a weapon against democracy.41
The report details Russia's interference in the 2016 U.S.
election, noting that similar activities occurred elsewhere, including
in Ukraine, in the UK's Brexit referendum, and in various
European nations. The aim of this interference was not just to
influence elections but to undermine trust in democratic
systems. Allegedly, Russian operatives stole internal campaign
data to target voters and use disinformation to suppress voter
turnout and create divisions among voter groups.42
[40] Jacob Bergdahl, ‘How AI Can Make You the President: For Politicians Aiming for a Higher Seat of Power, There Is an Immense Power in Understanding Artificial Intelligence Technologies, No Matter the Ethics’ (4 June 2019).
[41] Elaine Kamarck, ‘Malevolent Soft Power, AI, and the Threat to Democracy’, Brookings (29 November 2018) https://www.brookings.edu/research/malevolent-soft-power-ai-and-the-threat-to-democracy/.
[42] Id.
An article from The Atlantic outlines several methods of
interfering in elections without violating existing laws:
1. State-Run News Organizations: Exploiting state-supported
media like RT, Sputnik, China Daily, and Al Jazeera to inject
political agendas into journalism. These outlets can advocate
for or against candidates or parties openly or covertly.
2. Online Political Advertising: Many democracies lack
regulations on foreign financing for online campaigns and
ads. This loophole allows governments and non-state actors
to use fake news, disinformation, and fake social media
accounts to distort political sentiment and achieve strategic
outcomes. AI and machine learning tools enhance the
effectiveness of these campaigns.
3. Non-Profit Organizations: Using non-profits, which are
ostensibly for social welfare, to influence elections. These
organizations can funnel money for political purposes under
the guise of charity.
4. Foreign Influence Through Public Figures: High-profile
individuals, like Pope Francis during the 2016 U.S.
presidential campaign, can exert influence on electoral
outcomes. Although not directly illegal, such statements can
sway public opinion and impact election results.43
These methods, while sometimes operating within legal
boundaries, represent significant threats to democratic
integrity.
[43] Uri Friedman, ‘5 Ways to Interfere in American Elections—Without Breaking the Law: To Influence U.S. Politics, Foreign Governments Don’t Have to Hack One Party and Collude with the Other’ (24 July 2017).
The misuse of technology and media to spread
disinformation, suppress votes, and manipulate public opinion
not only impacts individual elections but also erodes overall
trust in democratic institutions and processes. As AI and social
media technologies continue to evolve, addressing these threats
and safeguarding democratic practices become increasingly
critical.
Dov H. Levin’s study, “When the Great Power Gets a Vote: The
Effects of Great Power Electoral Interventions on Election
Results” examines historical cases of great power interventions
in foreign elections. The 1977 Indian parliamentary elections
serve as a notable example. The Soviet Union covertly
intervened in favor of Mrs. Indira Gandhi and the Congress
party. However, despite this intervention, the Congress party
suffered a significant defeat, losing over 150 seats to the Janata
party, which won an absolute majority with 295 seats out of 542
in the Lok Sabha. The Soviet intervention was estimated to have
influenced only around eleven additional seats, translating to a
vote swing of approximately two percent—insufficient to affect
the overall election outcome.44 This case underscores that even
well-intentioned interventions may fail to achieve their desired
results, especially when the broader electoral trend is strongly
unfavorable to the intended beneficiary. The impact of such a
failed intervention on international relations, if any, warrants
further examination.
India, with its vast population of active social media users,
presents a fertile ground for digital electoral interventions. The
country has approximately 250 million Facebook users, 400
million WhatsApp users, and over 800 million mobile phone
users. WhatsApp, in particular, has played a significant role in
recent Indian elections, although its influence has received less
global attention compared to Facebook and Instagram.
[44] Levin, Dov H (June 2016) ‘When the Great Power Gets a Vote: The Effects of Great Power Electoral Interventions on Election Results’, 200.
The
platform has been used not only for ordinary campaign
messages but also for spreading sectarian tensions and false
information.45 The widespread use of mobile internet and social
media in India facilitates sophisticated voter profiling and
targeted campaigning, potentially influencing electoral
outcomes.
Srikanth Kondapalli, a Professor of Chinese Studies, suggests
that China's increasing trade and investment in India may be
linked to growing political influence. Chinese technology
companies, such as Huawei, ZTE, Xiaomi, Vivo, and Oppo, have
a substantial presence in India. Many of these companies
produce mobile devices with Chinese operating systems and
apps, and Chinese firms like Hikvision provide surveillance
equipment for various Indian government agencies. While no
conclusive study has confirmed vulnerabilities arising from this
technological footprint, Kondapalli warns that China's growing
influence could extend to electoral interference. He cites the
2017 decision of the 19th Communist Party Congress, which
marked China’s intention to "occupy the centre stage" globally
and export the "China model." This ambition could involve using
technology and soft power to influence other countries' electoral
processes.46
The intersection of technology, social media, and foreign
influence presents significant challenges to democratic
processes. The examples of Soviet intervention in India and the
role of social media in recent Indian elections illustrate the
complexities of modern electoral interference. The potential for
Chinese technological influence adds another layer of concern.
[45] Vindu Goel, ‘In India, Facebook’s WhatsApp Plays Central Role in Elections’, The New York Times (online, 14 May 2018) https://www.nytimes.com/2018/05/14/technology/whatsapp-india-elections.html, retrieved on January 20, 2020.
[46] Srikanth Kondapalli, ‘China’s Influence on the Elections in India: China’s Economic Clout Is Transforming, Slowly but Surely, into Political Leverage’.
Given these developments, there is a pressing need for further
research into the methods and impacts of foreign influence on
elections. Developing safeguards and pre-emptive measures to
protect the integrity of electoral processes is crucial for
maintaining democratic sanctity in an increasingly
interconnected and digital world.
Sovereignty necessitates both internal and external legitimacy.47
Internal legitimacy involves the acceptance of a government’s
authority by its citizens, while external legitimacy pertains to
the recognition of the state by other nations. Sovereignty is
concerned with three key "assets" that require governance: (1)
power, termed foundational sovereignty; (2) physical and digital
assets, including territory, known as territorial sovereignty; and
(3) the organizational structure of economy, society, and
democracy, referred to as institutional sovereignty (Bickerton et
al., 2022). These concepts of internal and external legitimacy
align with foundational, territorial, and institutional sovereignty.
For example, state sovereignty involves the internal and
external acknowledgment of power arrangements. Effective
governance requires authoritative institutions, including those
that ensure free elections. "Territory" can encompass resources,
values, and culture that must be recognized both internally and
externally. Lastly, government institutions need domestic
acceptance and international legitimacy, which can include
aspects like extraterritorial jurisdiction.48
Deepfakes undermine the concept and institutions of democracy
as well. Meiklejohn’s theory posits that freedom of speech
strengthens democracy by encouraging the deliberation and
discussion through which ‘popular sovereignty’ triumphs; deepfakes
corrupt that deliberative process by injecting fabricated speech into it.
[47] Biersteker T (2012) ‘State, Sovereignty and Territory’ in Handbook of International Relations. SAGE Publications Ltd.
[48] Klabbers J (2021) International Law (3rd ed). Cambridge University Press, Cambridge.
Let us examine how digital technologies are altering, and being
influenced by, sovereignty. Traditionally, sovereignty was
associated with physical assets and geographical space. In the
digital age, however, sovereignty now extends to digital assets
like national digital identities, domain names, health data, and
digital representations of products or smart cities. Cyberspace
has introduced a mix of physical elements such as servers and
data centers, which are subject to national jurisdiction,
alongside transnational networks and digital services that often
operate beyond physical boundaries. Some digital services can
circumvent traditional sovereign jurisdiction by operating in the
cloud, leading to an expanded concept of territory in the digital
era. Countries that fail to recognize the importance of digital
territory may risk losing valuable assets. For example, genetic
and health data are not just ordinary data but are considered
national assets because they originate from the population and
contribute to products and services like new medicines. To
maintain control over digital technology within traditional
notions of sovereignty, states might implement data localization,
creating technological borders. Another approach is to control
access and usage of data under national jurisdiction, supported
by technologies such as homomorphic encryption, which allows
data to be processed without being exposed. Thus, digital
technology increasingly shapes and defines the institutional
aspects of sovereignty.
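To see why homomorphic encryption can support this kind of sovereignty-preserving control, consider a minimal toy sketch of the Paillier cryptosystem (Python; the tiny parameters are invented for illustration and are not secure): a foreign processor can compute on the ciphertexts without ever seeing the underlying data.

```python
# Minimal Paillier-style additively homomorphic encryption (toy primes,
# NOT secure). A processor can add encrypted values without decrypting.
import math
import random

p, q = 293, 433                # toy primes; real keys use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # decryption helper (valid because g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts, so
# aggregates (e.g., summed health statistics) can be computed on data
# that never leaves encrypted form.
a, b = encrypt(20), encrypt(22)
assert decrypt((a * b) % n2) == 42
print(decrypt((a * b) % n2))   # -> 42
```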
One of the hallmarks of Indian democracy is free and fair
elections. Foreign intervention in such elections is a
cybercrime, made increasingly easy by developments in AI
and deepfake technology. It is a direct attack on the sovereignty and
integrity of India. In a democracy, the power lies with the
people to elect a leader, and they have the right of access
to truthful and accurate information so as to make informed
choices while casting their votes. A government cannot be
said to be legitimate if it is elected through falsifications.