AI’s Double-Edged Sword: The dangers and
implications of deepfakes
Imagine a world of uncertainty where no one believes anyone and distinguishing between the real and the fake is impossible. A doctored video clip has enough power to sway public opinion, manipulate young minds, target politicians and celebrities, endanger democracy, and thus cause societal chaos on a massive scale. Technological advancement, once a blessing, becomes a curse for humanity when weaponized this way. Gaps in legal systems permit anyone to misuse another person's digital likeness. This imagined world of chaos is not far from our reality, where the harm already caused by deepfakes is only the tip of the iceberg.
With the advancements in artificial intelligence, the chances of harm caused by fabricated videos are higher than ever. Such fake videos are created with deepfake technology. If you have ever come across fabricated content that looked real yet was hard to believe, you may have been exposed to a deepfake. A deepfake is a video, audio clip, or picture manipulated using deep learning, a branch of AI. The word "deepfake" was first used in 2017 by a Reddit user who superimposed celebrities' faces onto pornographic content using deep learning. With the availability of more computing power over the years, machine learning algorithms have become more and more sophisticated, increasing the quality of deepfakes. Above all, generative adversarial networks have fueled the evolution of deepfakes: a generator network fabricates samples while a discriminator network learns to tell them apart from real data, and the two improve by competing against each other.
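The adversarial idea behind these networks can be sketched in a few lines. The toy example below is purely illustrative (all names, values, and the one-dimensional setup are assumptions, not taken from any real deepfake system): a one-parameter "generator" tries to fool a logistic "discriminator" into accepting its samples as draws from the real data distribution. Real GANs follow the same training loop, but with deep networks over images.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    t = max(-30.0, min(30.0, t))      # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-t))

# "Real" data: a 1-D Gaussian centred at 4.0 (a stand-in for real images).
REAL_MEAN, REAL_STD = 4.0, 0.5

wg, bg = 0.1, 0.0   # generator: affine map from noise z ~ N(0, 1) to a sample
wd, bd = 0.1, 0.0   # discriminator: logistic regression on a scalar sample

lr = 0.05
for step in range(3000):
    z = random.gauss(0.0, 1.0)
    x_fake = wg * z + bg
    x_real = random.gauss(REAL_MEAN, REAL_STD)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # using the gradients of -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    wd -= lr * (-(1.0 - d_real) * x_real + d_fake * x_fake)
    bd -= lr * (-(1.0 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss),
    # back-propagating through the discriminator into the generator.
    d_fake = sigmoid(wd * x_fake + bd)
    g_x = -(1.0 - d_fake) * wd        # gradient of -log D(fake) w.r.t. x_fake
    wg -= lr * g_x * z
    bg -= lr * g_x

print(f"mean of generated samples: {bg:.2f} (real data mean: {REAL_MEAN})")
```

After training, the generator's output drifts toward the real data's distribution, which is exactly the dynamic that lets GAN-based tools produce faces the discriminator (and eventually a human) cannot distinguish from real ones.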
Like other technologies, deepfake technology can be used for both good and evil purposes. Regarding the betterment of mankind, it has use cases in industries such as healthcare and entertainment. During the coronavirus pandemic, it was difficult to diagnose the diseases arising from coronavirus infection because of a shortage of X-ray, CT, and MRI images and of the resources needed to determine whether a patient had the disease. Here deepfake technology proved useful: computer scientists first produced synthetic medical images with the help of artificial intelligence and used them to train diagnostic models. The trained models could then compare the learned patterns with a patient's images to help diagnose whether he or she had the disease.
Moreover, training AI models on real people's data can create privacy concerns and accuracy problems. To tackle these challenges, realistic synthetic data can be produced using deepfake technology. Deepfakes have even been used to warn against their own dangers. In 2019, Canny AI, an Israeli startup, created a doctored video of Facebook's CEO Mark Zuckerberg saying, "Imagine a man controlling billions of people's data and thus their lives and future." Shockingly, the video was nearly indistinguishable from real footage. It was made by applying deepfake technology to 2017 footage of Zuckerberg, and it was created to raise awareness about the harm deepfakes can cause in society.
Deepfakes can have a severe impact on the public. Through such content, bad actors can spread misinformation to serve their own ends, whether that means illegal financial gain, generating more clicks, or igniting social unrest by misleading the masses. One such incident occurred recently when a manipulated video of Elon Musk promoting a new cryptocurrency spread through social media for someone's financial gain. Many viewers, particularly in Europe, invested heavily on the strength of the video, causing a major swing in the crypto's price. In countries like Pakistan, where more than half of the population is illiterate and only a small fraction of the literate have technical knowledge, the odds of havoc are even higher.
Politically motivated deepfakes can also pose an ominous danger to today's democratic landscape. Malicious actors can use them to sway public opinion about a specific politician. For instance, a fake video of US politician Nancy Pelosi went viral on social media in which she appeared to be drunk. Earlier this year, nearly 25,000 robocalls were made to residents of New Hampshire in which a fake voice of Joe Biden told them not to vote in the primary election and to save their vote for the general election. And just as such videos can tarnish the reputation of a political figure, a politician can also dismiss an authentic video of his own illegal act as a deepfake. In these ways, even the greatest democracies can be targeted by deepfake technology.
Recent innovations in artificial intelligence have added fuel to the fire. Producing a close-to-real deepfake used to take days, until OpenAI's text-to-video model, Sora, was launched. Sora is a highly capable tool that can create realistic videos that are almost indistinguishable from genuine footage. Another problem is the broad availability of such AI tools: anyone, anywhere in the world, can generate content with them. As the technology grows more sophisticated with time, the present chaos caused by deepfakes may be only the beginning.
Identifying deepfakes is a daunting task for those without technical knowledge. Nonetheless, various methods exist for spotting these deceptive digital creations. Foremost, developing a zero-trust mindset is important: never trust anything without verifying it. Fake videos often show telltale signs, including inconsistencies in skin texture and body parts, poor synchronization between lip movement and voice, abnormal blinking patterns, and unusual facial expressions. With the increasing sophistication of generative AI models, discerning deepfakes by eye is becoming increasingly challenging, so using technological detection systems is also necessary.
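As a toy illustration of how one of those telltale signs can be screened automatically, the sketch below flags clips whose blink rate falls outside a typical human range (roughly 15-20 blinks per minute; early deepfakes often blinked far less). The function name and thresholds are assumptions made for illustration only; real detection systems rely on trained models, not a single hand-set cutoff.

```python
def blink_rate_suspicious(blink_timestamps_s, video_length_s,
                          low=8.0, high=40.0):
    """Flag a clip whose blinks-per-minute falls outside a plausible band.

    blink_timestamps_s: times (seconds) at which blinks were detected,
    e.g. by an upstream eye-landmark tracker (not implemented here).
    """
    if video_length_s <= 0:
        raise ValueError("video length must be positive")
    per_minute = len(blink_timestamps_s) / video_length_s * 60.0
    return per_minute < low or per_minute > high

# A 60-second clip with only 3 detected blinks looks abnormal:
print(blink_rate_suspicious([5.0, 22.0, 48.0], 60.0))                # True
# 16 blinks in the same span is within the normal human range:
print(blink_rate_suspicious([i * 3.7 for i in range(16)], 60.0))     # False
```

A heuristic like this is cheap but brittle, which is why the paragraph above stresses that increasingly sophisticated generators demand learned detectors rather than fixed rules.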
Governments and big tech companies can both play a role in preventing the proliferation of deceptive content. Governments can implement legislation against the dissemination of malicious content on social media, and through social awareness and public education the harms of deepfakes can be greatly reduced. Meanwhile, technology companies can develop robust machine learning algorithms to detect and remove such content from their platforms. Additionally, investing in research and development of advanced detection algorithms and forensic tools can augment the capacity to identify and mitigate the impact of deepfakes.