The Inevitable Dangers of AI
Orion Jones
University of Maryland Global Campus (UMGC)
As artificial intelligence continues to evolve, its potential to disrupt creative and professional fields raises significant concern across the world, particularly through technologies such as voice and video cloning. These advancements enable AI to replicate human voices and appearances, and the recent introduction of text-based video prompts has sparked serious ethical dilemmas. For instance, deepfake software can fabricate realistic videos that depict individuals saying or doing things they never actually did, which has resulted in critical issues like misinformation and identity theft. A notable example was reported by the British Broadcasting Corporation (BBC): a disgruntled employee named Dazhon Darien used AI to create and disseminate a fake racist rant purportedly made by Eric Eiswert, a well-known principal in a Baltimore suburb, which "divided the community."
Spring, M. (2024, October 4). The AI clip that convinced - and divided - a Baltimore suburb. BBC News. https://www.bbc.com/news/articles/ckg9k5dv1zdo
This incident underscores a troubling trend in which recorded proof no longer guarantees authenticity, raising pressing questions about trust and accountability in media. We are reaching a point where we may no longer be able to prove that something happened without seeing it ourselves, since anyone can produce and spread convincing yet misleading content.
The implications for public perception and societal cohesion will become increasingly dire: the rapid and unchecked proliferation of AI technologies in creative sectors threatens not only to diminish the perceived value of human artistry but also to create an environment ripe for exploitation and manipulation.
This sadly applies to the elderly, who have frequently been the target of many scams. As people get older, they are less likely to keep up to date with what is currently happening, and there have been studies that theorize why. An article on the "getjubileetv" website, which I will summarize, explains: "Older adults seem to often struggle with technology due to a combination of factors: unfamiliar jargon, complex interfaces, cognitive changes, and physical challenges. These difficulties can lead to feelings of isolation and a lack of independence, which many in the older generations pride themselves on. Along with this, fears about making mistakes or falling victim to scams can further deter them from engaging with digital tools."
JubileeTV. (n.d.). Why do elderly struggle with technology? Retrieved October 4, 2024, from https://getjubileetv.com/blogs/jubileetv/why-do-elderly-struggle-with-technology
Sadly, not all elderly people question whether something online is a scam. The scam I am about to introduce is an updated one, usually originating in (but not limited to) India: a caller from a scam center impersonates a grandson, granddaughter, niece, or nephew using data scraped from data breaches, which gives them access to your full name, phone number, and family members. This has been dubbed the "ransom scam," which I have sourced from a YouTuber by the name of "Kitboga," who frequently spreads awareness of these situations by pretending to be an elderly person.
Kitboga. (2021, July 9). They Ransomed My Grandson For $10,000. YouTube. https://www.youtube.com/watch?v=dSU53vVdouw
Sadly, not every scammer's target is as well aware as he is. In a case I would like to highlight and summarize from the Canadian Broadcasting Corporation (CBC), we see the sad reality behind the matter:
"A 75-year-old woman in Regina named Jane fell victim to a grandparent scam that leveraged voice-cloning technology to impersonate her grandson. Believing her grandson was in trouble after an accident, Jane received a call requesting over seven thousand dollars for bail. The scammer, posing first as her grandson and then as a lawyer (which frequently happens, as shown in the Kitboga video), pressured her to keep the situation secret. She then went to the bank, claiming she needed the money for a car purchase (a frequent lie elderly people are told so they do not sound suspicious), and handed the cash to a man who came to her house. The next day, she received another call claiming the accident victim had suffered a miscarriage, leading her to withdraw an additional $10,000 to send by mail. It wasn't until she heard a news story about similar scams that she contacted her real grandson and realized she had been conned.
Experts, including Jonathan Anderson from Memorial University, note that scammers
can easily and cheaply clone voices using AI, requiring as little as 30 seconds of
audio. Police are investigating the scam, emphasizing that fraudsters exploit panic
and urgency to manipulate victims. They advise individuals to verify calls directly
with their loved ones and be cautious about personal information shared online.
Staff Sgt. Matthew Bradford highlighted that no legitimate legal professional would
demand money over the phone, encouraging people to take control of such situations
by confirming details independently."
Dudha, A. (2023, June 18). Fraudsters likely using AI to scam seniors, Saskatoon
police say. CBC News. https://www.cbc.ca/news/canada/saskatoon/fraudsters-likely-
using-ai-to-scam-seniors-1.6879807
In the two examples I've given, you can clearly see a pattern. The scam in the YouTube video from three years ago parallels the scam from last year in exactly the same way, but this time the scammers have more accuracy because the person they are attempting to imitate has a voice they can clone. You may no longer be able to speak freely online, because many voice-cloning programs require only "sample audio" to clone your voice. I've watched videos that go into great detail about these programs; before, they couldn't even replicate accents, but since AI is rampant in bettering itself, it has quickly overcome that limitation. One video demonstrates exactly how to clone your "own" voice; there are no measures in place to ensure that it is actually your own, and even then, implementing them would be counterproductive for the company. You can display all the disclaimers you want, but they obviously do little to deter criminals whom you would have a hard time pursuing.
Source: CNET. (2023, June 21). AI Voice Cloning Tutorial: How To Clone Your Own Voice. YouTube. https://www.youtube.com/watch?v=ddqosIpR2Mk
Amid all this fear, Microsoft has announced that its new toy, "VALL-E 2," has reached "human parity" from just a "voice source" and "text prompts," yet claims it is for "research demonstration purposes only." But what happens when another voice-cloning program reaches this point? What if this software is leaked by a disgruntled employee or exposed in a data breach? Chaos. We can only hope to live in a world where nobody has to wonder whether we actually said the words they heard.
Source: Perry, A. (2024, July 11). Microsoft’s VALL-E 2 could make AI even more
dangerous. Mashable. https://mashable.com/article/microsoft-vall-e-2-ai-dangerous?
test_uuid=01iI2GpryXngy77uIpA3Y4B&test_variant=a
As we have seen with the other issues in this landscape and how they can be potentially dangerous, the boundaries of originality and authorship blur, prompting critical discussions about the ethics of using AI-generated content. Even if you are not on the internet frequently, you can still be affected by "AI-generated books." These books have flooded marketplaces such as Amazon; some are harmless but tasteless, and I think they are a scam. What I would like to turn your attention to is AI's use in foraging books. In a thought-provoking video, a YouTuber by the name of "Atomic Shrimp" explores the dangers associated with two AI-generated books: "The Simple Forager's Ultimate Guide to Nutritious Wild Foods: Unlocking the Secrets of Wild Edibles, Lichens, and Mushrooms" and "The Forager's Guide to WILD FOODS: The Ultimate Handbook On How To Locate, Identify, Harvest and Cook Wild Edible Plants, Berries And Mushrooms." He points out various things AI tends to do, like overexaggerating a sentence. For instance, instead of "Here's how to make delicious food!" ChatGPT is more likely to say something like: "Here's how to plan, prepare and make food that is delicious, nutritious and fun, in ways that will surprise, entertain and delight your friends, family and guests." AI is very keen on overselling itself, and that is a good thing, as it gives people the know-how to recognize it when they see it. There are also long paragraphs that say a lot yet barely give any information, and the books contain repeated passages, probably from different prompts, with the exact same information. But the real killer here could genuinely kill you: the book describes the "Wild Carrot" (Daucus carota) as a "white, tapered root resembling a domestic carrot, often with feathery leaves." The issue is that despite giving a description, the book has no pictures, and many plants fit a similar description, such as "Fool's Parsley," which is toxic, and "Hemlock Water Dropwort," which could actually kill you if consumed. There are also no geographical indicators in the book; you could search for an item matching the description and find something completely toxic or poisonous.
Source: Atomic Shrimp. (2024, August 30). DANGEROUS Fake Foraging Books Scam on Amazon - Hands-On Review of AI-Generated Garbage Books. YouTube. https://www.youtube.com/watch?v=kwp_WEdJaEk
There have been cases in which foragers mistake one plant or fungus for another. For instance, a British Broadcasting Corporation (BBC) article details the matter: "A tragic incident in Australia involved a chef, Liu Jun, who prepared a dinner for colleagues on New Year's Eve at the Harmonie German Club in Canberra, using toxic death cap mushrooms. Both Liu, 38, and kitchen hand Tsou Hsiang, 52, died in Sydney hospitals. Another colleague was treated but has since been discharged, while a fourth guest did not consume the mushrooms. Health officials confirmed that the bistro is not a public health risk, as the meal was not part of the regular menu.
Liu likely mistook the poisonous mushrooms for the edible Paddy Straw variety. The incident has left the local Chinese community in shock, especially since Liu was working in Australia to support his family in China, whom he hadn't seen in years. He left behind an 11-year-old daughter and a seven-year-old son."
Source: BBC News. (2012, January 6). Deadly mushrooms cooked by chef for co-workers. https://www.bbc.com/news/world-asia-16437195
I picked this source because a single mistake can cost you everything; depending on an AI-generated book is without a doubt dangerous.
Ultimately, the rapid advancement of artificial intelligence poses profound challenges across various walks of life, from the creative arts to personal safety. As evidenced by recent incidents involving voice cloning and misinformation, the potential for exploitation and manipulation is shockingly high, and this risk will only escalate as technology continues to evolve. The multiple cases of scams targeting vulnerable populations, particularly the elderly, underscore the urgent need for vigilance and education about these technologies. If you have an elderly person in your life, the best step you can take is to educate them about the potential threats and the strategies for identifying these scams. YouTubers like Kitboga have made significant contributions to spreading awareness and providing practical advice, showcasing the importance of community support in combating these modern threats.
Moreover, the use of AI in generating content, especially in sensitive areas such as foraging, raises significant ethical concerns. Misinformation in these contexts can lead to life-threatening, or even life-ending, consequences. The potential dangers associated with AI-generated foraging books, which often leave out critical identification features, serve as a stark reminder of this risk: the line between authentic knowledge and dangerous misinformation has blurred, emphasizing the necessity for careful scrutiny and informed decision-making. In an age where anyone can publish content, distinguishing credible sources from dubious ones becomes increasingly challenging. As society integrates AI technologies into daily life, we see this trend exemplified by the incorporation of AI features in devices like smartphones. The iPhone, for instance, has integrated AI functionalities that have contributed to Apple's status as one of the largest tech companies in the world. However, as we embrace these advancements, it is imperative that we prioritize ethical considerations and accountability. This means implementing robust safeguards against misuse and promoting practices that protect individuals from potential harm.
Encouraging media literacy is essential in this context. People must be equipped with the tools to critically evaluate the information they encounter and to analyze the credibility of sources. Fostering an understanding of AI tools will be crucial in mitigating the risks they pose.