
Guevara 1

Shawn Guevara

Professor Ferrara

English 1001

17 November 2023

Rhetorical Analysis:

Over the past few decades, AI and machine learning have been developing at a rapid pace. As a result, the costs and carbon footprint of these applications and machines have skyrocketed. But what does this mean for us humans? What about the planet? In the TED Talk "AI Is Dangerous, but Not for the Reasons You Think," Sasha Luccioni, an AI researcher with over a decade of experience, discusses the strengths and weaknesses of AI. She describes the situation this way: "AI doesn't exist in a vacuum, it is part of society" (1:27-1:31). It is because of this that problems can form and develop quickly if we do not monitor the growth of artificial intelligence closely. Luccioni believes that we should start "tracking its impacts and being transparent" so that AI can be better understood (1:48-1:53).

Luccioni has been a part of numerous AI research projects at organizations such as Nuance Communications, Morgan Stanley's AI/ML Center of Excellence, the BigScience BLOOM project, and more. She holds a Ph.D. in Cognitive Computing as well as a Master of Science in Cognitive Science. Currently, she is the Climate Lead and an AI researcher at Hugging Face, a machine learning platform and community. Her line of work concerns the intersection of artificial intelligence and climate change. For these reasons, she is considered by many to be a credible source for AI and machine learning research.

Luccioni shares her experiences working collaboratively on BLOOM, a large language model created by the BigScience initiative. Like other AI chatbots such as ChatGPT, BLOOM is designed to generate text in response to users' ideas and prompts (2:20-2:28). She learned that training the model alone required as much energy as 30 homes consume in an entire year (2:41-2:47). This raises the question of whether this pathway of growth for AI is worth the damage to our planet. Interestingly, the better-known ChatGPT model emitted more than twenty times as much as BLOOM during its training. Luccioni says that "this is only the tip of the iceberg," since tech companies are not monitoring and disclosing this information, which leads most people to believe that this is only a small percentage of the entire carbon footprint created by AI and machine learning companies (3:07-3:10).

While new technology is being developed and released every day, the environmental costs are piling up quickly. This is because the current trend in AI is, as Luccioni describes it, "bigger is better" (3:17-3:18). What she means is that if a company builds a narrow, specialized AI system focused on, say, everything there is to know about plastic pollution, or limits its software to only a few languages, it allows competitors who are chasing the bigger picture to gain an advantage. After all, why have thousands of companies specializing in one field of knowledge when a single company can succeed in all of them and take in all the customers? This is why model sizes have skyrocketed in the past few years, with companies such as OpenAI (the creator of ChatGPT), Microsoft, and Google leading the way (3:16-3:20).

However, in the process of developing and training AI, some ill-natured practices can arise. One of these is copyright infringement: using someone else's work as your own without permission or credit. This unfortunately happens more often than not, and media such as digital artwork created by real people is being used to train AI without the owners' consent or knowledge. This has led to class-action lawsuits for copyright infringement. In this section, Luccioni shares details of a real case involving an artist named Karla Ortiz, whose artwork was included in the LAION-5B dataset and used to train AI without her permission. Luccioni also shares that Ortiz and two other artists used this evidence to file a class-action lawsuit (5:37-5:52).

Additionally, Luccioni brings up the topic of bias. She builds on this by explaining how "AI models build patterns and beliefs that can represent stereotypes, racism, and sexism" (6:23-6:28). Facial recognition AI systems, although relatively new, are already trusted in law enforcement settings. These systems have struggled to identify people's faces, especially those of people of color. Luccioni uses the example of Joy Buolamwini, a computer scientist who has also given a TED Talk, and her experiences with facial recognition systems. Buolamwini noted that some AI facial recognition systems would not even detect her face unless she wore a white mask (6:33-6:37). Despite these flaws, facial recognition is already being used to identify criminal suspects and has even led to false imprisonment. For example, Luccioni describes a case in which a woman named Porcha Woodruff was wrongly arrested for carjacking while she was eight months pregnant, all because she was misidentified by an AI system (6:59-7:06). This indicates that artificial intelligence should not be depended on this early in its development.

One last thing worth mentioning is the importance of what Luccioni says near the end of the talk: "It's really important that AI stays accessible so that we know both how it works, and when it doesn't work" (9:02-9:11). What she means is that it is important to set systems in place to monitor the flaws of AI, as mentioned earlier. The difference now, nearing the end of the TED Talk, is that the point carries more weight for the audience, because Luccioni has spent the talk informing us about the growth of AI along with its weaknesses. If AI companies began to incorporate such monitoring systems into their software, they could begin developing solutions to bias, climate impact, and copyright infringement. In this way, not only would their AI benefit, but so would the general public's understanding of AI. With this in mind, we can hopefully build toward low-emission, reliable, and credible AI in the near future.

Works Cited

Luccioni, Sasha. "AI Is Dangerous, but Not for the Reasons You Think." TED,

https://www.ted.com/talks/sasha_luccioni_ai_is_dangerous_but_not_for_the_reasons_you_think
