Round 4 Aff

The document argues that the development of Artificial General Intelligence (AGI) is immoral, emphasizing the importance of morality in preventing structural violence and advocating for the inclusion of marginalized populations. It highlights the discriminatory practices inherent in AI, particularly against gender, race, and sexual orientation, and critiques the outdated eugenic concepts that underpin AI research. The text calls for a critical examination of how AI perpetuates biases and the need for ethical considerations in its development and application.


I affirm: the development of Artificial General Intelligence is immoral.

First, a definition:
Pascal and Lazar 24 [Alexander Pascal, Senior Fellow at the Ash Center for Democratic Governance and Innovation and a
Professor of Practice at the Fletcher School of Law and Diplomacy at Tufts University, and Seth Lazar, a Professor of Philosophy at the Australian
National University, Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI, fellow of the Carnegie Endowment for
International Peace, and a Senior AI Advisor to the Knight First Amendment Institute at Columbia University, 3-28-2024, “AGI and Democracy,”
Harvard ASH Center for Democratic Governance and Innovation, https://ash.harvard.edu/resources/agi-and-democracy/]/Kankee

Defining AGI. But what even is AGI? Defining it sometimes feels like pinning Jell-O to a wall. But as progress accelerates, something like a consensus is emerging. Synthesising a vast literature, we can say that AGI would be a nonbiological computational system that can perform any cognitive function currently performed by humans at the level of the median human or better (acknowledging the crude quantification this implies). Google DeepMind’s recent paper mentions “linguistic intelligence, mathematical and logical reasoning, spatial reasoning, interpersonal and intra-personal social intelligences, the ability to learn new skills and creativity.” Other AI researchers would also add instrumental rationality, causal reasoning, tool use, and at least some ability to distinguish truth from falsehood. OpenAI calls it, simply, “AI systems that are generally smarter than humans.”

FW
The value for this debate is morality, as that is the resolution's focus. We should
determine morality through a criterion of preventing structural violence.
1] In-group and out-group dynamics are a prerequisite to moral consideration; resisting structural violence is key to upholding justice.
Winter and Leighton 99 [Deborah DuNann Winter and Dana C. Leighton. Winter: psychologist specializing in social psychology, counseling psychology, historical and contemporary issues, and peace psychology. Leighton: PhD graduate student in the Psychology Department at the University of Arkansas. “Peace, conflict, and violence: Peace psychology in the 21st century.” 1999] kp

Finally, to recognize the operation of structural violence forces us to ask questions about how and why we tolerate it, questions which often have painful answers for the privileged elite who unconsciously support it. A final question of this section is how and why we allow ourselves to be so oblivious to structural violence. Susan Opotow offers an intriguing set of answers, in her article Social Injustice. She argues that our normal perceptual/cognitive processes divide people into in-groups and out-groups. Those outside our group lie outside our scope of justice. Injustice that would be instantaneously confronted if it occurred to someone we love or know is barely noticed if it occurs to strangers or those who are invisible or irrelevant. We do not seem to be able to open our minds and our hearts to everyone, so we draw conceptual lines between those who are in and out of our moral circle. Those who fall outside are morally excluded, and become either invisible, or demeaned in some way so that we do not have to acknowledge the injustice they suffer. Moral exclusion is a human failing, but Opotow argues convincingly that it is an outcome of everyday social cognition. To reduce its nefarious effects, we must be vigilant in noticing and listening to oppressed, invisible, outsiders. Inclusionary thinking can be fostered by relationships, communication, and appreciation of diversity. Like Opotow, all the authors in this section point out that structural violence is not inevitable if we become aware of its operation, and build systematic ways to mitigate its effects. Learning about structural violence may be discouraging, overwhelming, or maddening, but these papers encourage us to step beyond guilt and anger, and begin to think about how to reduce structural violence. All the authors in this section note that the same structures (such as global communication and normal social cognition) which feed structural violence, can also be used to empower citizens to reduce it. In the long run, reducing structural violence by reclaiming neighborhoods, demanding social justice and living wages, providing prenatal care, alleviating sexism, and celebrating local cultures, will be our most surefooted path to building lasting peace.

2] You have a moral obligation to prioritize the slow violence and everyday war
against disenfranchised populations.
Hunt 18 - (Dallas Hunt, PhD Candidate, University of British Columbia, Canada. Chapter 10, “Of course they count, but not right now”: Regulating precarity in Lee Maracle’s Ravensong and Celia’s Song, in Biopolitical Disaster, edited by Jennifer L. Lawrence and Sarah Marie Wiebe, 2018, Routledge, JKS)
“There is a hierarchy to care”: theoretical concerns and applications

In Frames of War (an extension and preoccupation with similar issues she outlines in her text Precarious Life), Judith Butler focuses on the ways
in which particular, violent perceptions of everyday life are normalized and propagated as
legible or granted “intelligibility” (through numbers, statistics, etc.). According to Butler, Frames of War follows on from Precarious Life ...
especially its suggestion that specific lives cannot be apprehended as living. If certain lives do not qualify as lives or are, from the start, not conceivable as lives within certain epistemological frames, then these lives are
never lived nor lost in the full sense. (2010: 1) For Butler, then, a primary concern is how these intelligibilities allow “a state to wage its
wars without instigating a popular revolt” (xvi). Although Butler is writing within the context of the Iraq War and the “War on Terror,” her
insights on precarity and modes of state violence exceed their immediate relevance. Indeed, as is clear below, the notions of war and settler-
colonialism and the biopolitical rationalities they allow are eminently applicable to a local, Canadian context. The frames of war, Butler argues,
are not circumscribed to combat zones with the mobilization of weapons. Instead, to Butler, “perceptual weapons” are acting on
populations consistently to naturalize violences and enlist citizens to tacitly consent to (and, in some cases,
actively participate in) violent forms that authorize dehumanization: “[w]aging war ... begins with the assault on the senses; the senses are the
first target of war” (xvi). These perceptual violences resonate with Rob Nixon’s formulation of “slow violence” as well. To Nixon, slow
violence is “a violence that occurs gradually and out of sight, a violence of delayed destruction that is dispersed across time
and space, an attritional violence that is typically not viewed as violence at all” (2011: 3). Further, and “[c]rucially,
slow violence is often not just attritional but also exponential, operating as a major threat multiplier; it can fuel long-term, proliferating conflicts
in situations where the conditions for sustaining life become increasingly but gradually degraded” (4). Conditioning the senses or what is
intelligible, then, functions as the way in which state violences are legitimized, as the frames of war dictate the “sensuous parameters of reality
itself” (ix). According to Butler, the task at hand is not only to “understand ... these frames, where they come from and what kind of action they
perform” (2010: 83), but also to find and articulate “those modes of representation and appearance that allow the claim of life to be made and
heard” (81). While Butler is examining conditions of precarity, (in)security, and disposability in the context of “the War on Terror,” and
Palestine–Israel, her examination of an imperial/ colonial power exerting force and enacting violence on vulnerable and racialized populations
(and in the process producing and reproducing these vulnerable populations) can be fruitfully employed in the Canadian context, though not
without some alteration. Although we may not perceive the more mundane, i.e. non-military, violences visited upon Indigenous communities as
“war” strictly speaking, Sora Han’s oft-cited phrase that we must think of the United States (and settler-colonial nations more broadly) not “at
war” but “as war” is useful here (cited in Simpson 2014: 153, emphasis in original). If we view the biopolitical management of Indigenous populations and Indigenous territories as rationalities rooted in the organizing frame of settler-colonialism, then the states of emergency
putatively thought to be produced through war are “structural, not eventful” – that is to say, war is the very condition of settler-colonialism and
not a by-product of it (154). Indeed, the largest ever domestic deployment of military forces in North America took place within Canada, in the
context of the so-called “Oka crisis.” As Audra Simpson writes, the “highest number of troops in the history of Indigenous-settler relations in
North America was deployed to Kanehsatà:ke, as this was the most unambiguous form of exceptional relations, that of warfare. There were
2,650 soldiers deployed...” (2014: 152). And, as Roxanne Dunbar-Ortiz and others have noted, Western imperial powers still refer to “enemy
territories” abroad as “Indian Country” and to “wanted terrorists” as “Geronimo” (2014: 56). I follow the lineages of these Indigenous theorists
who view settler-colonialism as a kind of permanent war, drawing parallels between the so-called everyday violences (displacement, sexual
violence) inflicted upon Indigenous peoples in the US and Canada and the death-delivering reaches of empire embodied by the West more
globally. Or, to echo Mink, the transformer/shapeshifter narrating the events in Maracle’s Celia’s Song: “This is war” (2014: 9). For Butler,
there are varying tactics for distributing “precarity” differently, or what she describes as “that politically induced condition in which certain
populations suffer from failing social and economic networks of support,” producing a “maximized precariousness for populations ... who often
have no other option than to appeal to the very state from which they need protection” (2010: 26). In the depictions provided in her writing, as well as that of Maracle, violence is deployed not only as “an effort to minimize precariousness for some and to maximize it for others,” but
also as a mode of shaping the perceptions of citizens in order to make such acts legible, and hence, in a sense justifiable (Butler 2010: 54).
Ultimately what Butler is advocating for is a new ethico-political orientation, one with the potential to disrupt the violent regimes of the
sensible, as well as the ways in which precarity is currently allocated and distributed. Paraphrasing Jacques Rancière, Jeff Derksen also
advocates for political movements that disrupt “regimes of the sensible”: “a politics of the aesthetic could ... redistribute and rethink the
possibility of the subject (potentially an isolated figure) within the present and within a community to come” (2009: 73). In sum, Butler’s text
illustrates the ways in which State-sanctioned (and induced) precarity “perpetuates a way of dividing lives into those that are worth defending,
valuing, and grieving when they are lost, and those that are not quite lives” (2010: 42), as well as the resistive practices that might disrupt the
naturalization of “differential distribution[s] of precarity” (xxv). The remainder of the chapter considers to what extent Maracle’s texts offer
such a disruption of the mundane frames of settler-colonial war within the context of an exceptional moment (an epidemic), and asks how her
work gestures toward the alternatives that might be offered by Indigenous frames.

My sole contention is eugenics


AI use discriminates based on gender and sexuality: the discrimination is inherent and irreparable.
Segal 21 (Mark Segal is an American journalist. He is the founder and publisher of Philadelphia Gay News and has won numerous
journalism awards for his column "Mark My Words," including best column by The National Newspaper Association, Suburban Newspaper
Association and The Society of Professional Journalists, Philadelphia Gay News 03-21-2021, “The homophobia of artificial intelligence”, accessed
7/17/2022, https://epgn.com/2021/03/24/the-homophobia-of-artificial-intelligence/)//sfs
We are all now being profiled by A.I. If you’re on social media, most likely part of your profile includes
your face for facial recognition. Facial recognition is already being used by corporations, apartment
buildings and even some airports in the U.S. for security. What if this new technology doesn’t recognize
you, or what if it recognizes you as LGBT? Joy Buolamwini, an M.I.T. student working on an A.I. facial
recognition project, created a program using A.I. that, when you looked in the mirror in the morning,
would show another face that might give you a smile or inspiration, similar to a filter. But the mirror
didn’t recognize her. She did discover, unfortunately, that the mirror worked if she wore a white mask.
Buolamwini is Black, and what she discovered was that there was discrimination in A.I. This led her to
realize that as facial recognition becomes more widely used it would discriminate against dark skinned
people. The basic point is that the code for the program can only be as good as those
writing it, and sometimes those writing the code write in their own prejudices.
How serious of an issue is this? Amazon discovered that they were getting too many resumes to keep
up with by humans, so they brought in an A.I. program to read them and discover which
applicants should receive an in-person interview. Someone discovered along the way that
only a small percentage of women were being recommended by the algorithm, and if
you attended a women’s college or played women’s sports, you didn’t get the
interview at all. That’s gender discrimination… done by a computer program, written by a computer
programmer who might not even realize they are writing in their own biases. So now that we know this
new technology discriminates by race and gender, can it not also discriminate against you
as an LGBT person? Yes, for the same reason. People writing those codes
sometimes do not know their own prejudices. We’ve seen A.I. discriminate against people
by their race and by their gender. How long, do you think, before A.I. discriminates based on sexual
orientation and gender identity? How long will it be before A.I. is used to reject LGBT applicants from
jobs? Or how long it will be before A.I. is used to identify LGBT people in countries where being LGBT is
criminalized? There may be an answer to that question. A Stanford University study of
artificial intelligence utilizing a facial recognition algorithm can better choose
whether an individual is LGBT than people can. The study is controversial in the tech
world, mostly due to those who suggest that facial recognition is based solely on a face structure and
don’t take into account that many facial recognition programs add the information
about those faces that they received from other sources like social media. Looking at
your social media, buying habits and info you’ve given to surveys and employers, do you believe all that
material will not point to who you are? Welcome to the future.
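
To make the mechanism Segal describes concrete, here is a minimal sketch of how a resume screener trained on historically biased hiring decisions reproduces that bias. The data, feature names, and model are invented for illustration and assume a generic scikit-learn classifier; this is not Amazon's actual system.

```python
# Illustrative sketch only: synthetic data, hypothetical features, generic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One feature that genuinely predicts job performance, and one proxy feature
# ("attended a women's college") that says nothing about ability.
skill = rng.normal(size=n)
womens_college = rng.binomial(1, 0.15, size=n)

# Historical hiring decisions were biased: equally skilled candidates flagged
# by the proxy were hired less often, purely because of past discrimination.
p_hire = 1 / (1 + np.exp(-(skill - 1.5 * womens_college)))
hired = rng.binomial(1, p_hire)

# Train a screening model on that biased record.
X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, womens_college]:", model.coef_[0])
# The second weight comes out strongly negative: the model has "learned" to
# penalize candidates from women's colleges, automating the historical bias
# without any programmer deliberately writing prejudice into the code.
```

The point of the sketch is that the prejudice never has to be coded explicitly; it rides in on the training data, which is why the card describes programmers who "might not even realize they are writing in their own biases."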

AI research is founded upon outdated eugenic conceptions of intelligence that privilege male whiteness.
Stovall 21 [Natasha Stovall, clinical psychologist with a PhD from Adelphi University, 3-24-2021, "Eugenics Powers IQ and AI", Public
Books, https://www.publicbooks.org/eugenics-powers-iq-and-ai/]/kp

In another era of technological innovation and widening inequality, when engineering to revolutionize society was also in vogue, an eerily
familiar cadre of mad tinkerers unleashed their vision for humanity to devastating effect. As anti-Black and anti-immigrant violence shook the
US, these earlier “disruptors” popularized the notion of “intelligence”: the idea that all humans have innate and fixed abilities that can be
accurately assessed, measured, and used to categorize people for a lifetime. Like the wizards of tech today, these disruptors were Very
Smart People, their self-esteem and economic viability reinforced, often since “gifted” childhoods, by over-indexing on traits like verbal
comprehension, perceptual reasoning, processing speed, and working memory. Unquestioningly devoted to their defiantly Eurocentric
perception of the world, these white men implanted their reductive definition of human ability into the cores of the intelligence tests they
created and spread through American society. As culturally specific and scientifically invalid as their definition has been judged in the
intervening century, this idea of “intelligence” continues to be bought and sold as the essence of human reason, the one that defines human
potential and capability through the measurement of “IQ.” Today’s tech industry is the golden child of this “intelligence.” And as sweet as it is to possess the diagnosis and paycheck of a Very Smart Person, there is no denying that the Very Smart definition of “intelligence”—like the DNA of tech itself—is deeply intertwined with the white-ethnic domination
championed throughout the sciences in the 19th and 20th centuries. As we hurtle, ever faster, into a future shaped and slicked over by artificial
intelligence, don’t take your eye off this weight that bears us ever back into the violent racial contradictions of our past and present.
The weight is white, embedded in every supporting document of the institutions where white power and privilege reside. This weight
distorts every turn, every code, every byte of the aggregated and calibrated store-bought “intelligence” that streams hourly through our
fantasy of liberatory technology and self-cleaning robotic reality. The weight is carried in invisible backpacks by bots stocking our shelves and
cleaning our hospitals, algorhythming our shopping lists and harvesting our foods, measuring out our flesh and blood in units of “value” and
“capacity.” This weight demands—and guarantees—that our technologies, however clever, never realize their promises of shiny, efficient social equality. This racist weight—putting its thumb on the scale of every facet of our technology—feels eternal.
But it is actually new and malleable, created in a strange, recent, forgotten, and denied history. It is a racist history, yet one that we carry
forward—silently—within one of society’s core concepts: an “ intelligence” that fetishizes speed, efficiency,
and innovation; a narrow, hierarchical “intelligence” that only pays lip service to the “soft skills” that might better cultivate
equity—empathy, creativity, communication, collaboration, altruism. The idea that this “intelligence” (encompassing discrete capacities to sort,
categorize, process, and remember verbal and visual information as accurately and quickly as possible) is a universal and essential human trait,
then, goes without saying. It is eugenics that secretly sits at the heart of both IQ and AI. This skeletal rendition of “intelligence,” crucially, entitles and empowers Very Smart People to revolutionize society according to their own needs and
whims, whether political, social, economic, or even emotional. Lost in this neatly reductive understanding is the degree to which this
“intelligence” is not nearly as scientifically valid—nor as essential to the survival of humanity—as its proponents would like us to believe. The
study of “intelligence” emerged during a time like our own, another Saturn-Pluto synod. Then as now, global capitalistic pressures and conflicts
were boiling, and white panic in the face of movements for racial justice and reintegration was creating chaos in the white American soul and
polity. Like tech, the study of intelligence was birthed from a tricky, shape-shifting alliance between archetypical white heroes and villains: nerdy innovators, earnest “helpers,” ruthless social Darwinists angling for power and wealth, and
various amalgams of the three. To a powerful segment of 20th-century (and 21st-century) psychologists and other scientists—and their
institutional benefactors—the idea that every human has a comparable innate intelligence, which could be developed through effort and
appropriate instruction, was anathema to these elites and their goals for American society. The last century of European and American scientific
study of intelligence is synonymous with the championing of the superiority of white intelligence. The project of using emerging
statistical and psychometric methods to reinforce the racialized notion that all humans are not created equal became eugenics. Its influence over our 21st-century self-image of intelligence and humanity is profound but also hidden. It is eugenics that secretly sits at the heart of not only IQ but AI. Unless there are radical changes, the
next century will bring the championing of the superiority of artificial white
intelligence, and the reification of its power. The father of Silicon Valley founder Frederick Terman was one of the most
celebrated psychologists in American history, Lewis Terman. From his half-century perch at Stanford University, the elder Terman vigorously
injected the idea of a measurable, scientifically validated, racialized “intelligence” into the American consciousness. Terman’s lifetime of
writings and advocacy makes clear the racist underpinnings of his ideas. He was deeply invested in a fantasy of hierarchical intelligence in which
“gifted,” mostly white children are groomed for leadership and influence, and everyone else is slotted into supporting roles in the industrial
machine. Intelligence, as Terman conceived it in 1916, is narrow, fixed, hereditary, and variable across ethnic groups, and always higher among
white and more affluent children. This notion of intelligence lives on robustly in the IQ tests and other standardized measures that
psychologists, educators, and employers use today. Despite the scientific community’s eventual rejection of overt racism—something that
Terman himself never acknowledged or accounted for—Terman’s contributions continue to shape our popular and scientific understandings of
human ability, more than half a century after his death. Terman apologists today focus on his devotion to a meritocracy built on measurable
ability. But the most statistically reliable element of Terman’s work has been the reproduction of the old paradigm of social dominance. This paradigm is encoded in a categorical bias toward the mentally
“strong” over the mentally “weak”: “white” over “black,” “native” over “immigrant,” “rich” over “poor,” “male” over “female.”
Terman’s long tenure at Stanford, and his family’s ongoing intellectual influence in Silicon Valley, extend his legacy into new questions around
artificial intelligence. And the continuing dominance of Terman’s “intelligence” forces us to ask how AI, as conceived today, could do anything but reinforce Terman’s false promises of a whites-on-top meritocracy—accompanied by just enough “diversity and inclusion” exceptions to prove the rule. Follow the template that Terman and colleagues laid down:
that human intelligence is universal, hierarchical, measurable. Next, posit that human intelligence is the superior intelligence. Should you follow
this logic—and create artificial intelligence in the image of the human brain—then what we will eternally return to is an “intelligence” that privileges whiteness. Social history is also family history. And the Terman family history is the story of how blind
whiteness can be to itself, and to its unconscious devotion to social domination coated in the gilt of grandiose altruism. Lewis Terman told himself and the world that he was using statistical
methods to understand why some children are more “gifted” than others, and thus helping them to help make the world a better place. What Terman was actually searching for was a
rationale for why he, as a white northern European man born into a family of modest means in the early 20th century, was given the keys to the kingdom. What he found was an explanation of
how he, and people like him, think, and why they believe that their way of thinking makes their people superior to all others. Armed with that information, Terman and his family transformed
the world to their own benefit, but not everyone else’s. The Terman family arc crystalizes the many ways that tech and intelligence resonate on a wide historical frequency. But, consequently,
their family history is also the key to understanding how eugenic ideas of intelligence lie in wait, ready to sabotage any attempt to undo tech’s inborn legacies of racialized bias and inequity. In
the last century, Terman’s signature Stanford-Binet Intelligence Scales have been used for identifying learning disorders; justifying the limitation, marginalization, and even termination of
thousands of lives though forced sterilization; educational and professional tracking; and restricting immigration from “undesirable” countries. Yet Terman’s own family faced no such
limitations. Indeed, Terman’s fetishization and defense of “gifted” white children was as much a personal origin story, rooted in the racially segregated social isolation he experienced as a
nerdy kid in late-1800s rural Indiana, as scientific truth. In one generation, Terman’s Scotch-Irish tenant-farmer family was unshackled and transformed, from “backward” (Terman’s word) to
affluent and influential on a scale that endures today. Even as Lewis Terman and his students were limiting the opportunities of those they considered “dull” due to “the family stocks from
which they come,” Terman’s own family was rocketing through the social and economic ranks of postwar white America. Their trajectory—and Terman’s “scientific” explanation of its basis in
“giftedness”—mapped perfectly onto the shared desires of 20th-century white Americans. What they sought was a “rational” explanation for their undemocratic urge to pull the ladder up
behind them and hoard resources and opportunities from immigrants more recently arrived than themselves. “I know of nothing in my ancestry that would have led anyone to predict for me
an intellectual career,” Terman later wrote. “A statistical study of my forebears would have suggested rather that I was destined to spend my life on a farm or as the manager of a small
business, and that my education would probably stop with high school graduation or earlier.” Terman was one of 14 children; his father had attended school only a few months a year.
Terman’s own children and grandchildren had a very different fate. Terman’s son, Fred, studied electrical engineering at Stanford, became dean of the engineering school and then university
provost, and paved the way for the creation of Silicon Valley. Terman’s grandson Lew followed in his father’s footsteps, receiving multiple engineering degrees at Stanford and spending four
decades in research at IBM, eventually becoming president of the company’s Academy of Technology. (Fred Terman publicly neither embraced nor rejected his father’s eugenicist views. Even
so, his fellow Silicon Valley founder, Nobel laureate and Stanford professor William Shockley, passionately defended eugenics until his death in 1989. Shockley’s adherence to Terman’s
eugenicist views had an ironic twist: he was tested as an elementary schooler by Terman’s researchers, but his scores were too low to qualify as “gifted.”) Terman continues to enjoy the
support of surprisingly prominent academic apologists; they defend Terman’s core theories, while maintaining that eugenicists were a product of “their time.” Despite such shameless support
in the present day, there was robust contemporary resistance to Terman’s ideas in his own time. “I hate the impudence of a claim that in fifty minutes you can judge and classify a human
being’s predestined fitness in life,” wrote journalist Walter Lippmann in 1922, debating Terman in the New Republic. “I hate the abuse of the scientific method which it involves. … I hate the
sense of superiority which it creates and the sense of inferiority which it imposes.” Lippman’s passionate critique was taken up, in the decades that followed, by psychologists and educators,
who were struck by the severe limitations of the eugenics-inspired understanding of intelligence. Unsurprisingly, conceptualizing intelligence in a way that privileges a narrow style of “gifted,”
generally white and European information processing doesn’t do much to address the naturally occurring diversity of human learning styles. Nor does such a concept rectify the economic and
racial inequities that are themselves byproducts of the biases underpinning a eugenic definition of intelligence. Starting in the 1970s, a field of countertheories of intelligence bloomed, planted
by superstar academic researchers like Howard Gardner, Robert Sternberg, and Daniel Goleman. There are now books and papers and theories galore that counter Terman’s narrow vision.
These champion emotional intelligence; triarchic intelligence; linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial, interpersonal, and intrapersonal intelligence. All these ideas
bring us closer to a more holistic and complete understanding of human capacity. And yet, in the most impactful ways, Terman and the eugenicists won the debate on intelligence. They are
still winning. Terman’s theories, and the culture of intelligence testing and measurement they inspired, dominate our practical understanding and application of human ability. And they do so
even as the aspirational language of civil rights, multiculturalism, diversity, access, and equity permeates our social discourse. The instruments and practices derived from Terman’s work—IQ
tests, standardized tests (including the SAT), Gifted and Talented programs—continue to promote and solidify a hierarchical distribution of power. This hierarchy is grounded in a definition of
human value that repetitively and predictably exalts and damns various segments of the population. We have cleansed the racial language from the study of intelligence—and eugenicists from
the history of psychology and the sciences—but left the eugenic core unscathed in theory and practice. The result is still eugenics, but without eugenicists. Just as sociologist Eduardo Bonilla-
Silva explained that “racism without racists” still reproduces a racially unjust society, “eugenics without eugenicists” produces inequitable hierarchies of power. Until there is a full recognition,
reckoning, and repair of the racist origins of intelligence theory—root and branch—any application of Terman’s theories of intelligence will inevitably achieve his desired result.
The current AGI arms race leads to corner-cutting and misalignment, which means even removal of bias fails.
Tibebu 25 [Haileleol Tibebu, assistant professor at the University of Illinois, 1-29-2025, "DeepSeek and the Race to AGI: How Global AI
Competition Puts Ethical Accountability at Risk", Tech Policy Press, https://www.techpolicy.press/deepseek-and-the-race-to-agi-how-global-ai-
competition-puts-ethical-accountability-at-risk/]/Kankee

3. The Risks of Unchecked AI Deployment. With increasing investments in AI, economic and strategic pressures drive companies to deploy AI technologies before they are fully tested. The pressure to commercialize AI often means cutting corners on ethical considerations. The focus on cost efficiency (as seen in companies like DeepSeek) may compromise safeguards against bias, misinformation, and malicious use. Without global oversight, AI models could be weaponized for disinformation, surveillance, or cyber warfare, further escalating geopolitical tensions. A sustainable AI future requires: global AI governance frameworks that ensure AI is developed and deployed ethically across all nations; transparency standards that compel AI companies to disclose their models’ risks and limitations; and international collaboration to prevent an uncontrolled AI arms race. The world is at a crossroads. The AI race between the West and China has transformed from a technological competition into a strategic battle for global influence. In this contest, ethical AI development remains an afterthought, overshadowed by national ambitions and corporate profits. The real challenge is not just in building powerful AI systems but in ensuring they align with human values. Without global cooperation and stronger ethical commitments, the consequences of unchecked AI growth could be irreversible. The pursuit of AGI must not become a reckless competition where responsibility is sacrificed for speed. The question the world must answer is not who will reach AGI first, but whether we are prepared to handle its consequences.

The impacts are twofold:


A] Digital structural violence is embedded in society, making the damage more subtle and difficult to repair. Failure to confront bias in AI makes structural violence appear ethical.
Winters, Niall, et al. 2019. “Can we avoid digital structural violence in future learning ...” Typeset.io, 25 December 2019, https://typeset.io/papers/can-we-avoid-digital-structural-violence-in-future-learning-26ri8bf9g4. Accessed December 8, 2022.


The aim of this paper is to discuss the potential negative consequences of artificial intelligence in learning, through a novel conceptualisation we term digital structural violence. We propose a research agenda on artificial intelligence (AI) in future learning systems. This is designed specifically to address the needs of the most marginalised, as they are the group who will suffer the most from digital structural violence. By the 2020s, many are predicting that AI will be commonplace in technology enhanced learning (Luckin et al., 2016) and although the future is likely to be complex, we believe that AI will likely be a very relevant consideration in the learning landscape. While predicting the future is always a fraught exercise, our examination of what digital structural violence might look like over the next 10 years is based on a concrete extrapolation from existing research on (a) digital inequality, (b) social exclusion from educational opportunities and (c) technical developments in artificial intelligence. ... However, a critical disadvantage of pattern-based approaches is that they are open to serious biases as they reflect the data they were trained with. While there is an absence of studies of bias in research related to education and technology, this has been an active area of research in other sectors: For example, studies have highlighted gender and racial biases in systems for career development and systems built to predict future criminals (e.g. Angwin et al., 2016; Kay et al., 2015), with a 2017 paper in Science reporting that datasets often contain “accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, [or] problematic as toward race or gender” (Caliskan, Bryson & Narayanan, 2017). Facial recognition software has been shown to be significantly biased towards being able to recognise male faces, and men and women with the lightest skin tones (Buolamwini and Gebru, 2018); search engine results similarly privilege white men, and discriminate against people of colour, particularly women (Noble, 2018); and there are many concerns around the digital inequalities rendered around issues of gender (e.g. the default settings of Amazon’s female assistant Alexa) (Eynon, 2018).

2. Structural violence. “Structural violence” is a term coined by Galtung (1969) to refer to the structural and institutional constraints placed on (marginalised) people, which prevent them from living the life they value. Any social structure, be it economic, political, educational or cultural, which prevents individuals from expressing themselves and developing their full potential, can accordingly be conceptualized as structural violence. Embedded in social structures, such violence is often normalised by stable institutions and reproduced within established cultural practices rooted in a range of political, legal and economic systems (DuNann Winter & Leighton, 2001). At its most extreme, structural violence leads to significant suffering and death through, for example, limitations placed on access to medicine or healthcare. However, violence can be embedded in a wide range of social structures that lead to varied forms of discrimination and disadvantage related to, for example, race, gender, age, socio-economic status (SES), geographical location etc. As argued by DuNann Winter and Leighton, structural violence produces suffering just as much as direct violence, but the ‘damage is slower, more subtle, more common, and more difficult to repair’.

3. New approaches to tackling digital structural violence. It is possible that as AI becomes increasingly normalised, digital structural violence will not be seen as deviant but instead will be “defined as moral in the service of conventional norms and material interests” (Bourgois and Scheper-Hughes, 2004, p. 318). It is therefore important that as researchers and practitioners, we confront the challenges of contributing to building an alternative future, where the disadvantages unleashed by digital structural violence are addressed as early as possible. In order to examine how we build the material and conceptual resources to combat digital structural violence, we suggest three potential avenues of future research: 1. Use the concept of epistemic privilege to theorise the inclusion of marginalised learners in the design of learning systems. To pragmatically support this, we suggest using participatory action research and emancipatory methodologies to pragmatically ensure this happens; 2. Support young learners and teachers to understand and build their own artificial intelligence algorithms; 3. Develop sustainable interdisciplinary links to computer scientists to address digital structural violence at the algorithmic level and to make the societal implications and

B] Racist AI exacerbates police brutality and inequality


Bailey et al., 20 (Jane Bailey, Jacquelyn Burkell, and Valerie Steeves, 9-2-2020, accessed on 7-20-2022, RSC College of New Scholars, "AI Technologies - Like Police Facial Recognition - Discriminate Against People of Colour | The Royal Society of Canada", https://rsc-src.ca/en/voices/ai-technologies-like-police-facial-recognition-discriminate-against-people-colour) kp
Predictive policing uses algorithmic processing of historical data to predict when and where new
crimes are likely to occur, assigns police resources accordingly and embeds enhanced police
surveillance into communities, usually in lower-income and racialized neighbourhoods. This
increases the chances that any criminal activity — including less serious criminal
activity that might otherwise prompt no police response — will be detected and punished,
ultimately limiting the life chances of the people who live within that environment. And the evidence of
inequities in other sectors continues to mount. Hundreds of students in the United Kingdom protested on Aug. 16 against the disastrous results
of Ofqual, a flawed algorithm the U.K. government used to determine which students would qualify for university. In 2019, Facebook’s microtargeting ad service helped dozens of public and private sector employers exclude
people from receiving job ads on the basis of age and gender. Research conducted by
ProPublica has documented race-based price discrimination for online products. And search
engines regularly produce racist and sexist results. These outcomes matter
because they perpetuate and deepen pre-existing inequalities based on
characteristics like race, gender and age. They also matter because they deeply affect
how we come to know ourselves and the world around us, sometimes by pre-selecting
the information we receive in ways that reinforce stereotypical perceptions. Even technology companies
themselves acknowledge the urgency of stopping algorithms from perpetuating discrimination. To date the success of ad hoc investigations,
conducted by the tech companies themselves, has been inconsistent. Occasionally, corporations involved in producing discriminatory systems
withdraw them from the market, such as when Clearview AI announced it would no longer offer facial recognition technology in Canada. But
often such decisions result from regulatory scrutiny or public outcry only after members of equality-seeking communities have already been
harmed. It’s time to give our regulatory institutions the tools they need to address the problem. Simple privacy protections that hinge on
obtaining individual consent to enable data to be captured and repurposed by companies cannot be separated from the discriminatory
outcomes of that use. This is especially true in an era when most of us (including technology companies themselves) cannot fully understand
what algorithms do or why they produce specific results.
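
To illustrate the feedback loop Bailey, Burkell, and Steeves describe, the toy simulation below uses invented numbers and a deliberately simple allocation rule (patrol wherever recorded crime is highest, and record crime only where patrols are present); it is not a model of any real predictive policing product, but it shows how a small initial recording bias hardens into permanent over-surveillance of one neighbourhood even when underlying crime rates are identical.

```python
# Illustrative assumptions only: two neighbourhoods with the same true crime
# rate, but a historical record that slightly over-counts neighbourhood A.
import numpy as np

rng = np.random.default_rng(1)
true_rate = [0.05, 0.05]   # identical underlying daily crime probability in A and B
recorded = [12, 10]        # biased starting record: A was historically over-policed

for day in range(5000):
    # Predictive allocation: send the patrol to the current "hot spot".
    target = 0 if recorded[0] >= recorded[1] else 1
    # Crime only enters the record where police are present to detect it.
    if rng.random() < true_rate[target]:
        recorded[target] += 1

print("recorded crime after 5000 days:", recorded)
# Roughly [260, 10]: neighbourhood A, no more crime-prone than B, now appears
# far more "criminal" in the data and keeps attracting nearly all patrols.
```

The design choice worth noting is that nothing in the loop is overtly discriminatory; the inequity comes entirely from training and allocating on recorded rather than actual crime, which is the dynamic the card points to when it says predictive policing "embeds enhanced police surveillance into communities."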
