STS Module Chapter 6

Chapter 6 discusses the ethical implications of unchecked advancements in science and technology, drawing on William Nelson Joy's concerns about the potential dangers posed by robotics, genetics, and nanotechnology. It highlights the importance of addressing ethical dilemmas in technology, such as privacy issues, misinformation, and the responsibilities of companies in data governance. The chapter emphasizes the need for a moral relationship between technology and users to ensure that technological progress does not compromise human rights.


CHAPTER 6: WHEN TECHNOLOGY AND HUMANITY CROSS

OVERVIEW

This unit tackles the dangers posed by science and technology when unchecked by moral
and ethical standards. It primarily draws on the insights of William Nelson Joy's (2000) article
"Why the Future Doesn't Need Us" in evaluating contemporary human experience in the
midst of rapid developments in science and technology. Such experience will be discussed
to determine whether or not it strengthens and enlightens the human person functioning in society.

LEARNING OBJECTIVES

At the end of this unit, you should be able to:

1. Examine human rights in order to uphold such rights in technological ethical dilemmas; and

2. Evaluate contemporary human experience in order to strengthen and enlighten the human person functioning in society.

ACTIVATING PRIOR KNOWLEDGE

Look at the picture below. Do you think there will come a time in the future when the world
will no longer need humans? Write your brief opinion in the space provided.

Source:https://www.123rf.com/photo_93167996_3d-rendering-humanoid-robots-working-with-
headset-and-notebook.html
EXPANDING YOUR KNOWLEDGE

William Nelson Joy (born November 8, 1954),
commonly known as Bill Joy, is an American computer
scientist. Joy co-founded Sun Microsystems in 1982. He is
widely known for having written the essay "Why the Future
Doesn't Need Us", in which he expresses deep concerns over
the development of modern technologies.

For some, imagining a future without humans is
nearly synonymous to the end of the world. Many
choose not to speculate about a future where humans cease
to exist while the world remains. However, a dystopian
society devoid of human presence is the subject of many
works in literature and film. The possibility of such a society
is also a constant topic of debate. In April 2000, William Nelson Joy, an American
computer scientist and chief scientist of Sun Microsystems, wrote an article for Wired
magazine entitled "Why the Future Doesn't Need Us". In his article, Joy warned against
the rapid rise of new technologies. He explained that 21st-century technologies,
namely genetics, nanotechnology, and robotics (GNR), are becoming so powerful that they
can potentially bring about new classes of accidents, threats, and abuses. He further
warned that these dangers are even more pressing because they do not require
large facilities or even rare raw materials; knowledge alone will make them potentially
harmful to humans.

Joy argued that robotics, genetic engineering, and nanotechnology pose
much greater threats than the technological developments that came before them. He
particularly cited the ability of nanobots to self-replicate, which could quickly get out of
control. In the article, he cautioned humans against overdependence on machines. He
also stated that if machines are given the capacity to decide on their own, it will be
impossible to predict how they might behave in the future. In this case, the fate of the human
race would be at the mercy of machines. Joy also voiced his apprehension about the
rapid increase of computer power. He was also concerned that computers will
eventually become more intelligent than humans, thus ushering societies into
dystopian visions, such as robot rebellions. To illuminate his concern, Joy drew from
Theodore Kaczynski's Unabomber Manifesto, in which Kaczynski described how the
unintended consequences of the design and use of technology are clearly related to
Murphy's Law: "Anything that can go wrong, will go wrong." Kaczynski argued further that
overreliance on antibiotics led to the great paradox of emerging antibiotic-resistant
strains of dangerous bacteria. The introduction of DDT (dichloro-diphenyl-trichloroethane)
to combat malarial mosquitoes, for instance, only gave rise to malaria parasites with
multidrug-resistant genes.

Joy argues that developing technologies pose a much greater danger to humanity
than any technology before them ever has. In particular, he focuses on genetics,
nanotechnology and robotics. He argues that 20th century technologies of destruction
such as the nuclear bomb were limited to large governments, due to the complexity and
cost of such devices, as well as the difficulty in acquiring the required materials. He uses
the novel The White Plague as a potential nightmare scenario, in which a mad scientist
creates a virus capable of wiping out humanity.
Joy also voices concern about increasing computer power. His worry is that
computers will eventually become more intelligent than we are, leading to such dystopian
scenarios as robot rebellion. He notably quotes Ted Kaczynski (the Unabomber) on this
topic.

Ethics in Technology

Unlike business ethics, ethical technology is about ensuring there is a moral
relationship between technology and users.

Emerging ethical dilemmas in science and technology


New ethical problems regarding the use of science and technology are always arising.
When is it right to apply science and technology to real-life scenarios, and when does
doing so impede human rights?

 Health tracking and the digital twin dilemma: Should organizations be able to create
your twin in code and experiment on it to advance healthcare initiatives? And when
does that become a practice of exploitation?
 Neurotechnology and privacy: Neurotechnology is nothing new, but new advances
allowing the use of technology to gradually change behavior or thought patterns
pose severe questions about privacy.
 Genetic engineering: While possessing great potential for human health and the
recovery from damaging genetic mutations, there are considerable ethical
considerations that surround the editing of the human genome.
 Weaponization of technology: While there is a lessened chance for loss of life, there
are sincere ethical problems with weaponizing technology. At what point do we trust
our technology to fight a war for us?

Ethical decisions in technology should not be taken lightly. If we believe that
technology can help solve the world's problems, addressing the ethics involved is the
only way to get us there.

5 Ethical Issues in Technology to Watch for in 2021

(retrieved from Ashley Watters, July 01, 2021, https://connect.comptia.org/blog/ethical-issues-in-technology)

1. Misuse of Personal Information

One of the primary ethical dilemmas in our technologically empowered age revolves
around how businesses use personal information. As we browse internet sites, make online
purchases, enter our information on websites, engage with different businesses online and
participate in social media, we are constantly providing personal details. Companies often
gather information to hyper-personalize our online experiences, but to what extent is that
information actually impeding our right to privacy?
Personal information is the new gold, as the saying goes. We have commoditized data
because of the value it provides to businesses attempting to reach their consumer base. But
when does it go too far? For businesses, it’s extremely valuable to know what kind of
products are being searched for and what type of content people are consuming the most.
For political figures, it’s important to know what kind of social or legal issues are getting
the most attention. These valuable data points are often exploited so that businesses or
entities can make money or advance their goals. Facebook in particular has come under fire
several times over the years for selling personal data it gathers on its platform.

2. Misinformation and Deep Fakes

One thing that became evident during the 2016 and 2020 U.S. presidential elections was
the potential of misinformation to gain a wider support base. The resulting
polarization has had wide-reaching effects on global economic and political
environments.

In contrast to how information was accessed prior to the internet, we are constantly
flooded with real-time events and news as it breaks. Celebrities and political figures can
disseminate opinions on social media without fact checking, which is then aggregated and
further spread despite its accuracy—or inaccuracy. Information no longer undergoes the
strenuous validation process that we formerly used to publish newspapers and books.

Similarly, we used to believe that video told a story that was undeniably rooted in truth.
But deepfake technology now allows such a sophisticated manipulation of digital imagery
that people appear to be saying and doing things that never happened. The potential for
privacy invasion and misuse of identity is very high with the use of this technology.

3. Lack of Oversight and Acceptance of Responsibility

Most companies operate with a hybrid stack, composed of a blend of third-party and
owned technology. As a result, there is often some confusion about where responsibility
lies when it comes to governance, use of big data, cybersecurity concerns and managing
personally identifiable information or PII. Whose responsibility is it really to ensure data is
protected? If you engage a third party for software that processes payments, do you bear
any responsibility if credit card details are breached? The fact is that it’s everyone’s job.
Businesses need to adopt a perspective where all collective parties share responsibility.

Similarly, many experts lobby for a global approach to governance, arguing that local
policing is resulting in fractured policy making and a widespread mismanagement of data.
Similar to climate change, we need to band together if we truly want to see improvement.

4. Use of AI

Artificial intelligence certainly offers great business potential. But, at what point do AI
systems cross an ethical line into dangerous territory?

 Facial recognition: Use of software to find individuals can quickly become a less-than-
ethical problem. According to the NY Times, there are various concerns about facial
recognition, such as misuse, racial bias and restriction of personal freedoms. The
ability to track movements and activity quickly morphs into a lack of privacy. Facial
recognition also isn’t foolproof and can create bias in certain situations.
 Replacement of jobs: While this is anticipated to a certain degree, AI is meant to
increase automation of low-level tasks in many situations so that human resources
can be used on more strategic initiatives and complicated job duties. The large-scale
elimination of jobs has many workers concerned about job security, but AI is more
likely to lead to job creation.
 Health tracking: The pandemic brought contact tracing into the mainstream. Is it
ethical to track the health status of people and how will that impact the limitations
we place on them?
 Bias in AI technology: Technology is built by programmers and inherits the bias of its
creators because humans inherently have bias. “Technology is inherently flawed.
Does it even matter who developed the algorithms? AI systems learn to make
decisions based on training and coding data, which can be tainted by human bias or
reflect historical or social inequities,” according to Forbes. Leading AI developer
Google has even experienced an issue where AI software believes male nurses and
female historians do not exist.

5. Autonomous Technology

Self-driving cars, robotic weapons and drones for service are no longer a thing of the future
— they’re a thing of the present and they come with ethical dilemmas. Robotic machines in
place of human soldiers are a very real possibility, along with self-driving cars and package
delivery via unmanned drone.

Autonomous technology packs a punch when it comes to business potential, but there is
significant concern that comes with allowing programmed technology to operate seemingly
without needed oversight. It’s a frequently mentioned ethical concern that we trust our
technology too much without fully understanding it.
