
Subject: Artificial Intelligence

Subject Code: 417 (Skill Subject)


Grade-10

Chapter-1-Part-B: Revisiting AI Project Cycle & Ethical Frameworks for AI
NIRAV BHINGRADIYA
At a Glance of Chapter
Learning Objectives
 Understanding the AI Project Cycle
 Exploring Ethical Frameworks in AI
 Application of Ethical Guidelines
 Critical Thinking and Real-World Application
 Collaborative Skills and Ethical Responsibility

Common Mistakes
1. Why are ethical frameworks needed in AI?
2. Explain the principle of Autonomy in bioethics. How does it apply to decision-
making in healthcare, particularly in scenarios involving AI systems?
3. Why is it important to ensure transparency and explainability in AI systems?

Introduction
 AI technologies indeed have the power to change our society. As we rely more
and more on AI technologies for decision-making, it is important to consider the
ethical implications associated with these technologies. (Page no-131 to 136 in
textbook Part-B)
 AI Ethics: AI ethics refers to the moral and ethical considerations surrounding
the development and use of AI technologies.
 It addresses the ethical concerns and challenges that arise from the capabilities
and potential impact of AI technologies on individuals, society and the
environment.

Moral Issues: Self Driving Cars
 Self-driving cars are an exciting new technology with the potential to revolutionize transportation and make our roads safer.
 Self-driving cars involve the use of almost all AI technologies including
computer vision, NLP and robotics.
 However, there are several important moral and ethical issues associated with
this technology that we need to consider.
 Responsibility
 Decisions in potentially dangerous situations
 Impact that self-driving cars could have on jobs

 Let us do an activity to understand the moral issues related to self-driving cars.
 Moral Machine is an online platform that was developed by researchers at the
Massachusetts Institute of Technology to gather public opinion on how self-
driving cars should be programmed to handle ethical dilemmas.
 https://www.moralmachine.net/

Data Privacy
 Data Privacy is another ethical concern related to the increased use of AI
technologies.
 Recall that all artificial intelligence technologies use training data. This data can
be sensitive and private and it is important to ensure that it is adequately
protected and used in a responsible manner.
 Consider a simple example: smartphone apps collecting a user’s private data to provide personalized recommendations.

 When you sign up for a music app like Spotify or Apple Music, you provide some
basic info like your age and music preferences.
 The music app keeps track of what songs you listen to, playlists you make and
the time of the day you play songs.
 The app then uses AI-based algorithms to analyze your data and find patterns.
 The app also compares your preferences to those of other users who like similar music.
 Based on this comparison and your listening habits, the app suggests or ‘recommends’ new songs or artists you might like (a simplified sketch of this idea follows below).
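To make the comparison and recommendation steps concrete, here is a minimal sketch of the idea in Python. The listening histories, user names and the simple overlap-based similarity measure are illustrative assumptions only, not the actual algorithm used by Spotify or Apple Music.

# Hypothetical listening histories (illustrative data only)
listening_history = {
    "you":   {"Song A", "Song B", "Song C"},
    "user2": {"Song A", "Song B", "Song D"},
    "user3": {"Song X", "Song Y"},
}

def similarity(songs1, songs2):
    # Jaccard similarity: shared songs divided by total distinct songs
    return len(songs1 & songs2) / len(songs1 | songs2)

def recommend(target, history):
    target_songs = history[target]
    # Find the most similar other listener
    best_match = max(
        (user for user in history if user != target),
        key=lambda user: similarity(target_songs, history[user]),
    )
    # Suggest songs the similar listener enjoys that the target has not heard yet
    return history[best_match] - target_songs

print(recommend("you", listening_history))   # e.g. {'Song D'}

A real service applies the same idea to far richer data about you, which is exactly why the data privacy questions below matter.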

 Most of the data collected by AI algorithms is safe, but if personal and sensitive data, such as credit card details, health reports or identity documents, is leaked during a security breach, it can cause serious problems for the individual concerned.
 Additionally, AI technologies can be used for surveillance, such as facial
recognition or tracking people’s online activity.
 It is important to build safeguards into AI technologies so that they are developed and used in a way that respects people’s right to privacy; a simple, hypothetical sketch of one such safeguard follows below. (Test your knowledge – Pg. No. 133 – Part-B-Textbook)
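One common safeguard is to pseudonymize personal identifiers before data is stored or analysed. The sketch below is a minimal, hypothetical illustration (the field names and salt are assumptions): it replaces a user's identity with a one-way hash so listening habits can still be analysed without directly exposing who the person is.

import hashlib

def pseudonymize(record, secret_salt="replace-with-a-real-secret"):
    # Replace the real identity with an irreversible hash before analysis
    masked = dict(record)
    raw_id = (secret_salt + record["user_id"]).encode("utf-8")
    masked["user_id"] = hashlib.sha256(raw_id).hexdigest()[:16]
    return masked

record = {"user_id": "user123@example.com", "age": 16, "favourite_genre": "pop"}
print(pseudonymize(record))   # identity hidden, preferences kept for analysis

Hashing an identifier is only one layer of protection; combining several other data points can still reveal a person, so responsible data handling needs more than a single technique.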

AI Bias
 Another aspect of AI ethics is bias. Everyone has biases of their own; no matter how much we try to be unbiased, in some way or the other we hold our own biases, even towards smaller things. Biases are not negative all the time. Sometimes a bias is needed to control a situation and keep things working.
 When we talk about a machine, we know that it is artificial and cannot think on
its own. It can have intelligence, but we cannot expect a machine to have any
biases of its own. Any bias can transfer from the developer to the machine
while the algorithm is being developed. Let us look at some of the examples:

1. Most virtual assistants have a female voice. It is only recently that some companies have recognized this bias and started offering male voice options, but ever since virtual assistants came into use, female voices have been preferred over any other voice. Can you think of some reasons for this?
___________________________________________________________________
2. If you search on Google for salons, the first few results are mostly for women's salons. This is based on the assumption that a person searching for a salon is, in all probability, a woman. Do you think this is a bias? If yes, is it a negative bias or a positive one?
___________________________________________________________________
Many other biases are found in different systems; these are not thought up by the machine but have been transferred from the developer, intentionally or unintentionally.
Case Study
 One popular real-life case of AI bias is the Amazon recruiting tool incident.
 Amazon developed an AI-based recruiting tool to help screen job applicants.
However, the tool showed bias against women candidates, consistently rating their resumes lower than those of male candidates.
 The details of the Amazon AI bias can be seen at the link given below:

https://www.youtube.com/watch?v=QvRZuHQBTps
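To see how such a bias can arise mechanically, consider the deliberately simplified, hypothetical sketch below. It is not Amazon's actual system; it only illustrates how a model that scores resumes against historically selected and rejected applications can inherit the bias present in that history.

from collections import Counter

# Hypothetical historical data: resumes selected or rejected in the past,
# where most past selections happen to come from male-dominated activities
selected_resumes = [
    "chess club captain led engineering team",
    "football captain built software projects",
    "led robotics team strong coding skills",
]
rejected_resumes = [
    "captain of women's chess club strong coding skills",
    "women's coding society member built software projects",
]

selected_words = Counter(" ".join(selected_resumes).split())
rejected_words = Counter(" ".join(rejected_resumes).split())

def score(resume):
    # Words common in past selections raise the score; words common in past
    # rejections lower it, so "women's" inherits a negative weight
    return sum(selected_words[word] - rejected_words[word] for word in resume.split())

print(score("led chess club strong coding skills"))          # higher score
print(score("led women's chess club strong coding skills"))  # lower score

Because the historical selections contain no resumes mentioning "women's", the model penalizes that word even though it says nothing about a candidate's skills.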

AI Access
 Since Artificial Intelligence is still a budding technology, not everyone has the opportunity to access it. People who can afford AI-enabled devices make the most of it, while those who cannot are left behind. Because of this, a gap has emerged between these two classes of people, and it widens further with the rapid advancement of technology. Let us understand this with the help of some examples:

AI creates unemployment
 AI is making people’s lives easier. Most things nowadays are done in just a few clicks. Before long, AI may be able to do all the laborious tasks that humans have been doing for ages. Maybe in the coming years, AI-enabled machines will replace the people who work as laborers. This may start an era of mass unemployment in which people with little or no skills are left without jobs, while those who keep their skills in step with what is required will flourish.
 This brings us to a crossroads. On one hand, AI is advancing and improving people’s lives by working for them and doing some of their tasks; on the other hand are the lives of people who depend on laborious jobs and are not skilled to do anything else.

 Should AI replace laborious jobs? Is there an alternative that avoids large-scale unemployment?
___________________________________________________________________
 Should AI not replace laborious jobs? Will people’s lives improve if they remain unskilled?
___________________________________________________________________
Here, we need to understand that to overcome such an issue, one needs to be open to change. As technology advances with time, humans need to make sure that they stay a step ahead and understand this technology along with its pros and cons.

AI: Is it good for children?
 As we all can see, kids nowadays are smart enough to understand technology from a very
early age. As their thinking capabilities increase, they start becoming techno-savvy and
eventually they learn everything more easily than an adult. But should technology be given
to children so young?
 Consider this: a young boy in class 3 has some Maths homework to finish. He is sitting at a table that has the Amazon voice assistant Alexa on it, and he is struggling with his homework. Soon, he starts asking Alexa to answer all his questions. Alexa replies with the answers and the boy simply writes them down in his notebook.
 While this scenario seems funny, it raises some concerns. On one hand, it is good that the boy knows how to use technology effectively; on the other hand, he uses it to complete his homework without really learning anything, since he is not applying his mind to solve the Maths problems. So, while he is smart, he might not be getting educated properly.
 Is it ethical to let the boy use technology to help him in this manner?
___________________________________________________________________
Conclusion
 Despite AI’s promises to bring forth new opportunities, there are certain
associated risks that need to be mitigated appropriately and effectively.
To put it in better perspective, the ecosystem and the socio-technical environment in which AI systems are embedded need to be more trustworthy.
 Test Your Knowledge (Textbook Page. No. – 136 –Part-B)

NEED FOR ETHICAL FRAMEWORKS
 An ethical framework is a structured set of guidelines or principles that help
individuals and organizations make morally responsible decisions. These
frameworks provide a foundation for assessing what is right or wrong, to help
us ensure fairness, accountability and transparency while making decisions in
various fields, including business, healthcare and technology.

Why Ethical Frameworks are needed in AI
 With the rapid adoption of AI systems across the globe and their significant
impact on society, it has become essential to ensure that these systems are
designed ethically and used responsibly. It must also be ensured that the
systems:
 Prevent Bias and Discrimination
 Ensure Transparency and Explainability
 Protect Privacy and Data Security
 Promote Accountability
 Enhance Human Well-being
 Without ethical frameworks, AI can lead to unintended consequences such as biased hiring systems, the spread of misinformation or abuse of privilege. Ethical AI ensures trust, fairness and safety in the use of technology.
The Bias inside Us
 Let us explore how personal biases influence decision-making using an online activity available on https://my-goodness.net/
 To begin, play the MyGoodness game and complete the 10 giving decisions. Pay
attention to how you make choices, especially when some details are hidden.
How did you choose whom to give to?
 1. Did you prefer giving to certain people, causes or locations?
 2. Did hidden information affect your choices?
 3. Were your decisions based on emotions, personal experiences, or
assumptions?
 AI systems learn from human data, which may include biases like favouring certain groups, overlooking hidden factors or making assumptions based on
incomplete information. Just as you made decisions based on limited data, AI
can also develop biases depending on how it is trained.
 Factors affecting human decision-making:
 Personal and Emotional Factors
 Perception of Need and Impact
 Bias in Human vs Non-Human Considerations
 Geographic and Demographic Biases
 Religious and Ethical Views
 Transparency and Trust
Classification of Ethical Frameworks in AI
 AI ethics can be broadly classified into sector-based and value-based
frameworks. Both approaches are important and provide different ways to
address ethical concerns in AI decision-making.
 Sector-based Ethical Frameworks
 These frameworks apply ethical principles to specific industries where AI is used
and help us tackle unique challenges in each field. For example—
• Bioethics: Ensures AI in healthcare respects patient privacy, fairness and
autonomy.
• Business Ethics: Prevents bias and promotes transparency in hiring, lending and
customer interactions.
• Legal and Justice Ethics: Ensures fairness and accountability in AI-assisted law
enforcement and court decisions.
• Environmental Ethics: Examines AI’s impact on sustainability, climate change and
nature conservation.
 Value-Based Ethical Frameworks
 These frameworks focus on core moral values that guide AI decision-making
across all sectors. They reflect human values in AI-driven choices and are
categorized as:
• Rights-based Ethics: Protects fundamental human rights such as privacy, dignity
and freedom. It ensures AI prioritizes human lives and treats individuals fairly.
• Utility-based Ethics: Aims to maximize overall good by evaluating AI decisions based on their impact. It prioritizes solutions that benefit most people, even if trade-offs are needed (a small illustrative sketch of rights-based and utility-based reasoning follows after this list).
• Virtue-based Ethics: Focuses on choosing ethical decision-makers who uphold
honesty, compassion and integrity in AI governance. It ensures AI behaviour is
guided by moral values and not just rules.
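To make these value-based ideas concrete, here is a small, purely illustrative sketch in Python. The options, benefit numbers and the 0-10 scale are assumptions, not a real medical scoring system; rights-based ethics is modelled as a hard filter, and utility-based ethics as picking the largest total benefit among the remaining options.

# Hypothetical options for an AI triage assistant deciding whom to schedule first.
# Each option lists the assumed benefit (0-10) for each affected person and
# whether it violates anyone's fundamental rights (e.g. denying emergency care).
options = {
    "treat_by_arrival_order":  {"benefits": [4, 4, 4], "violates_rights": False},
    "treat_most_urgent_first": {"benefits": [9, 3, 2], "violates_rights": False},
    "skip_uninsured_patients": {"benefits": [9, 9, 0], "violates_rights": True},
}

# Rights-based ethics: discard any option that violates fundamental rights
allowed = {name: option for name, option in options.items() if not option["violates_rights"]}

# Utility-based ethics: among the allowed options, pick the greatest total benefit
best = max(allowed, key=lambda name: sum(allowed[name]["benefits"]))

print(best)   # 'treat_most_urgent_first' (total benefit 14 versus 12)

Virtue-based ethics is harder to express as code: it concerns the character and intentions of the people who design and govern such systems, rather than the rule the system applies.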
BIOETHICS: THE GUIDING PRINCIPLES FOR LIFE AND TECHNOLOGY
 Ethics is the framework that helps us find answers to questions regarding right
and wrong, fairness, justice, responsibility and care in our personal and
collective lives. It acts as a compass to guide human behaviour in ways that
uphold the dignity and well-being of individuals and communities.
 As new challenges and opportunities emerge with rapidly changing
technological and scientific advances that impact human lives, the importance
of ethical thinking is becoming increasingly significant.
Definition of Bioethics
 Bioethics is the study of ethical issues and principles that arise in biology,
medicine and healthcare. This domain of ethics examines how we should act
when dealing with complex questions about life, health and the human condition.
As a domain, bioethics is guided by four key principles: Autonomy, Beneficence,
Non-Maleficence and Justice.
 You might be wondering ‘How does the world of biology, medicine, life and
death connect with the abstract world of AI?’ The answer lies in recognizing the
fact that both AI and bioethics involve and impact real human beings and
ethical decision-making. AI is becoming increasingly embedded in healthcare
today and is impacting the way we define life and existence. It is important for
us to carefully understand where bioethics and AI ethics meet.
The Hippocratic Oath
 Bioethical principles aren’t just theoretical ideas; they have a deep-rooted
significance in human history, experiences and values across cultures. For
example, consider the ancient Hippocratic Oath, written in the 5th century BCE, in
which physicians pledged to ‘do no harm’ (non-maleficence). This principle
remains central to medical ethics even thousands of years later. Similarly, many
cultures emphasize the importance of respecting individual autonomy. Modern
healthcare reflects this value when families are included in important health
decisions, ensuring their voices are heard and respected.
 By integrating such age-old principles with AI ethics, we can ensure that new
technologies serve humanity in ways that are responsible, compassionate and
fair.
Principles of Bioethics
 Respect for Persons/Autonomy: This principle recognizes that each person has inherent value and dignity and is capable of making their own decisions. For doctors, it is not enough to simply treat someone; they must also honour the patient’s choices, allowing patients to be active participants in the decision-making process.
In the context of medicine, autonomy demands that doctors fully inform
patients about proposed procedures, obtain their consent and respect their
refusal.
 Beneficence (Doing good): This principle is a call for action and a moral
imperative to act in the best interests of others, seeking ways to help them.
Medical interventions, treatments and research should be driven by a
desire to bring maximum benefit and provide improved care to those seeking
help.
 Developing vaccines for diseases like polio, smallpox and COVID-19 is an example of the principle of beneficence, with the sole aim of improving the well-being of millions. These treatments were successful because the intention to help others reigned supreme.
 Non-Maleficence (Avoiding harm): This bioethics principle is the commitment
to ‘do no harm’. Doctors, researchers and healthcare providers must be
cautious about potential risks, actively avoiding unnecessary or unjustifiable
harm to their patients.
 Justice (Fairness): It is the ethical principle that reminds us to treat everyone
fairly, irrespective of social, economic or other differences. Resources should be
distributed equitably and access to healthcare should be guaranteed for all.
This principle requires that healthcare be a right for every human and not a privilege.
 For example, in the early days of kidney dialysis and transplants, only the rich
patients got access to treatment. We now recognize the fact that fairness
requires that resources like dialysis or organ transplants be allocated on the
basis of medical needs and not on social or economic standing. Most countries
have added the principles of justice and fairness to their medical guidelines.
Bioethics and AI Ethics
 While ethical guidelines are shaping life science, they are equally important for
AI as it is gradually becoming an important part of our lives and healthcare.
Recent advances in AI are merging biology and technology, making it essential
to bring bioethics into AI ethics. Since AI can influence medical decisions, it is
essential that ethical principles of bioethics guide its development and use.
 The adoption of AI in healthcare introduces challenges that intersect both
technological and ethical considerations. While on the one hand, AI improves
diagnosis, treatment and personalized care, on the other, it raises concerns
about data privacy, algorithmic bias and equitable care for all patients.
• How do we ensure AI does not harm vulnerable populations?
• How much control over a patient’s care should be given to a machine?
• How do we protect human autonomy in this new era?
 While AI in research speeds up drug discovery and medical advancements, it
also raises concerns about transparency and trust. When AI analyzes medical
images or suggests treatments, how do we validate its findings and ensure they
are reliable? Responsible use of AI is essential for faster and ethical scientific
breakthroughs.
 The use of AI in personal well-being tools, like health monitoring or mental
health support, poses questions about over-reliance on machines. Can AI ever
replace human empathy and compassion in patient care? How is sensitive data
managed and what happens if a breach occurs?
 Let us understand the joint application of bioethics and AI ethics with a
hypothetical case study.
Case Study - SMART MEDICINE DISPENSER AND THE VILLAGE DOCTOR
 Consider Asha Gram, a rural village in India. Like many other villages, Asha
Gram faces challenges in healthcare access. There is one primary health centre
run by Dr Sharma, a dedicated doctor who works long hours in the service of
the people. Reaching remote parts of the village is challenging, and delivering medicines there on time is difficult.
 HealTech, a tech company, has developed a new ‘Smart Medicine Dispenser’.
This is a small, AI-powered device that is designed to automatically dispense
the right medicine and dosage to patients, based on the doctor’s prescription
and the patient’s unique identification (through a fingerprint scan or Aadhaar
card).
 It is equipped with a screen that shows simple instructions while recording
details of each dispensing. This could be particularly helpful in rural areas with
shortage of trained medical staff.
 HealTech proposes a pilot program for Asha Gram—they will install multiple
smart dispensers at community centres and train local volunteers to assist
people in using them. Dr Sharma will initially prescribe medicines as usual.
However, eventually, the AI dispenser could also give suggestions based on the
data it collects, such as a patient’s prior health records and symptom
descriptions (entered by the local health volunteer or the patient themselves).
This can also help to track usage of medicines and provide analytics to public health workers to identify outbreaks or gaps in health service delivery (a simplified sketch of the dispenser’s core logic follows below).
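The chapter does not describe how the dispenser works internally, so the following is only a minimal sketch under assumed data structures (a prescriptions dictionary keyed by a patient identifier and a simple dispensing log). It shows how a daily dose-limit check and record-keeping might be implemented.

from datetime import datetime

# Hypothetical prescription records keyed by a patient identifier
prescriptions = {
    "patient-001": {"medicine": "Paracetamol 500 mg", "doses_per_day": 3},
}
dispense_log = []   # record of every dispensing, for public-health analytics

def dispense(patient_id):
    prescription = prescriptions.get(patient_id)
    if prescription is None:
        return "No prescription found. Please consult Dr Sharma."
    # Safety check: never exceed the prescribed number of doses for the day
    today = datetime.now().date()
    given_today = sum(
        1 for entry in dispense_log
        if entry["patient"] == patient_id and entry["time"].date() == today
    )
    if given_today >= prescription["doses_per_day"]:
        return "Daily limit reached. Please wait or consult the doctor."
    dispense_log.append({"patient": patient_id, "time": datetime.now()})
    return "Dispensing " + prescription["medicine"] + " to " + patient_id + "."

print(dispense("patient-001"))

Even in this small sketch, the bioethical principles show up as design choices: refusing to exceed the prescribed dose reflects non-maleficence, referring unknown patients to Dr Sharma preserves accountability, and the dispensing log supports the public-health analytics mentioned above.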
 Answer the following questions:
 Key Issues
 Bioethical Considerations
 AI Ethics Considerations
 Suggested points for discussion:
 Key Issues - Limited Access, New Technology, Patient Data, AI Decision-Making, Equity of Access, Data Security
 Bioethical Considerations - Autonomy, Beneficence, Non-Maleficence, Justice
 AI Ethics Considerations - Data Privacy, Transparency, Bias, Accountability, Solutions