Ethics in Generative AI
In this section, we covered the ethical challenges of Generative AI, including how chatbots can
generate harmful content and how image generators can create biased images.
We also introduced risks and harms, two seminal ethical AI concepts, to help determine when
an ethical AI approach is needed.
Question 1 of 2
AI systems, particularly high-risk ones, must comply with some regulatory requirements
before being released. What do these requirements include?
[Select all that apply.]
Being 100% free of risk and bias
Providing proper documentation
Completing an AI risk assessment
Validating the AI model(s) thoroughly
Question 2 of 2
Sometimes, AI models can generate wrong outputs with very high confidence. This makes
them particularly risky when they answer questions that are out of their scope.
For example, ChatGPT and similar models are intended for tasks such as writing and
summarizing content. It would be risky, and out of scope, for a lawyer to use ChatGPT
to generate citations and use them directly in a case.
Which of the following applications is out-of-scope for a chatbot like ChatGPT and would
require a guardrail?
Note: This question is about everyday (off-the-shelf) chatbots, like ChatGPT, and not
specialized, custom chatbots.
Writing a poem
Giving trustworthy medical advice
Giving advice on a vacation idea
Resources
Misuse of chatbots:
Sensitive data being leaked: "Samsung bans use of A.I. like ChatGPT for employees after misuse of the chatbot"
The risk of AI hallucinations when overly relied upon: "Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions"
Regulatory compliance for Generative AI, coming under the ethical AI domain:
Deloitte's 2024 article "Walking the tightrope: As generative AI meets EU regulation, pragmatism is"
The article "ISO - AI management systems: What businesses need to know"