Navigating the
EU AI Act
E-BOOK | APRIL 2024
Table of Contents
A horizontal approach: Standing apart on the global stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 03
AI frameworks: A global perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 07
Weighing AI’s pros and cons in business . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 08
Developing AI governance: The way forward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 09
Final thoughts and tips on AI governance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
DISCLAIMER:
No part of this document may be reproduced in any form
without the written permission of the copyright owner.
The contents of this document are subject to revision without
notice due to continued progress in methodology, design, and
manufacturing. OneTrust LLC shall have no liability for any error
or damage of any kind resulting from the use of this document.
OneTrust products, content and materials are for informational
purposes only and not for the purpose of providing legal advice.
You should contact your attorney to obtain advice with respect
to any particular issue. OneTrust materials do not guarantee
compliance with applicable laws and regulations.
Copyright © 2023 OneTrust LLC. All rights reserved.
Proprietary & Confidential.
A horizontal approach: Standing apart on the global stage
In crafting its approach to artificial intelligence (AI) legislation, the European Union (EU) has opted for a horizontal legislative framework. The EU's AI legal framework embraces an industry-agnostic perspective and is meticulously designed with nearly a hundred articles.

Here, we'll look to provide a window into the EU AI Act. This piece of legislation is not just the first of its kind—but also a benchmark for global AI regulation, developed to help create a precedent in the rapidly evolving AI landscape.

Guarding values, fueling innovation

The EU AI Act is carefully balanced. It's not just about throwing a safety net around society, the economy, fundamental rights, and the bedrock values of Europe that might be at risk due to AI systems; it's also a nod to the power and potential of AI innovation, with built-in safeguards designed to promote and protect inventive AI strides.

Crafting the EU AI Act has been anything but a walk in the park, with the definition of AI being one of the contentious corners. Since the initial proposal in April 2021, the Act has been a living document, seeing numerous iterations, each amendment reflecting the fluid discourse around AI technology and its implications for society.

At the trilogue meeting in December 2023, France, Germany, and Italy raised concerns about the limitations placed on powerful AI models and wanted a lighter regulatory regime for models like OpenAI's GPT-4.

After ongoing discussion, the compromise reached by the European Commission was to take a tiered approach, with horizontal transparency rules for all models and additional obligations for models posing systemic risk.

Where the Act stands now

On February 2, 2024, the Committee of Permanent Representatives voted to endorse the political agreement reached in December 2023. On March 13, Parliament voted to endorse the Act, with 523 votes in favor, 46 against, and 49 abstentions.

The AI Act will enter into force 20 days after its publication in the EU's Official Journal. The provisions on prohibited systems will apply after 6 months, and obligations for providers of general-purpose AI will apply after 12 months. Most other requirements will apply after two years. High-risk systems that are intended to be used as a safety component of a product, or that are covered by other EU laws, have 36 months to comply with the EU AI Act.

AI: Breaking down the concept

Originally, the Act defined machine learning, the basis of AI systems, as "including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning." The text now includes an updated definition, which defines AI systems as "machine-based systems designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

The complexity of AI systems is a sliding scale, with more intricate systems requiring substantial computing power and input data. The output from these systems can be simple or mightily complex, varying with the sophistication of the AI in play.
This broad definition covers a range of technologies, from your everyday chatbots to highly sophisticated generative AI models. But it's important to note that not every AI system falling under the Act's broad definition will be regulated. The Act plays it smart with a risk-based approach, bringing under its regulatory umbrella only those systems associated with specific risk levels.

AI regulation: Calibrated to risk

Here's where it gets interesting. The EU AI Act has different baskets for AI systems. Some are seen as posing an unacceptable risk to European values, leading to their prohibition. High-risk systems, while not banned, have to dance to a tighter regulatory tune. It's vital to remember that these risk categories aren't static; the Act is still being finalized, and as more changes come, these risk categories will likely be fine-tuned as well.

The Act's four tiers, the obligations attached to each, and illustrative examples can be summarized as follows:

• Unacceptable risk — Prohibited (Art. 5): social scoring, facial recognition, dark-pattern AI, manipulation
• High risk — Conformity assessment (Art. 6 & ss.): education, employment, justice, immigration, law
• Limited risk — Transparency (Art. 52): chatbots, deep fakes, emotion recognition systems
• Minimal risk — Code of conduct (Art. 69): spam filters, video games

EU AI Act risk levels

The EU AI Act defines multiple levels of permissible risk: high risk, limited risk, and minimal risk. These are the levels of risk organizations are permitted to operate with; "unacceptable risk," by contrast, is not allowed, and companies whose models fall into that category need to change their models accordingly.

Unacceptable Risk — Social scoring systems, real-time remote biometric verification

High Risk — Credit scoring systems, automated insurance claims

For processes that fall into this bucket, companies need to conduct a conformity assessment and
register it with an EU database before the model is available to the public. Apart from this, these high-risk processes require detailed logs and human oversight as well.

Limited Risk — Chat bots, personalization

For limited-risk processes, companies need to ensure that they're being completely transparent with their customers about what AI is being used for and the data involved.

Minimal Risk — For any processes that companies use that fall into the "minimal risk" bucket, the EU AI Act encourages providers to have a code of conduct in place that ensures AI is being used ethically.

Conformity assessments

Of these risk levels, high-risk systems will pose the highest compliance burden on organizations, as they'll have to continue to meet obligations for conformity assessments. Conformity assessments (CA) require companies to ensure that their "high-risk" systems meet the following:

• The quality of data sets used to train, validate, and test the AI systems; the data sets must be "relevant, representative, free of errors and complete"
• Detailed technical documentation
• Record-keeping in the form of automatic recording of events
• Transparency and the provision of information to users
• Human oversight
• Robustness, accuracy, and cybersecurity

This assessment is mandatory before a high-risk AI system is made available or used in the EU market. It ensures that AI systems comply with EU standards, particularly if there are significant modifications or changes in intended use. The main responsible party for CA is the "provider"—the entity putting the system on the market. However, under certain circumstances, the responsibility can shift to the manufacturer, distributor, or importer, especially when they modify the system or its purpose.
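To make these obligations more concrete, the risk tier, conformity-assessment status, and event log of each AI system can be tracked in a simple internal inventory. The sketch below is a minimal illustration only: the field names, gating logic, and record structure are assumptions for this example, not an official schema or a statement of how the Act must be implemented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class RiskTier(Enum):
    # Tiers mirror the Act's risk categories described above.
    UNACCEPTABLE = "unacceptable"   # prohibited practices (e.g., social scoring)
    HIGH = "high"                   # conformity assessment + EU database registration
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # voluntary code of conduct


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system."""
    name: str
    intended_purpose: str
    risk_tier: RiskTier
    conformity_assessment_done: bool = False
    registered_in_eu_database: bool = False
    event_log: list[str] = field(default_factory=list)

    def log_event(self, event: str) -> None:
        # Automatic, timestamped recording of events supports the record-keeping obligation.
        stamp = datetime.now(timezone.utc).isoformat()
        self.event_log.append(f"{stamp} {event}")

    def ready_for_market(self) -> bool:
        """Very rough gating logic based on the tiers described above."""
        if self.risk_tier is RiskTier.UNACCEPTABLE:
            return False  # prohibited practices cannot be placed on the market
        if self.risk_tier is RiskTier.HIGH:
            return self.conformity_assessment_done and self.registered_in_eu_database
        return True  # limited/minimal risk: other obligations apply, but no CA gate


# Example usage with a hypothetical high-risk system
credit_model = AISystemRecord(
    name="credit-scoring-v2",
    intended_purpose="Consumer credit scoring",
    risk_tier=RiskTier.HIGH,
)
credit_model.log_event("Model retrained on Q1 data")
print(credit_model.ready_for_market())  # False until CA and registration are complete
```

Even a lightweight record like this makes it easier to answer the questions the sections below raise: who performs the conformity assessment, when, and how often.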
Who performs a CA?

The CA can be done internally or by an external "notified body." Internal CAs are common, as providers are expected to have the necessary expertise. Notified bodies come into play particularly when an AI system is used for sensitive applications like real-time biometric identification and does not adhere to pre-defined standards.

During an internal CA, the provider checks compliance with quality management standards, assesses technical documentation, and ensures the AI system's design and monitoring are consistent with requirements. Success results in an EU declaration of conformity and a CE marking, signaling compliance, which must be kept for ten years and provided to national authorities if requested.

For third-party CAs, notified bodies review the system and its documentation. If compliant, they issue a certificate; otherwise, they require the provider to take corrective action.

How often should you perform a CA?

Conformity assessment isn't a one-off process; providers must continually monitor their AI systems post-market to ensure they remain compliant with the evolving EU AI Act. In cases where a notified body is involved, it will conduct regular audits to verify adherence to the quality management system.

Engaging all players in the AI game

The EU AI Act is not just handing out responsibilities to AI providers; it's casting its net wider to include various actors in the AI lifecycle, from users to deployers. And its reach is not limited to the EU; it has global ambitions, affecting entities even outside the EU and thus carrying worldwide implications.

Fines: A significant deterrent

With the EU Parliament's recent adjustments to the EU AI Act, the fines for non-compliance have seen a hike, now standing at a maximum of 35 million euros or up to 7% of global turnover. For context, these fines are 75% higher than those of the GDPR, which has maximum fines of 20 million euros or 4% of global turnover, underlining the EU's commitment to ensuring strict adherence to the EU AI Act.

Charting the course towards regulated AI

The EU AI Act is a bold statement by the EU, meticulously balancing the act of fostering AI innovation while ensuring that the core values and rights of society are not compromised. With the Act inching closer to its final stages of approval, it's crucial for everyone in the AI space to keep an eye on its development.

Whether you're a provider, user, or someone involved in the deployment of AI, preparing for a future where AI is not just a technological marvel but also a subject of defined legal boundaries and responsibilities is imperative. This introduction offers a glimpse into the EU AI Act's journey and potential impact, setting the stage for the deeper analysis that unfolds in the subsequent sections. So, buckle up and let's dive deeper into understanding the nuances and implications of the EU AI Act together.
AI frameworks: A global perspective
A landscape in flux: The global heat map of AI frameworks

The global AI framework landscape underscores the imperative need for more cohesive international rules and standards pertaining to AI. The proliferation of AI frameworks is undeniable, calling for enhanced international collaboration to at least align on crucial aspects, such as arriving at a universally accepted definition of AI.

[Map: a global heat map of draft and adopted AI frameworks, including a proposed AI Act, the White House Blueprint for an AI Bill of Rights, the NIST AI RMF, Draft Measures for Generative AI, a Government AI White Paper, the OECD AI principles, a Draft Regulatory Framework on AI, the EU AI Act, and ISO 23894 on AI and risk management.]

Within the tapestry of the European Union's legal framework, the EU AI Act is a significant thread, weaving its way towards completion. Concurrently, there's a mosaic of initiatives at the member-state level, with authoritative bodies across various nations rolling out non-binding guidelines, toolkits, and resources aimed at providing direction for the effective use and deployment of AI.

Efficient future processes through AI

AI promises quicker, more efficient, and accurate processes in various sectors. For example, in insurance, AI has streamlined the assessment process for car accidents, optimizing a process that was once manual and lengthy. This example serves as a testament to AI's potential to significantly improve various aspects of business and everyday life.

But engaging with AI is a nuanced dance, a careful balancing act between leveraging its unparalleled potential and navigating the associated risks. With its transformative and disruptive capabilities, AI invites cautious and informed engagement. Recognizing its transformative power while preparing for the challenges it brings to the table is essential for individuals and organizations alike as they navigate the dynamic landscape of artificial intelligence in the modern age.
Weighing AI’s pros and cons in business
Risks: Transparency, accuracy, and bias
Despite its myriad advantages, AI isn’t without
substantial challenges and risks. For starters, some
AI systems, which may be perceived as “black boxes,”
have been the subject of intense scrutiny and debate
over transparency issues. This concern is particularly
salient with larger AI systems, such as extensive
language models, where there’s a lack of clarity on
the training data employed. This raises significant
copyright and privacy concerns, which need to be
addressed head-on.
Furthermore, the struggle with ensuring the
accuracy of AI systems persists, with several
instances of erroneous AI responses and predictions
documented. Notably, bias that may arise in AI
systems—stemming from the prejudiced data they
may be trained on—poses a risk of discrimination,
requiring vigilant monitoring and rectification efforts
from stakeholders involved.
AI as solution: Turning risks into opportunities

Interestingly, AI isn't just a challenge; it is also a potential solution to these conundrums. For instance, AI can be leveraged to identify and mitigate biases within datasets. Once these biases are discerned, strategic steps can be taken to rectify them, ensuring that AI can be harnessed optimally to maximize its benefits while minimizing associated risks.
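As one concrete illustration of how tooling can surface bias, the sketch below computes a simple disparate-impact ratio (the ratio of favorable-outcome rates between groups) over a toy dataset. The field names, toy data, and threshold-style interpretation are assumptions for this example; a real bias audit involves far more than a single metric.

```python
from collections import defaultdict


def disparate_impact(records, group_key, outcome_key):
    """Ratio of the lowest group's favorable-outcome rate to the highest group's.

    A ratio well below 1.0 suggests one group receives favorable outcomes
    much less often, which warrants closer review of the data and model.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        favorable[group] += 1 if row[outcome_key] else 0
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates


# Toy example: loan decisions broken down by a hypothetical applicant attribute
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

ratio, rates = disparate_impact(decisions, "group", "approved")
print(rates)            # approval rates per group: roughly {'A': 0.67, 'B': 0.33}
print(round(ratio, 2))  # 0.5 — a large gap that flags the dataset for further review
```

Metrics like this are only a starting point: once a gap is flagged, the harder work of investigating the underlying data and adjusting the model or process begins.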
Developing AI governance: The way forward
Laying the foundations for AI governance

With the dynamic and complex AI landscape unfolding rapidly, there is an urgent need for legal and privacy professionals to lay the groundwork for robust AI governance and compliance programs. A wealth of existing guidance provides a preliminary roadmap for the essential requirements of such programs, with senior management's endorsement being a pivotal first step in this endeavor.

Engaging C-suite executives and ensuring they comprehend the magnitude and intricacies of AI's influence is crucial for fostering a culture of AI responsibility throughout the organization. This initiative transcends mere compliance, extending to building trust in AI applications – a cornerstone for successful business operations.

Practical steps towards an AI governance framework

On the material front, organizations can use practical guidelines for ethical AI use. These guidelines are aligned with the AI principles from the Organisation for Economic Co-operation and Development (OECD):

1. Transparency: Efforts should be directed towards demystifying AI applications, making their operations and decisions understandable and explainable to users and stakeholders.

2. Privacy Adherence: AI applications should respect and protect users' privacy, handling personal data judiciously and in compliance with relevant privacy laws and regulations.

3. Human Control: Especially in high-risk areas, there should be mechanisms for human oversight and control over AI applications, ensuring they align with human values and expectations.

4. Fair Application: Strategies for detecting and mitigating biases in AI applications should be implemented, promoting fairness and avoiding discrimination.

5. Accountability: There should be comprehensive documentation and recording of AI operations, allowing for scrutiny, accountability, and necessary corrections.

AI ethics policy: A critical element

The establishment of AI ethics policies, informed by ethical impact assessments, is essential in navigating challenges and making informed, ethical decisions regarding AI use. For example, instead of outright blocking certain AI applications, ethical impact assessments can guide organizations in implementing nuanced, responsible use policies, especially for sensitive data. Ethical considerations should inform every step of AI application, from inception and development to deployment and monitoring.

Inclusive AI governance: A size-agnostic imperative

Importantly, AI governance is not an exclusive domain of large corporations with extensive resources. With AI use cases proliferating across various sectors, companies of all sizes will inevitably engage with AI, necessitating AI governance frameworks tailored to their specific needs and capacities.
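One lightweight way to operationalize the OECD-aligned guidelines above is to turn them into a per-application review checklist that feeds an ethical impact assessment. The sketch below is a minimal illustration; the question wording, keys, and summary format are assumptions for this example, not a formal assessment methodology.

```python
# Hypothetical review questions derived from the five guidelines above.
ETHICS_CHECKLIST = {
    "transparency": "Can the application's decisions be explained to users and stakeholders?",
    "privacy": "Is personal data handled in line with applicable privacy laws?",
    "human_control": "Is there a mechanism for human oversight, especially for high-risk uses?",
    "fairness": "Are bias detection and mitigation steps in place?",
    "accountability": "Are operations documented and recorded for later scrutiny?",
}


def assess(application_name: str, answers: dict[str, bool]) -> dict:
    """Summarize which principles still need attention for a given AI application."""
    gaps = [topic for topic in ETHICS_CHECKLIST if not answers.get(topic, False)]
    return {
        "application": application_name,
        "passed": not gaps,
        "open_items": [ETHICS_CHECKLIST[topic] for topic in gaps],
    }


# Example: a chatbot that has transparency and privacy covered, but nothing else yet
result = assess("support-chatbot", {"transparency": True, "privacy": True})
for item in result["open_items"]:
    print("Follow up:", item)
```

A checklist of this kind does not replace judgment; it simply makes the open questions visible so that nuanced, responsible use policies can be agreed rather than applications being blocked outright.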
A few universal principles apply regardless of the company's size. First, securing executive buy-in and adopting a multidisciplinary approach is imperative for successful AI governance implementation. Second, organizations should commence with high-level principles as a starting point, even if they are small or merely purchasing ready-made AI models. Training and upskilling employees across various functions, including procurement and technology, is also vital to understand and mitigate the risks associated with AI tools and applications.

Embedding core governance principles

Six core governance principles need to be embedded into AI governance programs:

1. Governance and Accountability: Establishing a structure for accountability, possibly through AI oversight committees or ethics review boards, is essential. Governance should be enforced throughout AI's lifecycle, from inception to operation.

2. Human Oversight: Adopting a human-centric approach, with trained human reviewers at various stages, is crucial for ethical AI application.

3. Fairness and Ethics Alignment: AI outputs should align with fairness and ethical standards, reflecting an organization's culture and values.

4. Data Management: Implementing robust data management processes, tracking modifications to datasets and mapping data sources, is key for reliable AI systems (a minimal sketch of such tracking follows at the end of this section).

5. Transparency Enhancement: Ensuring that AI decision-making processes are transparent and understandable is necessary for building trust and compliance.

6. Privacy and Cybersecurity: Addressing legal data processing requirements, conducting privacy impact assessments, and mitigating AI-specific cyber risks are imperative for secure and compliant AI applications.

Given the pace at which AI is evolving and its profound implications, organizations must proactively develop and implement AI governance programs. By adopting a set of core governance principles and practices, organizations can navigate the AI landscape responsibly, ethically, and effectively. These principles, informed by ethical considerations, legal compliance, and a commitment to transparency and accountability, will guide organizations in harnessing AI's benefits while mitigating its risks, ultimately fostering trust and success in the AI-driven future.

Value-driven AI governance

As organizations delve deeper into the realm of AI, developing and implementing AI governance programs aligned with their values is paramount. These governance frameworks should not only ensure compliance with legal standards but also reflect the ethical commitments and values of the organizations.

Whether it's about making tough trade-offs between transparency and security or deciding on the ethical use of data, a values-driven approach to AI governance provides a reliable compass guiding organizations through the intricate landscape of AI applications and ethics.
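For the data management principle above (tracking modifications to datasets and mapping data sources), even a minimal lineage record can help. The sketch below is a simplified illustration with hypothetical fields; production environments typically rely on dedicated data catalog or lineage tooling rather than hand-rolled records.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DatasetLineage:
    """Hypothetical lineage record for one training dataset."""
    dataset_name: str
    sources: list[str]                     # where the data originally came from
    changes: list[dict] = field(default_factory=list)

    def record_change(self, description: str, changed_by: str) -> None:
        # Each modification is appended with a timestamp so reviews can reconstruct history.
        self.changes.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "by": changed_by,
            "description": description,
        })


lineage = DatasetLineage(
    dataset_name="claims-2023",
    sources=["internal CRM export", "licensed third-party claims feed"],
)
lineage.record_change("Removed records with missing policy IDs", changed_by="data-team")
lineage.record_change("Re-balanced classes before retraining", changed_by="ml-team")
print(len(lineage.changes))  # 2 recorded modifications
```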
Final thoughts and tips on AI governance
AI, GDPR, and data privacy

When considering the interaction between AI, the EU AI Act, and GDPR, it's crucial to acknowledge existing guidance on utilizing AI in line with GDPR. Noteworthy resources include the toolkit provided by the UK's Information Commissioner's Office (ICO) and the comprehensive guidance and self-assessment guide offered by France's CNIL. These tools offer valuable controls and checklists, assisting organizations in ensuring that their AI use complies with GDPR requirements.

A starting point for aligning data usage within AI frameworks with GDPR principles is to conduct diligent Data Protection Impact Assessments (DPIAs) to ensure that all these processes remain compliant.

AI governance starting point: Privacy professionals are well-positioned to serve as orchestrators, bringing together various functions and skillsets within organizations to address AI governance comprehensively. This collaborative approach not only ensures compliance but also functions as a business enabler, fostering a proactive and informed approach to emerging challenges and opportunities in the AI landscape.

Keep calm and AI: Embrace technological developments with a sense of calm and curiosity. Engaging with the fast-paced and continually evolving field of AI requires a willingness to learn and adapt, acknowledging that understanding and addressing the risks and potentials of AI is a journey rather than a destination.

Evolution of professional roles: With the continuous changes in technology and data processing, the roles of data protection officers are evolving, potentially transitioning towards "data trust officers." It's imperative for professionals in the field to be open to assuming new roles and responsibilities as the technology and regulatory landscape transforms.

To give your organization a head start, here is a five-step plan:

1. Engage with AI governance programs immediately; proactive engagement is crucial.

2. Secure management buy-in, since AI governance requires a multi-stakeholder, enterprise-wide approach.

3. Assemble a diverse and skilled team, encompassing legal, compliance, data science, HR, information security, and external experts.

4. Prioritize, set realistic and achievable goals, and consider adopting a phased approach to AI governance.

5. Stay abreast of AI developments, actively engage with industry peers, and participate in AI governance initiatives to foster a collaborative and informed community.

With the evolving landscape of AI, organizations must proactively engage with AI governance. A collaborative, multi-stakeholder approach is necessary to address the complex challenges and opportunities presented by AI.

To learn more about how AI Governance can help your organization, request a demo today.

Request demo
REQUEST A DEMO TODAY AT ONETRUST.COM
As society redefines risk and opportunity, OneTrust empowers tomorrow’s leaders to succeed through trust and impact with the
Trust Intelligence Platform. The market-defining Trust Intelligence Platform from OneTrust connects privacy, GRC, ethics, and ESG
teams, data, and processes, so all companies can collaborate seamlessly and put trust at the center of their operations and culture
by unlocking their value and potential to thrive by doing what’s good for people and the planet.
Copyright © 2024 OneTrust LLC. Proprietary & Confidential.