
LLM: Law & Technology 2019-2020

Master’s Thesis

Artificial Intelligence in Recruitment:


the ethical implications of AI-powered recruitment tools

Author: Ihsan Türkeli


SNR: U1269969
ANR: 959355
Supervisor 1: Prof. dr. E.L.O. Keymolen
Supervisor 2: Prof. dr. Nadezhda Purtova
Word Count: 12.638
“Every human on the planet is biased. We all carry an
incurable virus called our unconscious bias. I firmly believe
that AI is the only way we can reduce human bias with
recruiting. Unfortunately, naively using AI for recruiting is
a guaranteed recipe for disaster. Racist/Sexist AI isn’t a
risk, it is a guarantee if used by inexperienced teams. AI
will naturally learn our bias and amplify it”

-Ben Taylor.

Table of Contents
1. Introduction
1.1. The Focus
1.2. Problem Statement
1.3. Literature Review and Gap in Literature
1.4. Main Research Question and Sub-questions
1.5. Methodology and Methods
1.6. Roadmap of the argument
2. AI-powered recruitment and the notions of fairness and equality in recruiting
2.1. Introduction
2.2. AI-Tools: what are they?
2.3. The ethical notions of fairness and equality in light of the area of recruitment
3. Tensions when using AI-powered tools in the field of recruitment
3.1. The HireVue example: what can go wrong?
3.2. The Complaint by EPIC – undermining principles and bias by AI
3.3. “Unfair” decisions and remarks regarding supplier accountability
4. GDPR & AI Principles – will adherence and compliance to them lead to enhanced fairness of AI Tools?
4.1. Introduction
4.2. The General Data Protection Regulation: sensitive data, automated decision making, data subject rights
4.2.1. Automated Decision Making clashing with AI Tools
4.2.2. (Explicit) Consent – a brief discussion
4.2.3. Sensitive/Special and Biometric Data
4.3. OECD Principles on AI and the Universal Guidelines for AI
4.3.1. OECD Principles on AI
4.3.2. The Universal Guidelines for AI
4.4. The Common Ground
5. Conclusion
6. Bibliography

Artificial Intelligence in Recruitment: the ethical implications of AI-powered recruitment tools

(Note: throughout this thesis, the terms “AI-Tools” and “AI-Powered Tools” will mean “AI-powered pre-employment assessment tools”)

Chapter 1. Introduction

1.1. The Focus

In recent years, the usage of AI-powered tools has exploded in the talent-assessment/recruitment process. Traditional hiring methods, such as CV screenings and job interviews, are making way for a new generation of assessment tools. These tools offer, for instance, gamified assessments, AI-powered video interview modules, personality tests or cognitive ability tests.1 The tools’ algorithms then use the data they gather to assess candidates’ fit for the job they apply for.2 In other words, AI-powered tools play a role in deciding whether a candidate continues to the next stage (think of machine learning to make predictions, natural language processing to understand human language, and computer vision or image recognition to identify human images). While not all tools provide the same service (some might be more sophisticated than others), the general message that most producers of these tools wish to bring forward is that implementing this technology will lead to better and faster high-volume hiring.3

This increasing usage of AI-powered tools reflects the current worldwide ‘data-boom’ in many different sectors.4 This ‘data-boom’ has resulted in

1 Examples of such AI-powered tools include Harver https://harver.com/software/; The Predictive Index https://www.predictiveindex.com/assessments/; eSkill https://www.eskill.com/our-product/pre-employment-assessments/; Modern Hire https://modernhire.com/platform/.
2 Examples of such data are personal data of the candidate, including the results
from modules of the AI-powered tool (i.e. the results from personality tests, video
interviews, language tests, etc.)
3 Exploring the examples above will showcase this.
4 Erik Brynjolfsson and Andrew McAfee, ‘The Big Data Boom is the Innovation Story of our Time’ (The Atlantic Business, 21 November 2011)
society being more dependent on (big) data.5 Consequently, data processing in big data environments occurs much more frequently. The recruitment sector has been around for decades, and it is a particularly developed sector with many established (soft) laws and norms. While the erroneous viewpoint that AI is taking over the world is often heard6, recent trends in the recruitment sector do hint at AI penetrating this sector quite extensively. Indeed, AI technology is changing the HR/recruitment sector for, arguably, the better: efficiency increases as recruiters no longer have to scan through dozens of CVs, and the sophisticated AI technology in recruitment tools produces better matches.7

As a consequence of the increasing usage of AI in recruitment, many Software-as-a-Service (SaaS) tools exist – a vast market has already been created. The degree of sophistication of these SaaS tools varies quite a bit: while some offer modern technology such as face recognition, others merely offer a platform to ask candidates tailor-made questions. These tools create profiles of the candidates taking part in the recruitment process. Based on this profile, a decision is made automatically by the data-driven AI. All in all, the technology has arrived in the HR sector, altering the scene of recruitment.

1.2. Problem Statement

While the efficiency gains from AI-powered recruitment tools are generally undisputed, major concerns have been raised regarding their ethical and legal implications. For instance, in the field of video interviews, AI-powered tools could result in the processing of sensitive information, leading perhaps to gender-, race- or sexual orientation-related bias. Add to this the possibility that machine

https://theatlantic.com/business/archive/2011/11/the-big-data-boom-is-the-innovation-story-of-our-time/248215/
5 Ibid.
6 Eleni Vasilaki, ‘Worried about AI taking over the World? You may be making some
rather unscientific assumptions’ (The Conversation, 24 September 2018)
http://theconversation.com/worried-about-ai-taking-over-the-world-you-may-be-
making-some-rather-unscientific-assumptions-103561 (accessed May 20, 2020)
7 Venturi Group, ‘How AI is Changing Recruitment’ (Venturi Group, not dated),
https://www.venturi-group.com/how-ai-is-changing-recruitment/ (accessed May
20, 2020)

learning algorithms are trained mainly (or exclusively) with data from white people, and ethical tensions arise. For instance, to what extent is it ethical to judge people from various backgrounds on the basis of such data alone? Or, to what extent is it ethical to implement AI-powered tools that, owing to their technological sophistication, can process sensitive information and thereby produce the abovementioned bias? Here lie several problems: the area of AI-powered recruitment is subject to legal uncertainty, and the applicable rules are not always clear. The subject is also fragmented – each company offering a SaaS tool for recruitment can implement completely different AI technology from the next. Currently, the GDPR is the main legislation at the European level that is invoked when such AI tools process personal data.

Quite concretely, Amazon’s AI-powered recruitment tool apparently showed bias against women, severely impacting certain candidates’ lives in a discriminatory manner.8 This occurred due to the AI tool being trained – mainly – with data of Caucasian men. The tool was developed to analyze large numbers of resumes and then find the perfect candidate for the open vacancy. This analysis is performed by way of AI technology and data of model employees. In reality, Amazon’s recruitment tool penalized applicants whose resumes contained the word “women’s”, for instance as in “women’s chess club”, or applicants who graduated from an all-women school.9
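To illustrate the mechanism, consider the following stylized Python sketch. It is emphatically not Amazon’s actual system: the resumes, labels and model are invented purely for illustration. It merely shows how a model trained on historically skewed hiring outcomes can learn a negative weight for a token such as “women’s”, thereby reproducing the bias embedded in its training data.

```python
# Stylized sketch of bias amplification in resume screening.
# Hypothetical data only; this does not depict any vendor's real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past resumes and historical hiring outcomes that
# reflect past human bias (candidates mentioning "women's" were not hired).
resumes = [
    "chess club captain, python developer",
    "women's chess club captain, python developer",
    "rugby team captain, java developer",
    "women's coding society member, java developer",
]
hired = [1, 0, 1, 0]

vectorizer = CountVectorizer(token_pattern=r"[a-z']+")
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women's" is negative, so any resume
# containing it is scored lower: the historical bias is reproduced.
idx = vectorizer.vocabulary_["women's"]
print("weight for \"women's\":", model.coef_[0][idx])
```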

What is more, sophisticated facial recognition AI technologies are argued to be able to determine whether an individual is straight or gay.10 While the latter example would not result in bias/discrimination per se, the risk will always remain that an employer using an AI SaaS tool in recruiting could make decisions based on this data. Not to mention that the employer might have already trained the tool to automatically reject such

8 Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against
women’ (Reuters, 10 October 2018) https://www.reuters.com/article/us-amazon-
com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-
bias-against-women-idUSKCN1MK08G (accessed May 21, 2020)
9 Julien Lauret, ‘Amazon’s sexist AI recruiting tool: how did it go so wrong?’
(Becoming Human, 16 August 2019) https://becominghuman.ai/amazons-sexist-ai-
recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e (accessed May 21, 2020)
10 Sam Levin, ‘New Artificial Intelligence can tell whether you’re Gay or Straight from
a photograph’ (The Guardian, 8 September 2017)
https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-
can-tell-whether-youre-gay-or-straight-from-a-photograph (accessed March 22,
2020)

candidates. Situations such as these, combined with the increase in using AI in hiring,
raise major ethical concerns.

Another aspect, tied to these ethical concerns, is the (in)compatibility of such AI recruitment tools with data protection laws – in particular the GDPR in the European Union, or the framework enforced by the Federal Trade Commission (FTC) in the United States. Data processed through AI recruitment tools is often sensitive. Video modules can expose a candidate’s ethnicity, religious beliefs, health status, etc. Another example is that complex personality test modules might result in the processing of even more sensitive data. While the GDPR offers legitimate grounds for the processing of (sensitive) data, the reality is that these legitimate grounds do not sufficiently take account of the complexity of such AI-powered tools. Additionally, the GDPR is a newly adopted piece of EU legislation, and much legal uncertainty exists, particularly in the field of AI/recruitment. Certain guidelines do exist, but further specification is needed.

The potential collection of sensitive personal data is merely one of several complex points of AI recruitment tools in relation to the GDPR. Since some AI-powered tools also make automated decisions regarding candidates, there might be concerns about the legality of such endeavors. After all, automated decision-making is subject to a general prohibition under Article 22 GDPR. At first glance, one might therefore believe that some AI-powered tools are incompatible with the GDPR. Similarly, the conditions of fair and unbiased recruitment could be seen as incompatible with these new technologies. For instance, discrimination is inherently bad, and in the context of recruitment, the occurrence of discrimination and/or bias has multiple negative consequences for individuals. Effectively, the recruitment process is the first step of an individual’s employment. Being employed is in turn essential for the life of an individual, as it enables them to meet their (basic) needs. Employment also benefits the economy, and workplaces with all sorts of individuals strengthen the existence of an open, multicultural and inclusive society.11 Unfairness and bias might occur in the recruitment process due to AI recruitment tools being trained with certain data, and this can significantly impact certain individuals’ lives.

11 Catherine Capozzi, ‘The Importance of Employment & Workplace in the Society’ (Bizfluent, 25 January 2019) https://bizfluent.com/info-8296076-importance-employment-workplace-society.html (accessed June 1, 2020)

However, some nuance is needed. The suppliers of such AI recruitment tools often provide their tools/software “as is” to their customers, perhaps to avoid any liability from the outset. If the inhouse recruiter of such a customer is determined to provide an unbiased recruitment process, the possibility exists to train the AI tool accordingly. The problem lies with corporations, willingly or unwillingly, allowing unfair recruitment practices to occur, while perhaps also undermining the GDPR. Algorithms are not morally biased by themselves – they have no consciousness, agency, autonomy or sense of morals.12 They merely do what their users ask of them. However, once manufacturers are aware that bias occurs through their recruitment tool, they should act on it. This is best explained by an analogy to laptop manufacturers: the laptop is manufactured and contains all necessary functions, yet it is up to the user of the laptop to use it for good or ill. Similarly, the algorithm is developed by the manufacturer, and customers can, with the help of data, shape the recruitment tool further. Again, the manufacturer should nonetheless monitor the usage as much as legally possible.

An interesting, relevant and very recent example of an AI-powered tool is worth mentioning. HireVue provides online video software to its clients to streamline recruitment processes. Initially, the use case of this tool was simply to record answers to interview questions and to host real-time online interviews. In a strategy to create more value and to stay ahead of the competition, HireVue added an AI layer to its tool that, as it claims, can predict the quality of the applicant.13 EPIC, a prominent rights group in the US, filed a complaint with the FTC alleging that HireVue has committed unfair and deceptive practices in violation of the FTC Act.14 In its complaint, EPIC argues that HireVue failed to comply with baseline standards for AI decision-making in this tool. Said differently, HireVue is not screening applicants in a way that is fair, explainable or defensible. EPIC specifically pointed to

12 Ibid, n. 9
13 More information on HireVue can be found in their website:
https://www.hirevue.com/
14 Drew Harwell, ‘Rights group files federal complaint against HireVue, a hiring
company that uses artificial intelligence’ (The Seattle Times, 6 November 2019)
https://www.seattletimes.com/business/rights-group-files-federal-complaint-
against-ai-hiring-firm-hirevue-citing-unfair-and-deceptive-practices/ (accessed May
25, 2020)

established baseline standards for AI decision-making that HireVue allegedly failed to meet, such as the OECD AI Principles and the Universal Guidelines for AI.

Yet creating an AI-powered recruitment tool does not always equate to unfair practices. Harver, one of the many AI recruitment software suppliers, urges its customers to diminish discrimination and hiring bias as much as possible. It has published a blog post listing 11 ways to reduce hiring bias, noting that AI tools should never be blindly trusted.15 Another example is the argument that AI is, in fact, eliminating the “significant unconscious bias against women, minorities and older workers.”16 Indeed, unconscious human bias makes the hiring process unfair, and AI recruitment tools can provide for the reduction of unfair practices.

Despite the abovementioned, more often than not, corporations do not give the necessary attention and care to reducing unfairness in recruitment. AI recruitment tools are specifically problematic in that context, as this thesis will show, mainly due to their training data and inherent problems such as a lack of transparency.

1.3. Literature Review and Gap in Literature

With regard to the topic of this thesis and the problem statement, several ‘groups’ of literature can (roughly) be distinguished. This section provides a compact literature review in the context of the key issues of this thesis.

For starters, a few notions are mentioned in the problem statement, mainly the notions of fairness, equality and discrimination. A considerable body of literature explains and defines these notions, but Chouldechova’s paper17 is especially relevant as it links these notions with prediction instruments such as the recruitment tools that are the

15 A. Alexandra, ’11 Ways to Reduce Hiring Bias’ (Harver, 2019) https://harver.com/blog/reduce-hiring-bias/ (accessed March 3, 2020)
16 Frida Polli, ‘Using AI to Eliminate Bias from Hiring’ (Harvard Business Review, October 2019) https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring (accessed March 25, 2020)
17 Chouldechova, A. "Fair prediction with disparate impact: A study of bias in
recidivism prediction instruments." Big data 5.2 (2017)

subject of this thesis. In the same vein, Popham argues in his paper18 that discrimination occurs when a predictive tool suffers from assessment bias. Throughout the literature, such notions appear to enjoy a common understanding.

Obviously, the topic of AI is relevant to this thesis as well. The notion of AI is tackled from a more legal perspective. The EU and other authoritative institutions have published varying literature on the topic of AI, such as white papers on what the “European approach” to AI should be (including a potential regulatory framework)19, guidelines on AI20, and literature containing, in some shape or form, a definition of AI.

Combining literature dealing with the more ethical aspects (such as the aforementioned notions of fairness and equality) with literature exploring the legal aspects of AI is key to this thesis. The legal aspect is also represented by exploring the GDPR and the literature on it (for instance, papers published by the Article 29 Working Party).

Literature explaining and defining fairness, equality, discrimination and AI is rich (also in the context of recruitment). However, the question whether the GDPR and existing/proposed AI guidelines can actually enhance – in their current state – the fairness and equality of AI-powered recruitment tools is not explored as extensively. Legislators and authoritative bodies from around the world draft rules, regulations, guidelines, whitepapers, etc. on the topic of AI, but why are such sources drafted the way they are (as will become clear in this thesis)? In other words, what is the common ground amongst the different guidelines and the GDPR, if any? And how, and to what extent, do the documents explored in this thesis potentially help in alleviating ethical concerns when using AI-powered recruitment tools? Such a legal-ethical connection appears to be scarcely explored in the existing literature.

18 W. James Popham, “Assessment Literacy for Educators in a Hurry”, ASCD (2018): 127
19 European Commission, “WHITE PAPER On Artificial Intelligence – a European approach to excellence and trust” (European Commission, 19 February 2020) https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
20 Chapters 3 and 4 will provide extensive exploration of such guidelines.

1.4. Main Research Question and Sub-Questions

Main research question

To what extent are AI-powered pre-employment assessment tools conflicting with the ethical notions of fairness and equality in recruitment, and how does compliance with the GDPR and AI Principles enhance this fairness and equality?

Sub-question 1
What are the ethical notions of fairness and equality in light of the area of recruitment and the selection process?

Sub-question 2
What potential ethical tensions and implications occur when using AI-powered tools in the field of recruitment, in light of fairness and equality?

Sub-question 3
To what extent does compliance with the GDPR and OECD principles result in the reduction of the mentioned ethical tensions?

1.5. Methodology and Methods

This thesis is mainly an interdisciplinary piece of research. Interdisciplinary legal research incorporates insights from non-legal disciplines as well. It is about combining disciplines, with the result that the research becomes about the law in action. Other notions that can play a role are, for example, ethics, socio-legal notions, economics, etc. Ultimately, legal systems and laws aim to regulate people’s behavior. With interdisciplinary research, one can truly provide insights as to whether that pursued aim is actually achieved. It is not necessarily about looking at the law itself (comparing it across jurisdictions, etc.), but about looking at it from a broader perspective. Does a legal norm contradict norms from non-legal disciplines? Is a legal norm effective in real life (i.e. ‘law in action’)? Again, insights from non-legal disciplines are used to assess whether a law and a phenomenon clash with each other – whether the law is effective, and whether other elements play a major role as well.21

It is already quite clear that this thesis’s research question focuses on an ethical element as well. The research question highlights a new technical advancement and asks whether the GDPR is effective in regulating that technology. To truly assess whether this is the case, a non-legal discipline (ethics) will be incorporated as well. This relationship between law, ethics, and AI-powered recruitment tools requires insights from more than just the black letter law (i.e. the GDPR).

Regarding methods, analyzing qualitative/descriptive data will be key: information from articles and papers, but also interpretations of the law (for example, by the Article 29 Working Party, whose guidelines give great insight into the GDPR provisions and their implementation). Studying such data will provide the necessary information regarding the relevant notions (e.g. ethics) and their implications. Scholarly opinions, doctrine, articles, etc. are all sources for this.

Exploring the AI-powered recruitment tools (for instance, the HireVue example) will shed light on the way they work. Literature will further explore the notions of ethics and certain possible tensions. Insights from scholars and official bodies regarding the GDPR will reveal its intended implementation. All in all, combining these will shed light on the extent to which AI-powered recruitment technologies clash with ethics and the GDPR provisions. Again, this will be done as interdisciplinary research: the effectiveness of the law in its context will be measured with the help of non-legal disciplines such as ethics.

Limitations and perspective


In this thesis, the main legal source to be assessed is the GDPR, alongside the OECD principles and FTC publications. There will be no discussion of national laws as such, since the GDPR encompasses the entirety of EU data processing law, while the OECD has a supranational reach as well. Moreover, the technology examined in this research

21 Wendy Schrama, ‘How to carry out interdisciplinary research’, Utrecht Law Review, Volume 7 Issue 1 (2011)

is limited to AI-powered recruitment only. Discussions about recruitment in general (fair recruitment, equal recruitment, etc.) will be kept to a minimum, included only where there are overlapping elements with AI-powered recruitment. Moreover, ethical norms such as fairness and equality will be discussed in light of this AI technology only; the sole purpose they serve is to highlight the extent of the conflict between these ethical notions and the AI technology. Regarding the data protection aspect, as mentioned above, the GDPR will be examined. More specifically, the processing of sensitive data by AI-powered tools, the clashes with the notion of purpose limitation, and the automated decision-making provisions will be discussed extensively, as these are the most relevant aspects with regard to AI-powered recruitment tools.

1.6. Roadmap of the argument

First, I will discuss the AI-powered recruitment tools and how they work by setting up
a common ground: almost all of these tools have certain modules (cognitive ability
tests, video interview modules, language tests, personality tests, gamified
assessments, etc.). There are certain benefits of using these tools, which will be
discussed as well.

Sub-question 1 will focus on what is expected from recruiters. The goal is to truly give meaning to notions such as fairness and equality in light of the area of recruitment. Literature will guide this descriptive part of the thesis.

To answer the second sub-question, a more critical approach will be used. The HireVue example will link sub-questions 1 and 2 perfectly, since it highlights exactly what can go wrong in such AI-powered recruitment tools. Furthermore, the Article 29 Working Party Guidelines will briefly be introduced, as these are relevant to explaining why Article 22 of the GDPR prohibits automated decision-making. Consequently, this will highlight the potential tensions of automated decision-making. Moreover, the potential processing of sensitive data, and the difficulties and tensions of processing data in big data environments, will be discussed as well.

For the third sub-question, the GDPR provisions on automated decision-making, as well as the OECD principles, will be dealt with in more detail. The requirements of the Article 29 Working Party are relevant here: what needs to be done in order to lawfully make an automated decision? Which OECD principles and GDPR principles seem to overlap, and thus should be adhered to in order to enhance fairness and equality? Linking back to the HireVue example, what were some of the complaints that resulted in the opinion that it committed unfair practices? Discussing these questions (and similar ones) will sketch the extent to which the abovementioned legislation and frameworks enhance certain ethical notions, and what companies should definitely avoid when using such tools.

Chapter 2
AI-powered recruitment and the notions of fairness and equality in recruiting

2.1. Introduction

It has been established above that this thesis aims to, amongst other things, highlight and discuss the extent to which AI-Tools conflict with the ethical notions of fairness and equality in recruitment. In order to achieve this aim, a common ground needs to be set. Consequently, a general discussion of the AI-Tools will come first, explaining their commonalities and how they relate to recruitment in general. Afterwards, the ethical notions of fairness and equality in the field of recruitment will be discussed. This descriptive chapter will, as a result, form the necessary basis for answering this thesis’s research question.

2.2. AI-Tools: what are they?

Before exploring the relationship between AI-Tools and the abovementioned notions of recruitment, it is relevant to explain what these AI-Tools entail (i.e. what their commonalities are, what they aim to achieve, what some examples are, etc.).

AI-Tools are software assets created by their producers, often implemented into a customer’s recruitment process. The candidate selection process is digitally transformed from merely sending a CV and motivation letter into having the candidate go through various assessments (such as culture-fit tests, personality tests, language tests, video interviews measuring emotions, etc.). The goal is to automate tasks that recruiters might deem repetitive. Many AI-Tools exist, and while they share similarities, their technological sophistication varies. In any case, the common ground of these AI-Tools is the desire to automate and digitalize the pre-employment assessment, eliminating the CV/motivation-letter method.

Another commonality of these tools is that they use AI in one way or another to make decisions regarding candidates. So far, the term AI has not been defined in this thesis. Defining the term is especially relevant now, since it will determine the scope of any future EU regulatory framework.22 According to the European High Level Expert Group, AI can be defined as software (and possibly also hardware) systems that are designed by humans and that, given a complex goal, “act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.”23 Said differently, the main elements of AI are “data” and “algorithms”: algorithms are trained with data to infer certain patterns, ultimately determining automatically the actions needed to achieve a specific goal. Linking this to recruitment tools: a manufacturer develops software with various algorithmic modules. One module might, for instance, measure a job applicant’s English proficiency. Training data is initially provided to the algorithm, followed by data from the job applicant (here, a sample of their voice, reading out a sentence). The algorithm is then able to make decisions regarding the proficiency of the job applicant.
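To make this “data + algorithms” pattern concrete, the following minimal Python sketch mimics such a proficiency module. All feature names, numbers and labels are invented for illustration and do not depict any real vendor’s product.

```python
# Minimal sketch of the "data + algorithms" pattern: train on labeled data,
# then decide automatically on a new applicant. Hypothetical features:
# [words_per_minute, pronunciation_score, grammar_errors].
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [140, 0.9, 1],
    [150, 0.8, 2],
    [90, 0.4, 9],
    [100, 0.5, 7],
]
y_train = ["proficient", "proficient", "not proficient", "not proficient"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# A new applicant reads a sentence aloud; the recording is reduced to the
# same features and the model takes the automated decision.
applicant = [[120, 0.7, 3]]
print(model.predict(applicant))
```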

Moving back to AI-Tools, a personality/psychology test is often included, as well as language proficiency or IQ tests. The option to include modules such as video interviewing is also commonly present. The impression given to candidates is that the recruitment process is a “gamified experience”, supposed to be fun to go through.

AI-Tools are part of the increasing trend of new computing platforms. Technological advancements have enabled vendors to open their infrastructure technologies to other companies.24 As a result, Software-as-a-Service (SaaS) tools have truly expanded worldwide, enabling vendors to license their software to anyone in the world. The aim of this thesis is not to discuss this development as such, nor its legal implications, but this global increase in cloud computing is in fact one of the characteristics of AI-Tools. SaaS tools such as AI-Tools are easy to access: the customer does not need

22 European Commission, “WHITE PAPER On Artificial Intelligence – a European approach to excellence and trust” (European Commission, 19 February 2020) https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf (accessed May 29, 2020), p. 16
23 High Level Expert Group, A definition of AI, p. 8.
24 Michael Cusumano (2010), Cloud Computing and SaaS as New Computing
Platforms, Communications of the ACM Volume 53, Issue 4.

to install any servers or build any software, since this is provided by the vendor. Cloud computing makes software attractive and easy to adopt, and it shapes many areas of companies of all types, including the field of recruitment.25 After all, when a company buys a license for an AI-Tool, it does not need to concern itself with building the AI-Tool or implementing it into its recruitment process. In this age of digitalization, tools such as AI-powered recruitment software have the potential to reach many customers. AI-Tools therefore have the potential to become the leading actor in all matters regarding the field of recruitment.

One concrete example of such an AI-Tool is the Harver Platform.26 Harver describes itself as a “full suite candidate selection platform designed to enable innovative companies around the world to hire better and faster.”27 The platform enables customers to create a unique selection process which includes interactive assessments and many other features used to make decisions regarding a candidate. AI algorithms in the platform assign a percentage to the job applicant: the higher the percentage, the more likely a fit between the applicant and the open vacancy. HireVue is another example of a recruitment tool.28 Like the Harver Platform, HireVue utilizes predictive assessments and AI technologies during the recruitment process. More specifically, HireVue is known for its sophisticated video interview module, which goes further than merely providing a platform to record a video interview: its AI technology is able to identify job applicants’ emotions. Beyond these examples, many other tools exist that differ in their level of technological sophistication. Tools such as these aim to eliminate the classical way of recruitment: using resumes/CVs.
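As a rough illustration of such a “fit percentage”, consider the sketch below. The module scores, weights and cut-off are pure assumptions for illustration and do not reflect Harver’s or HireVue’s actual scoring logic.

```python
# Hedged sketch: combining module scores into one "fit percentage" and
# letting a threshold decide who advances. All numbers are assumed.
scores = {"personality": 0.72, "cognitive": 0.65, "language": 0.90}
weights = {"personality": 0.40, "cognitive": 0.35, "language": 0.25}

fit = sum(weights[m] * scores[m] for m in scores)  # weighted average
print(f"fit score: {fit:.0%}")  # prints "fit score: 74%"

if fit >= 0.70:  # assumed cut-off
    print("candidate advances to the next stage")
else:
    print("candidate is rejected automatically")
```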

In the upcoming chapters, it will become clear how tools such as these can be problematic. As mentioned above, the level of sophistication varies per tool. HireVue, for instance, developed very advanced AI technologies which can actually recognize the emotions on candidates’ faces via its video interview module. While

25 Sumit Goyal (2014), Public vs Private vs Hybrid vs Community – Cloud Computing: A Critical Review, I.J. Computer Network and Information Security, 3, 20.
26 More information: https://www.harver.com
27 Ibid.
28 More information: https://www.hirevue.com

the Harver Platform does not utilize emotion recognition in its video interview module, its personality test could prove to produce biased results on which a decision is based. However, regardless of the sophistication of the AI-Tools, the overarching conclusion will always remain that using such AI technologies in recruitment can lead to undermining fairness and equality.

2.3. The ethical notions of fairness and equality in light of the area of
recruitment

Moving on from the AI-Powered Tools, this section discusses the ethical notions of fairness and equality in the context of recruitment and algorithmic prediction tools such as AI-Powered Tools. Exploring these notions is necessary to provide a common ground against which the AI-Powered Tools can be assessed.

The term ‘fairness’ can be classified as a social and ethical concept, often with a subjective touch.29 In relation to predictive instruments such as AI-Powered Tools, the fairness notion requires such tools to be free of predictive bias.30 Predictive bias, in turn, goes hand in hand with assessment bias. Assessment bias occurs when an assessment instrument (i.e. an AI-Powered Tool) unfairly penalizes a certain group because of the subjects’ gender, race, ethnicity, religion or other defining characteristics.31 Predictive bias occurs when an assessment instrument does not predict outcomes with similar accuracy across various groups of people.32 For example, two job applicants with the same merits should not receive different outcomes from the assessment instrument because of their different ethnic backgrounds. If such biases exist within an assessment instrument, one can deem it unfair. Such unfair instruments make way for inequality – for the purposes of this thesis, equality shall mean the state of being equal (i.e. having equal opportunities, regardless of race, ethnic background, religion, sexual
29 Chouldechova, A. "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments." Big data 5.2 (2017): 153-163
30 Ibid.
31 W. James Popham, “Assessment Literacy for Educators in a Hurry”, ASCD (2018):
127
32 Jennifer Skeem and Christopher Lowenkamp, “Risk, Race and Recidivism:
Predictive Bias and Disparate Impact.” University of California, Berkeley (2016)

orientation, etc.). As such, discrimination is prone to occur, which can be defined as
the unjust treatment of certain groups of people due to their inherent characteristics.
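These two forms of bias can be made measurable. The sketch below, using invented numbers, computes per-group selection rates (unequal rates point to assessment bias) and per-group predictive accuracy (unequal accuracy points to predictive bias) for a hypothetical assessment instrument.

```python
# Illustrative fairness check with invented data. Each record holds:
# (group, tool's decision, whether the candidate was actually qualified).
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 1),
]

stats = defaultdict(lambda: {"n": 0, "selected": 0, "correct": 0})
for group, decision, qualified in records:
    s = stats[group]
    s["n"] += 1
    s["selected"] += decision
    s["correct"] += int(decision == qualified)

for group, s in stats.items():
    print(group,
          "selection rate:", s["selected"] / s["n"],   # gap -> assessment bias
          "accuracy:", s["correct"] / s["n"])          # gap -> predictive bias
```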

Assessment instruments risk being biased when they have been given training data which embeds society’s structural biases – thus being unfair, increasing discrimination and not promoting equality.33 Concretely, this could mean that AI-Powered Tools favor men over women for a certain position that is dominated by men, as was the case with Amazon’s abovementioned assessment instrument. A related point of interest is that such tools are often protected by trade secrets, making it difficult to measure whether, and why, they are biased.34 Not to mention that a fully automated process reduces human involvement (which, arguably, could reduce human bias, but could also increase bias overall, since a biased algorithm operates without human intervention).

Turning back specifically to AI-Powered Tools, the high level of technological sophistication makes it difficult to understand the algorithm fully.35 This goes hand in hand with the fact that such algorithms are often protected by trade secrets. The lack of transparency regarding how such tools make decisions, and the overall complexity of these tools, make it difficult to reduce the potential for predictive and assessment bias.

While the importance of the ethical notions of fairness and equality is generally undisputed – also in the area of recruitment – citizens of even the most liberal and democratic countries might be faced with situations that undermine those notions. For instance, a study published in 2014 shows that Arabic-named Dutch citizens face discrimination much more often than Dutch-named applicants when applying for (the same) vacancies online.36 A total of 636 fictitious resumes were posted on two

33 Ifeoma Ajunwa, Sorelle A. Friedler, Suresh Venkatasubramanian, “Hiring by Algorithm: Predicting and Preventing Disparate Impact.” (2016).
34 Ibid.
35 Boyd, D, Levy, K., & Marwick, A. E. (2014). The networked nature of algorithmic
discrimination. In Data and discrimination: Collected essays (pp. 43–57).
Washington, DC: Open Technology Institute.
36 Lieselotte Blommaert, Marcel Coenders, Frank van Tubergen, Discrimination of Arabic-Named Applicants in the Netherlands: An Internet-Based Field Experiment

online resume databases, and strong discrimination occurred at the point where recruiters decided whether to view an applicant’s complete resume after examining a short profile.37 In fact, Dutch-named applicants appeared to be 60 percent more likely to receive a positive reaction than Arabic-named applicants.38

The study further shows how ethnic minorities in Europe are in a less favorable position in the labor market than the rest of the population. This conclusion is relevant, but more interesting for this thesis is the discrimination in recruitment. Ethnic minorities face unequal treatment when applying for a job compared to natives with comparable competencies.39 Many reasons have been identified for the occurrence of such discrimination, and the Dutch anti-discrimination laws have not always produced the desired outcomes.40 As long as there is human bias, discrimination in recruitment has the potential to occur.41

One conclusion that can be drawn is that unfairness and inequality (i.e. discrimination) in recruiting are widely frowned upon. The failure to eliminate discrimination in the field of recruitment is seen as one of many reasons why such an ethnic gap exists in the labor market.42 This ethnic gap can broaden further with ongoing discrimination in the field of recruitment, bringing a higher chance of unemployment.43 Consequently, (ethnic) minorities face more difficulties when trying to adapt to the majority, oftentimes also resulting in unemployment. Due to unemployment, individuals face many more issues when trying to satisfy their (basic) human needs.44 Despite technology being neutral, AI-Powered Tools have the potential to

Examining Different Phases in Online Recruitment Procedures, Social Forces, Volume 92, Issue 3, March 2014, Pages 957-982.
37 Ibid, p. 957.
38 Ibid.
39 National Research Council (2004), Measuring Racial Discrimination, edited by
Rebecca M. Blank, Marilyn Dabady, and Constance F. Citro. Committee on National
Statistics, Division of Behavioral and Social Sciences and Education.
40 Tetty Havinga, The effects and limits of anti-discrimination law in The
Netherlands, International Journal of the Sociology of Law 30 (2002), 75-90.
41 Ibid.
42 Agnieszka Kanas, Frank van Tubergen, and Tanja van der Lippe (2011), The Role of Social Contacts in the Employment Status of Immigrants: A Panel Study of Immigrants in Germany, International Sociology 26(1), 96.
43 Ibid.
44 Ibid.

amplify such discrimination. For instance, if society discriminates against Arabic-named applicants on a large scale, the training data of an algorithm has a higher chance of being less diverse. This could ultimately result in predictive bias by an AI-Powered Tool.

Discrimination in recruitment can occur willingly or unwillingly and is often linked to human bias.45 Quite interestingly, even those advocating fairness and equality have the potential to discriminate when conducting a pre-employment assessment, if one takes into consideration the discussion above regarding biased AI-Powered Tools.46 For instance, the idea that women are unable to perform as well as men has been ingrained into certain people’s minds.47 Similarly, others might have an antipathy towards effeminate men.48 Either humans (through human bias) or AI-Powered Tools (through biased training data) could reinforce these prejudices.

While this section focused on gender and ethnic minorities, a line can also be drawn to other minorities such as LGBT+ applicants, applicants with disabilities, applicants with a different religious background, etc. All risk facing the same discrimination when notions such as fairness and equality are not respected in the field of recruitment. In conclusion, one can readily argue that notions such as fairness and equality are vital in the field of recruitment, and that discrimination occurs when these notions are not adhered to. The starting point of this thesis, taking into consideration the discussion in this section, is that recruitment is truly fair and equal when applicants are not judged (and rejected) based on their inherent characteristics, but on skills and merit.

45 Devah Pager and Hana Shepherd (2008), The Sociology of Discrimination: Racial Discrimination in Employment, Housing, Credit, and Consumer Markets, Annual Review of Sociology 34, 182.
46 Ibid.
47 Joseph G. Altonji and Rebecca M. Blank (1999), Race and Gender in the Labor Market, Handbook of Labor Economics 3c, edited by Orley C. Ashenfelter and David Card, 3143.
48 Ibid.

Chapter 3
Tensions when using AI-powered tools in the field of recruitment

3.1. The HireVue example – what can go wrong?

After having given meaning to the ethical notions of fairness and equality in light of the area of recruitment, this chapter aims to highlight the potential ethical tensions and implications of using AI-powered tools in this field. One might believe that the suppliers of such software will generally try their best to ensure that their recruitment software complies with the necessary (data protection) laws. However, the following example illustrates how an AI-powered recruitment software can fall short in terms of compliance.

In November 2019, a recruiting/employment screening company called HireVue came under scrutiny after allegedly engaging in unfair and/or deceptive trade practices.49 EPIC is one of the tech industry’s renowned ‘digital’ watchdogs based in the US. The group has challenged tech giants and government agencies, including Facebook, Google, and the National Security Agency, through consumer complaints, agency filings, and federal lawsuits.50 EPIC filed a complaint with the Federal Trade Commission (FTC) alleging that HireVue has committed unfair and deceptive practices in violation of the FTC Act: job applicants’ qualifications are evaluated “based upon their appearance by means of an opaque, proprietary algorithm.”51

HireVue provides video-based assessments such as scenario-based simulations or


situational judgment questions with which its customers can screen potential job

49 Electronic Privacy Information Center (EPIC), EPIC Files Complaint with FTC
about Employment Screening Firm HireVue (November 6, 2019),
https://epic.org/2019/11/epic-files-complaint-with-ftc.html, accessed February 25,
2020.
50 More information regarding EPIC can be found on their website: www.epic.org.
51 Federal Trade Commission (FTC), Complaint and Request for Investigation,
Injunction, and Other Relief (November 6, 2019),
https://epic.org/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf, accessed
February 25, 2020.

candidates.52 The issue, however, is that HireVue “denies that it is engaged in facial recognition and has failed to show that its technique meets the minimal standards for AI-based decision-making set out in the OECD AI Principles or the recommended standards set out in the Universal Guidelines for AI.”53 Initially, the use case of this tool was simply to record answers to interview questions and to host real-time online interviews. Probably in a strategy to create more value and to stay ahead of the competition, HireVue added an AI layer to its tool that, as it claims, can predict the quality of the applicant by measuring, among other things, candidate expressions.54

3.2. The Complaint by EPIC – undermining principles and bias by AI

In the complaint filed by EPIC, it was argued that HireVue failed to comply with baseline standards for AI decision-making in this tool. Said differently, according to the complaint, HireVue does not screen applicants in a way that is fair, explainable or defensible. Regarding algorithms that utilize AI in the decision-making process, EPIC has a clear stance: the public has a right to know the factors that form the basis for decisions.55 Furthermore, EPIC considers “algorithmic transparency” crucial for defending human rights and democracy online.56 In HireVue’s case, this algorithmic transparency seemed to be lacking, making way for risks such as secret profiling and discrimination. After all, EPIC claims that without knowledge of the factors that form the basis for decisions, it is impossible to know whether companies engage in practices that are deceptive, discriminatory, or unethical when recruiting.57

52 HireVue, How to Prepare for Your HireVue Assessment (April 16, 2019),
https://www.hirevue.com/blog/how-to-prepare-for-your-hirevue-assessment,
accessed February 25, 2020.
53 Ibid, n. 51
54 More information regarding HireVue’s tool can be found on their website:
www.hirevue.com.
55 EPIC, Algorithmic Transparency: End Secret Profiling (not dated),
https://epic.org/algorithmic-transparency/ (accessed February 26, 2020).
56 Ibid.
57 Ibid.

As stated before, HireVue offers video-based and game-based pre-employment assessments of job candidates on behalf of its customers.58 It collects “tens of thousands of data points”59 from each video interview of a job candidate, including but not limited to a candidate’s “intonation”, “inflection”, and “emotions”.60 These and many other data are then fed into “predictive algorithms”61 that determine a job candidate’s “employability”.62 These algorithmic assessments reveal the cognitive ability, psychological traits, emotional intelligence, etc. of job candidates, on the basis of which a decision is made.

“Profoundly disturbing”63 is how outside experts describe HireVue’s use of AI to make decisions about job candidates. The experts highlight that this blend of “superficial measurements and arbitrary number-crunching that is not rooted in scientific fact”64 could penalize non-native speakers, those who are visibly nervous, or anyone else who does not fit the model for look and speech. This emotion-detecting facial recognition technology therefore conflicts with the notion of fairness when employability is based merely on the abovementioned factors, since social skills then influence the outcome more than the actual skills demanded by the job.

Moving back to EPIC’s complaint, one of the major grounds for the complaint is that
HireVue does not give candidates access to their algorithmic assessment scores, nor
access to the training data, factors, logic or techniques used to generate each
algorithmic assessment.65 This undermines basic AI principles of transparency
stipulated in the OECD AI Principles and the Universal Guidelines for AI (more on
upholding principles in the next chapter).

58 HireVue, Pre-Employment Assessments, www.hirevue.com/products/assessments


(accessed February 26, 2020)
59 Ibid, n. 4.
60 Ibid.
61 Ibid.
62 Ibid.
63 The Washington Post, A face-scanning algorithm increasingly decides whether
you deserve the job (November 6, 2019),
https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-
algorithm-increasingly-decides-whether-you-deserve-job/ (accessed February 26,
2020)
64 Ibid.
65 Ibid.

What is more, one can argue that such sophisticated hiring algorithms are prone to be biased by default, despite being marketed as recruiting tools that eliminate (human) biases in the hiring process – for instance because more sensitive data can be collected with more sophisticated technologies, which increases the potential for bias.66 More specifically, AI tools can reinforce biases such as gender bias: Amazon’s AI recruiting tool seemed to penalize resumes that included the word “women’s” and the names of all-women’s colleges.67 Similarly, eye movement tracking captured in video assessments could discriminate against candidates with neurological differences: people with Autism Spectrum Disorder have a tendency to look at people’s mouths rather than making eye contact.68 In the same vein, it is also argued that facial recognition software is often racially biased: one study showed that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males69, while another study showed that black people’s faces are read as angrier than white people’s faces.70

3.3. “Unfair” decisions and remarks regarding supplier accountability

AI-powered recruitment tools, when violating certain principles, make way for unfair decisions. Transparency is key for AI-powered recruitment tools, and candidates should be able to evaluate or understand the algorithmic assessments (where applicable) according to the GDPR (more on this in the next chapter). Moreover, the arbitrariness of

66 Miranda Bogen, All the Ways Hiring Algorithms Can Introduce Bias, Harvard
Business Review (May 6, 2019), https://hbr.org/2019/05/all-the-ways-hiring-
algorithms-can-introduce-bias (accessed February 29, 2020)
67 Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against
women, Reuters (October 9, 2018), https://www.reuters.com/article/us-amazon-
com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-
bias-against-women-idUSKCN1MK08G (accessed March 12, 2020)
68 Corinne Green and Kun Guo, Factors contributing to individual differences in
facial expression categorisation (2018).
69 Joy Buolamwini, Gender Shades: Intersectional Phenotypic and Demographic
Evaluation of Face Datasets and Gender Classifiers, MIT (2017),
https://www.media.mit.edu/publications/full-gender-shades-thesis-17/.
70 Lauren Rhue, Racial Influence on Automated Perceptions of Emotions, Wake
Forest University (2018),
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3281765 (accessed March 12,
2020)

25
algorithms can make it difficult for the assessment to be meaningfully challenged by
humans, which is another requirement of the GDPR.

Moving back to the HireVue case once more, it has been argued that its use of biometric data and secret algorithms is unfair since it “causes or is likely to cause substantial injury to applicants which is not reasonably avoidable by themselves […].”71 This is on top of the abovementioned points, which potentially facilitate discriminatory decisions, increasing inequality within the recruitment sector.

Having answered sub-question one in chapter 2 and illustrated sub-question two with the example in chapter 3, the upcoming chapter will explore how compliance with the relevant GDPR clauses and OECD AI Principles might reduce the abovementioned issues.

71 Electronic Privacy Information Center (EPIC), EPIC Files Complaint with FTC about Employment Screening Firm HireVue (November 6, 2019), https://epic.org/2019/11/epic-files-complaint-with-ftc.html, accessed February 25, 2020.

Chapter 4
GDPR & AI Principles – will adherence and compliance to them lead to enhanced
fairness of AI Tools?

4.1. Introduction

Implementing AI Tools throughout the pre-employment assessment process seems generally to provide efficiency gains for (inhouse) recruiters, as discussed above in this thesis. It has also been established what the notions of fairness and equality mean in light of the area of recruitment and the selection process. AI Tools, depending on their characteristics and the sophistication of their technology, can create tensions with these notions when hiring. This chapter aims to go further by bringing legislation and (AI) guidelines into the discussion. The goal is to discuss the GDPR, the OECD principles and certain AI guidelines in relation to AI Tools: what general principles, rules and data-subject rights do these legal documents have in common? Consequently, the common requirements of the abovementioned legal sources with regard to AI Tools will be highlighted.

4.2. The General Data Protection Regulation: sensitive data, automated


decision making, data subject rights

4.2.1. Automated Decision Making clashing with AI Tools


Under EU law, automated data processing concerns operations performed on “personal data wholly or partly by automated means”. A similar definition is given in the Council of Europe’s Modernised Convention 108. In practical terms, this means that any personal data processing through automated means with the help of, for example, a personal computer, a mobile device, or a router, is covered by both EU and CoE data protection rules.72

Automated individual decision-making is, in principle, prohibited under the GDPR. The prohibition is formulated in Article 22(1) GDPR as follows:

72 European Union Agency for Fundamental Rights and Council of Europe, Handbook on European data protection law, 2018 edition, p. 99.

‘The data subject shall have the right not to be subject to a decision based
solely on automated processing, including profiling, which produces
legal effects concerning him or her or similarly significantly affects
him or her.’

Using an example, the relationship between this prohibition and AI Tools will be
highlighted. Harver’s73 AI Tool contains a speech analysis module, which ultimately
uses AI algorithms to scan a brief recording of a candidate’s speech in order to
measure the candidate’s level of proficiency in English. The danger of speech
modules such as these is that they could fall within the scope of automated decision
making. Important elements in Article 22(1) GDPR are ‘solely’, ‘automated
processing’, ‘legal effects’, and ‘similarly significantly affects him or her’.
We can conclude that automated decisions (i.e. decisions without human intervention)
based solely on speech analysis, like Harver’s speech module, fall within the
meaning of this prohibition. Lacking human intervention, Harver’s speech module is
wholly automated and its results significantly affect applicants.74 ‘Similarly significant’
effects need not necessarily be legal ones – the Article 29 Working Party suggests that
the threshold is the significance of the decision’s impact on the data subject – so to
qualify, the processing must be ‘more than trivial, […] the decision must have the
potential to significantly influence the circumstances, behaviour, or choices of the
individuals concerned.’75 Thus, a rejection based on Harver’s speech module
constitutes automated individual decision making under the GDPR and is, in
principle, prohibited. The GDPR itself confirms this in recital 71, which gives ‘e-
recruiting practices without any human intervention’ as an example of prohibited
automated individual decision making.
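
To make the ‘solely automated’ element concrete, consider the following minimal
sketch in Python. It is purely illustrative: the function names, the stub score
and the threshold are invented here and do not describe Harver’s actual
implementation. The structural point is that between the recording and the
rejection there is no human step, which is precisely what brings such a module
within Article 22(1) GDPR.

def score_english_proficiency(audio_bytes: bytes) -> float:
    """Stand-in for a proprietary speech model; returns a score in [0, 1]."""
    # Dummy value so the sketch runs end to end; a real module would run
    # the recording through a trained speech-analysis model here.
    return 0.42


def automated_screening_decision(audio_bytes: bytes, threshold: float = 0.7) -> str:
    """Decide on an applicant based solely on the speech score."""
    score = score_english_proficiency(audio_bytes)
    # No human intervenes between input and outcome: the decision is
    # "based solely on automated processing" in the sense of Article 22(1).
    return "proceed" if score >= threshold else "reject"


print(automated_screening_decision(b"applicant-recording"))  # -> "reject"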

73 See the “Product” page of this website: https://harver.com/


74 Employment is vital for providing oneself with basic necessities. Denying someone
employment affects their life in many ways: integrating into society and
fulfilling basic needs become much more difficult. The AI Tool (more specifically
here the speech module) is the deciding factor in whether a candidate is fit for the
role, meaning that it effectively produces results that significantly affect applicants.
This is also why the GDPR’s prohibition of automated decision-making contains the
human intervention exception.
75 Article 29 Working Party Guidelines on Automated Decision-Making and Profiling.

Taking a closer look at Article 22 GDPR, one can see that it provides for exceptions to
the prohibition in its paragraphs 2 and 4. The prohibition in paragraph 1 does not apply if
the decision is based on the data subject’s explicit consent. Once an applicant provides
the hiring company (i.e. the data controller) with explicit consent prior to applying
through the AI Tool, such a speech module can be used legally. Even if speech qualifies
as biometric data under the GDPR, Article 22(4) provides for explicit consent to be a
legal ground for data processing. It will appear in subsection 4.2.2. below that
obtaining explicit consent is not always easily achieved. Important to note, however,
is that additional suitable measures must be taken to safeguard the data subject’s
rights and freedoms and legitimate interests. Furthermore, job applicants should have
the right to obtain human intervention on the part of the controller (i.e. the hiring
companies, so not necessarily the AI Tool producers).
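
What such a human-intervention safeguard could look like on the controller’s side
is sketched below, again purely as an illustration: the review queue and the
threshold are invented names, not part of any existing product. The design point
is only that an adverse machine outcome is never final, but is routed to a
recruiter at the hiring company for the last determination.

from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    """Minimal stand-in for a recruiter's review inbox."""
    pending: list = field(default_factory=list)

    def submit(self, applicant_id: str, machine_outcome: str, score: float) -> None:
        self.pending.append((applicant_id, machine_outcome, score))


def decide_with_human_intervention(applicant_id: str, score: float,
                                   queue: ReviewQueue,
                                   threshold: float = 0.7) -> str:
    machine_outcome = "proceed" if score >= threshold else "reject"
    if machine_outcome == "reject":
        # The adverse outcome is suspended until a human at the hiring
        # company (the controller) has looked at it, as the Article 22
        # safeguards envisage.
        queue.submit(applicant_id, machine_outcome, score)
        return "pending human review"
    return machine_outcome


queue = ReviewQueue()
print(decide_with_human_intervention("applicant-001", 0.42, queue))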

4.2.2. (Explicit) Consent – a brief discussion


The concept of consent is one of the six legal grounds for processing personal data.
Article 4(11) of the GDPR describes the elements of valid consent. It shall be freely
given, specific, informed, and unambiguous. The freely given element is arguably the
most interesting element in the context of AI Tool recruitment. The “free” element
implies that data subjects should have a real choice and control.76
Consent is therefore not deemed valid if the data subject feels compelled to consent,
or if the data subject cannot refuse consent without detriment.77 Relevant here is the
extent of an imbalance of power. In their guidelines on consent, the Article 29 Working
Party explicitly mentions that an imbalance of power occurs in the employment
context (among other contexts).78 The dependency resulting from the
employer/employee relationship means that denying consent cannot occur without
detriment. What is more, the Article 29 Working Party finds it problematic when
personal data of future employees are processed on the basis of consent. When reading
these guidelines, one might at first glance believe that consent required from
applicants via AI Tools is never valid, since this falls within the employment context.
The Article 29 Working Party is arguably unclear with its explanation: does the
mentioned imbalance of power in the employment context also extend to the context

76 Article 29 Working Party Guidelines on Consent.


77 Ibid, p. 6.
78 Ibid, p. 7.

of AI Tools? Perhaps this is why AI Tool manufacturers often explicitly mention that
their software is created for the “pre-employment assessment” stage, highlighting that
there exists no employee/employer relationship (yet) and therefore also no imbalance
of power.79

Moving further, explicit consent is an exception to the prohibitions on processing
sensitive data and on automated decision-making (Articles 9 and 22 GDPR respectively).
Importantly, explicit consent is not achieved as easily as regular consent. Explicit
consent is necessary in situations where serious data protection risks emerge. 80 While
a statement or clear affirmative action is necessary for regular consent, explicit
consent requires that the applicant expressly confirms consent in a written
statement.81 In the AI Tool context, this explicit consent can be obtained by exposing
applicants to an explicit consent screen before any data is processed. The data
controller, which is the hiring company in these cases, must be able to demonstrate
that explicit consent was obtained.
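
The sketch below illustrates both points under the assumption of a simple web
flow; the statement text and the in-memory store are invented for illustration.
Processing is blocked until the applicant has expressly confirmed a written
consent statement, and a timestamped record is kept so that the controller can
later demonstrate that explicit consent was obtained.

from datetime import datetime, timezone

# Hypothetical in-memory store; a real controller would persist this record.
consent_log: dict = {}

CONSENT_TEXT = ("I explicitly consent to the processing of my voice recording "
                "(biometric data) to assess my English proficiency.")


def record_explicit_consent(applicant_id: str, confirmed_statement: str) -> bool:
    """Store a demonstrable consent record, but only on an express written
    confirmation (a pre-ticked box or mere continuation would not suffice)."""
    if confirmed_statement != CONSENT_TEXT:
        return False
    consent_log[applicant_id] = {
        "statement": confirmed_statement,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return True


def may_process_voice(applicant_id: str) -> bool:
    """Gate any biometric processing on an existing consent record."""
    return applicant_id in consent_log


record_explicit_consent("applicant-001", CONSENT_TEXT)
print(may_process_voice("applicant-001"))  # -> True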

4.2.3. Sensitive/Special and Biometric Data


Under the GDPR, there are special categories of personal data which, by their nature,
may pose a risk to the data subjects when processed and need enhanced protection.
Such data are subject to a prohibition principle and there is a limited number of
conditions under which such processing is lawful.82 The GDPR defines special
categories of data as follows in Article 9:

“[...] personal data revealing racial or ethnic origin, political opinions,


religious or philosophical beliefs, or trade union membership, and the
processing of genetic data, biometric data for the purpose of uniquely
identifying a natural person, data concerning health or data concerning a
natural person’s sex life or sexual orientation shall be prohibited.”

79 For example, the first sentence one reads when visiting the following website is
“pre-employment assessment software for volume hiring” https://harver.com/
80 Ibid.
81 Ibid.
82 Publications Office of the European Union, Handbook on European data
protection law, 2018 edition, p. 96.

Biometric data is included in the special categories of personal data listed under Article
9 GDPR. Due to the effects of that article, the processing of such special categories is
prohibited, unless one of the exceptions under Article 9(2) applies. The only legal basis
that hiring companies could rely on in order to process biometric data would be the
data subject’s explicit consent. Once data subjects (read: applicants) have granted the
hiring company their explicit consent, the prohibition on processing biometric data is
lifted, and the personal data can be processed lawfully.

Continuing with the example given above, voice recordings fall within the GDPR’s
meaning of ‘biometric data’, which—in turn—is a special category of personal data of
which the processing is in principle prohibited. The GDPR defines biometric data as
‘personal data resulting from specific technical processing relating to the physical,
physiological or behavioral characteristics of a natural person, which allow or confirm
the unique identification of that natural person, such as facial images or dactyloscopic
data.’83 Voice recordings are not explicitly listed in this definition, but that does not
mean that they do not qualify as such.

Both the UK and the Netherlands’ data protection authorities qualify voice recordings
as biometric data. In January 2017, the British tax authority (HMRC) adopted a voice
authentication system which asked callers to some of its helplines to record their voice
as their password. However, callers were not given further information or advised that
they did not have to sign up to the service. HMRC thus lacked explicit consent from its
customers and was ordered by the ICO to delete any data it continued to hold without
consent.84 In
its enforcement notice, the ICO qualified voice recordings as biometric data under the
GDPR. A similar case has not (yet) happened in the Netherlands, but the Dutch
authority has stated on its website that it considers voice recordings as biometric
data.85

83 Article 4(14) GDPR.


84 ICO Enforcement Notice to HMRC, https://ico.org.uk/media/action-weve-
taken/enforcement-notices/2614924/hmrc-en-201905.pdf (accessed March 29,
2020)
85 This information can be found on the website of the Dutch Data Protection Authority (AP):
https://autoriteitpersoonsgegevens.nl/nl/onderwerpen/identificatie/biometrie
(accessed May 3, 2020)

This means that voice recordings qualify as biometric data under the GDPR and are
therefore a special category of data of which the processing is, in principle,
prohibited. The only way for hiring companies to make use of AI Tools that include
voice analysis modules in the EU/EEA is to acquire data subjects’ (read: applicants’)
explicit consent prior to their voices being recorded.

Similarly, AI Tools containing video or image modules process sensitive personal data
when processing videos, photos, etc. of candidates. After all, such material is personal
data which could reveal much more sensitive information about an individual.
According to the UK’s data protection authority, sensitive personal data “could create
more significant risks to a person’s fundamental rights and freedoms. For example, by
putting them at risk of unlawful discrimination.”86 While this shows that enhanced
protection is needed with regards to special categories of personal data, it also
highlights that technological advancement is not always desirable: the more
sophisticated and advanced an AI Tool is (i.e. implementing novel solutions, such as
voice analysis), the higher the possibility of processing sensitive personal data.

4.3. OECD Principles on AI and the Universal Guidelines for AI

4.3.1. OECD Principles on AI


The OECD Recommendation is not legally binding and was adopted by the OECD
Council at Ministerial level on May 22, 2019. The Recommendation is the first
intergovernmental standard on AI and aims to increase innovation and trust in AI by
promoting the responsible and trustworthy use of AI while ensuring respect for human
rights and democratic values. The Recommendation provides minimum standards for
AI-based decision making.

The Recommendation centres around five values-based principles for responsible and
trustworthy use of AI and calls on AI actors to promote and implement them. These

86 This information can be found on the website of the UK Data Protection Authority (ICO):
https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-
data-protection-regulation-gdpr/lawful-basis-for-processing/special-category-data/
(accessed May 3, 2020)

principles are elaborated on in Section 1 of the Recommendation. Section 2 of the
Recommendation provides policy makers with additional principles to take into
account while implementing national policies and international co-operation. All the
principles set out in Section 1 of the OECD Recommendation are relevant to AI Tools
and their producers/buyers in one way or another. They are deemed relevant for
reducing predictive and assessment bias and are formulated as follows (an illustrative
sketch of the transparency principle follows at the end of this subsection):

Principle of inclusive growth, sustainable development and well-being


Stakeholders should proactively make efforts to enable responsible and trustworthy
AI which will provide beneficial outcomes for people and the planet, such as
augmented human capabilities and enhancing creativity, advancing inclusion of
underrepresented populations, reducing economic, social, gender and other
inequalities, and protecting natural environments, thus invigorating inclusive growth,
sustainable development and well-being.

Principle of human-centred values and fairness


a) AI actors should respect the rule of law, human rights and democratic values,
throughout the AI system lifecycle. These include freedom, dignity and
autonomy, privacy and data protection, non-discrimination and equality,
diversity, fairness, social justice, and internationally recognised labour rights.
b) To this end, AI actors should implement mechanisms and safeguards, such as
capacity for human determination, that are appropriate to the context and
consistent with the state of art.

Principle of transparency and explainability


AI Actors should commit to transparency and responsible disclosure regarding AI
systems. To this end, they should provide meaningful information, appropriate to the
context, and consistent with the state of art:
i. to foster a general understanding of AI systems,
ii. to make stakeholders aware of their interactions with AI systems, including
in the workplace,
iii. to enable those affected by an AI system to understand the outcome, and,
iv. to enable those adversely affected by an AI system to challenge its outcome
based on plain and easy-to-understand information on the factors, and the
logic that served as the basis for the prediction, recommendation or
decision.

Principle of robustness, security and safety


a) AI systems should be robust, secure and safe throughout their entire lifecycle
so that, in conditions of normal use, foreseeable use or misuse, or other adverse
conditions, they function appropriately and do not pose unreasonable safety
risk.
b) To this end, AI actors should ensure traceability, including in relation to
datasets, processes and decisions made during the AI system lifecycle, to enable
analysis of the AI system’s outcomes and responses to inquiry, appropriate to
the context and consistent with the state of art.
c) AI actors should, based on their roles, the context, and their ability to act, apply
a systematic risk management approach to each phase of the AI system lifecycle
on a continuous basis to address risks related to AI systems, including privacy,
digital safety and bias.

Principle of accountability
AI actors should be accountable for the proper functioning of AI systems and for the
respect of the above principles, based on their roles, the context, and consistent with
the state of art.
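
By way of illustration of the transparency and explainability principle (in
particular item iv above), an AI Tool could return the factors and weights behind
a rejection in plain language. The factor names, weights and pass mark in the
following sketch are invented; it merely shows the kind of ‘plain and
easy-to-understand information on the factors and the logic’ that the principle
asks for.

def explain_rejection(factor_scores: dict, weights: dict, pass_mark: float) -> str:
    """Produce a plain-language account of why an automated outcome occurred."""
    contributions = {f: factor_scores[f] * weights[f] for f in factor_scores}
    total = sum(contributions.values())
    lines = [
        f"Overall score {total:.2f} was below the pass mark {pass_mark:.2f}.",
        "Factors that contributed to the outcome:",
    ]
    # List the weakest contributions first, since they drove the rejection.
    for factor, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        lines.append(f"  - {factor}: contributed {value:.2f}")
    return "\n".join(lines)


print(explain_rejection(
    factor_scores={"vocabulary": 0.5, "fluency": 0.4, "pronunciation": 0.6},
    weights={"vocabulary": 0.4, "fluency": 0.3, "pronunciation": 0.3},
    pass_mark=0.65,
))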

4.3.2. The Universal Guidelines for AI


The Universal Guidelines for AI (UGAI), a framework for AI governance based on the
protection of human rights, were set out at the 2018 meeting of the International
Conference of Data Protection and Privacy Commissioners in Brussels, Belgium. The
rise of AI decision-making implicates fundamental rights of fairness, accountability,
and transparency. The UGAI were designed to inform and improve the design and use
of AI, to maximise the benefits of AI, to minimise the risks, and to ensure the protection
of human rights. The Guidelines have been endorsed by 292 experts and 64
organisations representing over 30 countries, an important step in acknowledging the
potentially negative consequences the use of AI technology can cause.87 The guidelines are
relevant to AI Tool producers, since the document states that “the primary
responsibility for AI systems must reside with those institutions that fund, develop,
and deploy these systems.”

The document consists of 12 Guidelines, which are all relevant to AI Tool producers
and users, and are described as follows (a brief bias-audit sketch tied to the fairness
obligation follows the list):88

1. Right to transparency
All individuals have the right to know the basis of an AI decision that concerns them.
This includes access to the factors, the logic, and techniques that produced the
outcome.

2. Right to human determination


All individuals have the right to a final determination made by a person.

3. Identification obligation
The institution responsible for an AI system must be made known to the public.

4. Fairness obligation
Institutions must ensure that AI systems do not reflect unfair bias or make
impermissible discriminatory decisions.

5. Assessment and accountability obligation


An AI system should be deployed only after an adequate evaluation of its purpose and
objectives, its benefits, as well as its risks. Institutions must be responsible for
decisions made by an AI system.

87 This website shows the countries that have endorsed the guidelines:
https://thepublicvoice.org/AI-universal-guidelines/endorsement/ (accessed June 2,
2020)
88 The Public Voice, ‘Universal Guidelines for Artificial Intelligence’ (The Public
Voice, 23 October 2018) https://thepublicvoice.org/ai-universal-guidelines/
(accessed June 2, 2020)

6. Accuracy, reliability, and validity obligations
Institutions must ensure the accuracy, reliability and validity of decisions.

7. Data quality obligation


Institutions must establish data provenance, and assure quality and relevance for the
data input into algorithms.

8. Public safety obligation


Institutions must assess the public safety risks that arise from the deployment of AI
systems that direct or control physical devices, and implement safety controls.

9. Cybersecurity obligation
Institutions must secure AI systems against cybersecurity threats.

10. Prohibition on secret profiling


No institution shall establish or maintain a secret profiling system.

11. Prohibition on unitary scoring


No national government shall establish or maintain a general-purpose score on its
citizens and residents.

12. Termination obligation


An institution that has established an AI system has an affirmative obligation to
terminate the system if human control of the system is no longer possible.
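
As announced above, the following minimal sketch illustrates the kind of check the
fairness obligation (guideline 4) calls for. It applies the ‘four-fifths rule’, a
rule of thumb taken from US employment-discrimination practice rather than a
requirement of the UGAI themselves, to a tool’s selection rates per demographic
group; the group labels and figures are invented.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps a group label to (number passed, number assessed)."""
    return {group: passed / assessed
            for group, (passed, assessed) in outcomes.items()}


def four_fifths_check(outcomes: dict) -> bool:
    """Flag possible disparate impact: every group's selection rate must be
    at least 80% of the most favoured group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(rate / highest >= 0.8 for rate in rates.values())


# Group B passes at 0.30 versus group A at 0.50, a ratio of 0.6 < 0.8,
# so the check fails and the tool's outcomes warrant review.
print(four_fifths_check({"A": (50, 100), "B": (30, 100)}))  # -> False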

4.4. The Common Ground

In its white paper on artificial intelligence published on 19 February 2020,89 the
European Commission (“the Commission”) set out some policy options with regards
to AI. So far in this chapter, the relevant laws and guidelines have been highlighted.

89 European Commission, “WHITE PAPER On Artificial Intelligence – a European
approach to excellence and trust” (European Commission, 19 February 2020)
https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-
intelligence-feb2020_en.pdf (accessed May 29, 2020)

Also taking into consideration the points made by the Commission in their white
paper, one can draw conclusions with regards to common themes amongst all the
documents discussed above.

Confirming the point made above about how sophisticated technological
advancements do not always lead to positive situations for data subjects, the
Commission mentions that improvements in AI are not always risk-free.90 The
Commission highlights that there is no sufficiently or adequately adapted regulatory
framework with regards to AI and argues that specifically designed legislation should
exist. The Commission then describes the risks for individuals with regards to their
fundamental rights.

The first risk is the opacity (lack of transparency) of AI, which is partly due to AI
developers wishing to protect their proprietary algorithms.91 Indeed, AI Tools and their modules are
often the result of long-time investments and hard work. From a business standpoint,
AI Tool producers might reasonably feel that their innovative ideas/products should
be safeguarded. This, however, clashes with the notion of transparency. Secondly, the
Commission mentions that the Big Data element of AI (i.e., processing
massive/excessive amounts of personal data) clashes with data protection law
principles such as proportionality and purpose limitation. 92 Lastly, the lack of human
intervention/supervision is a point of risk according to the Commission.93 In fact, the
Commission argues that any AI system should contain human intervention,
irrespective of whether personal data is processed or not.

It appears that a common ground does exist with regards to the legal framework of AI
Tools. Great importance is given to the notions of transparency and human
intervention. Similarly, bias is not tolerated and data quality/reliability is mandatory.
Indeed, upholding any of the guidelines or the GDPR would, for instance, result in
avoiding a “HireVue situation” (Chapter 3). This is due to the right of transparency
which is a requirement under the guidelines and the GDPR. Data subjects have the

90 Ibid.
91 Ibid

93 Ibid.

right to know the basis of an AI decision that concerns them, which includes the
factors, logic and techniques that produced the decision. One of the main issues
highlighted in the HireVue case was the lack of transparency. In fact, the complaint
against HireVue explicitly stated that “algorithmic transparency” is crucial when
defending human rights and democracy online. It appears, then, that it is no
coincidence that the principle of transparency is a common ground amongst the
guidelines and the GDPR – it truly is an essential principle.

One can conclude that the GDPR already incorporates a large amount of what has been
mentioned in the guidelines and in the Commission’s white paper. Automated decision
making and the processing of sensitive data are both prohibited under the GDPR
(subject to exceptions), and the data subject rights provide for the necessary
transparency and bias-avoidance.

However, it is important to note that the GDPR encompasses a very broad area: it was
not specifically drafted for AI technology. AI technologies such as AI Tools are much
more complex and novel than earlier technologies. While, again, the discussions
above showcase that the GDPR is quite protective, it would be problematic to assume
that the end goal has been reached.

While the GDPR has a great focus on data subjects, the OECD Recommendation
focuses on the AI technology itself, for instance by way of the sustainable development
principle. The sustainable development principle highlights how AI technologies
should prove to be beneficial for society. Such a perspective does not exist within
the data-subject-centred GDPR. After all, the GDPR is primarily focused on enhancing
the rights of data subjects and is not necessarily focused on a societal level. Similarly,
fairness appears in the GDPR only as a general processing principle (Article 5(1)(a))
and is not elaborated, while it is a substantive requirement under the OECD
Principles and the UGAI. Effectively, according to the OECD Principles and the UGAI,
fairness is achieved when AI does not reflect unfair bias or make impermissible
discriminatory decisions. Additionally, human determination is required for fairness.
The GDPR effectively provides for this substantive notion of fairness as well, albeit
without spelling it out: the requirement for human intervention, for instance, is
codified in the GDPR.

In conclusion, for now, it appears that a common ground exists between the GDPR and
the AI principles. The principle of transparency and the requirement of human
determination can therefore arguably be deemed the most important.

Chapter 5. Conclusion

In the OECD Recommendation, as well as the UGAI, the principles of fairness and
equality are mentioned explicitly and in greater detail. This is not the case with the
GDPR. While AI-powered pre-employment assessment tools do not conflict with
notions of fairness and equality by default, the level of sophistication of these AI Tools
could result in undermining fairness and equality. Training the AI Tools with biased
data, using highly complex emotion recognition tools, assessing the personality of
applicants and collecting biometric data of applicants are high-risk activities. Having
the AI Tool make automated decisions without human intervention is also deemed
unfair under the OECD Recommendation. Similarly, the complexity of the AI Tools
makes it difficult to explain to applicants how a decision was made by the AI Tool,
causing a lack of transparency. The more sophisticated the AI Tool is, the more prone
it is to be biased and therefore unfair, which could enhance societal inequalities and
discrimination.

Furthermore, one can conclude that adherence to the GDPR, the OECD
Recommendation and the UGAI enhances fairness and equality. Automated decision
making and the processing of sensitive data are both prohibited (with
exceptions) in the GDPR, while the OECD Recommendation and UGAI explicitly
require fairness with regards to AI technologies and data subjects.

However, arguably, technological development (e.g. developing emotion recognition
assessment modules) should not be hindered extensively. For instance, the GDPR
prohibits automated decision making by default, even if it were to provide positive
consequences to the applicant. This is due to the data subject centred nature of the
GDPR, and the fact that something is not necessarily bias-free (and thus fair) if it
provides positive consequences. Instead, in this recruitment context, the main point
of interest should be upholding fairness and equality. Having an AI Tool make
automated decisions does not necessarily mean that fairness and equality are
undermined. To this extent, the OECD Recommendation and the UGAI arguably
stipulate a much more compatible path, where principles of sustainable development
and inclusive growth require that AI should benefit people who, for instance, are
underrepresented.

The main research question of this thesis was as follows:

To what extent are AI-powered pre-employment assessment tools conflicting
with ethical notions of fairness and equality in recruitment, and how does
compliance with the GDPR and AI Principles enhance this fairness and
equality?

The more sophisticated an AI-Powered Tool is, the more it is prone to conflict with
ethical notions of fairness and equality. An AI-Powered Tool can be deemed unfair if
it is not free from predictive and assessment bias, ultimately leading to inequality
within society (as discussed in Chapter 2). AI-powered pre-employment
assessment tools further conflict with ethical notions of fairness and equality in
recruitment when they are trained with biased data and when transparency is lacking
(i.e. data subjects have no knowledge as to how decisions about them are made). It is
not necessarily the AI-Powered Tools “as is” that result in unfair situations. The
potential of training data reinforcing society’s biases, the lack of transparency and the
lack of human intervention all play a role in the notions of fairness and equality being
affected negatively in this context. Linking back to the HireVue example once more,
the rights group EPIC had a clear stance: the public must have the right to know the
factors that provide the basis for decisions. In fact, EPIC considers algorithmic
transparency crucial when defending human rights and democracy online. The lack
of such transparency opened the door to risks such as secret profiling and
discrimination, the rights group claimed.

It seems no coincidence, then, that a few requirements and points recur in
the GDPR, the AI principles, and the complaint filed by EPIC. Full adherence to the
human intervention principle and the transparency principle, while recognizing the
dangers of biased training data, will greatly enhance the notions of fairness and

equality in the context of recruitment with such AI-Powered Tools. Moreover, the
strict requirements of the GDPR regarding the processing of sensitive data put the data
subject in a stronger position. Yet, while adherence to the common points most
certainly enhances this fairness and equality, it clashes with the desire of
AI-Powered Tool manufacturers to truly deliver the fully automated experience: this
cannot happen if human intervention is required, and not upholding trade secrets
might lessen the incentive to manufacture such tools in the first place.

Bibliography

Literature
• Boyd, D, Levy, K., & Marwick, A. E. (2014). The networked nature of
algorithmic discrimination. In Data and discrimination: Collected essays (pp.
43–57). Washington, DC: Open Technology Institute.
• Chouldechova, A. "Fair prediction with disparate impact: A study of bias in
recidivism prediction instruments." Big data 5.2 (2017).
• Green C. and Kun Guo, Factors contributing to individual differences in facial
expression categorisation (2018).
• Ifeoma Ajunwa, Sorelle A. Friedler, Suresh Venkatasubramanian, “Hiring by
Algorithm: Predicting and Preventing Disparate Impact.” (2016).
• Joseph G., Rebecca M. Blank (1999), Race and Gender in the Labor Market,
Handbook of Labor Economics 3c, edited by Orley C. Ashenfelter and David
Card.
• Kanas A, Frank van Tubergen, and Tanja van der Lippe (2011), The Role of
Social Contacts in the Employment Status of Immigrants: A Panel Study of
Immigrants in Germany, International Sociology 26(1).
• Lauren Rhue, Racial Influence on Automated Perceptions of Emotions, Wake
Forest University (2018).
• Lieselotte Blommaert, Marcel Coenders, Frank van Tubergen Discrimination
of Arabic-Named Applicants in the Netherlands: An Internet-Based Field
Experiment Examining Different Phases in Online Recruitment Procedures,
Social Forces, Volume 92, Issue 3, March 2014.
• Michael Cusumano (2010), Cloud Computing and SaaS as New Computing
Platforms, Communications of the ACM Volume 53, Issue 4.
• National Research Council (2004), Measuring Racial Discrimination, edited
by Rebecca M. Blank, Marilyn Dabady, and Constance F. Citro. Committee on
National Statistics, Division of Behavioral and Social Sciences and Education.
• Pager Devah, and Hana Shepherd (2008), The Sociology of Discrimination:
Racial Discrimination in Employment, Housing, Credit, and Consumer
Markets, Annual Review of Sociology 34.

• Skeem J. and Christopher Lowenkamp, “Risk, Race and Recidivism: Predictive
Bias and Disparate Impact.” University of California, Berkeley (2016).
• Sumit Goyal (2014), Public vs Private vs Hybrid vs Community – Cloud
Computing: A Critical Review, I.J. Computer Network and Information
Security, 3.
• Tetty Havinga, The effects and limits of anti-discrimination law in The
Netherlands, International Journal of the Sociology of Law 30 (2002).
• W. James Popham, “Assessment Literacy for Educators in a Hurry”, ASCD
(2018).
• Wendy Schrama, ‘How to carry out interdisciplinary research’, Utrecht Law
Review, Volume 7 Issue 1 (2011)

Articles
• A. Alexandra, ’11 Ways to Reduce Hiring Bias’ (Harver, 2019)
https://harver.com/blog/reduce-hiring-bias/
• Catherine Capozzi, ‘The Importance of Employment & Workplace in the
Society’ (Bizfluent, 25 January 2019) https://bizfluent.com/info-8296076-
importance-employment-workplace-society.html
• Drew Harwell, ‘Rights group files federal complaint against HireVue, a hiring
company that uses artificial intelligence’ (The Seattle Times, 6 November 2019)
https://www.seattletimes.com/business/rights-group-files-federal-
complaint-against-ai-hiring-firm-hirevue-citing-unfair-and-deceptive-
practices/
• Eleni Vasilaki, ‘Worried about AI taking over the World? You may be making
some rather unscientific assumptions’ (The Conversation, 24 September 2018)
http://theconversation.com/worried-about-ai-taking-over-the-world-you-
may-be-making-some-rather-unscientific-assumptions-103561
• Erik Brynjolfsson and Andrew McAfee, ‘The Big Data Boom is the Innovation
Story of our Time’ (The Atlantic Business, 21 November 2011)
https://theatlantic.com/business/archive/2011/11/the-big-data-boom-is-the-i
nnovation-story-of-our-time/248215/
• Frida Polli, ‘Using AI to Eliminate Bias from Hiring’
https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring

• Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias
against women’ (Reuters, 10 October 2018)
https://www.reuters.com/article/us-amazon-com-jobs-automation-
insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-
women-idUSKCN1MK08G
• Joy Buolamwini, Gender Shades: Intersectional Phenotypic and Demographic
Evaluation of Face Datasets and Gender Classifiers, MIT (2017),
https://www.media.mit.edu/publications/full-gender-shades-thesis-17/
• Julien Lauret, ‘Amazon’s sexist AI recruiting tool: how did it go so wrong?’
(Becoming Human, 16 August 2019) https://becominghuman.ai/amazons-
sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e
• Miranda Bogen, All the Ways Hiring Algorithms Can Introduce Bias, Harvard
Business Review (May 6, 2019), https://hbr.org/2019/05/all-the-ways-hiring-
algorithms-can-introduce-bias
• Sam Levin, ‘New Artificial Intelligence can tell whether you’re Gay or Straight
from a photograph’ (The Guardian, 8 September 2017)
https://www.theguardian.com/technology/2017/sep/07/new-artificial-
intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph
• The Washington Post, A face-scanning algorithm increasingly decides
whether you deserve the job (November 6, 2019),
https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-
scanning-algorithm-increasingly-decides-whether-you-deserve-job/
• Venturi Group, ‘How AI is Changing Recruitment’ (Venturi Group, not dated),
https://www.venturi-group.com/how-ai-is-changing-recruitment/

EU and other authoritative Institutions


• Article 29 Working Party Guidelines on Automated Decision-Making and
Profiling.
• Electronic Privacy Information Center (EPIC), EPIC Files Complaint with FTC
about Employment Screening Firm HireVue (November 6, 2019),
https://epic.org/2019/11/epic-files-complaint-with-ftc.html
• European Commission, “WHITE PAPER On Artificial Intelligence – a
European approach to excellence and trust” (European Commission, 19

February 2020) https://ec.europa.eu/info/sites/info/files/commission-white-
paper-artificial-intelligence-feb2020_en.pdf
• European Union Agency for Fundamental Rights and Council of Europe,
Handbook on European data protection law, 2018 edition.
• Federal Trade Commission (FTC), Complaint and Request for Investigation,
Injunction, and Other Relief (November 6, 2019),
https://epic.org/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf

