Diversity bias in artificial intelligence

Authors: Eva Gengler, Ilse Hagerer, Alina Gales

Eva Gengler
Eva is a researcher at the Friedrich-Alexander-Universität Erlangen-Nürnberg with a focus on AI,
power, and feminism, as well as an entrepreneur, board member, and voice for feminist AI.

Dr. Ilse Hagerer


Ilse is a researcher at the Technical University of Munich, Germany. Her research focuses on higher
education organisation, diversity, management, and digitalization.

Dr. Alina Gales


Alina works as diversity manager at the Technical University of Munich. She is a speaker on the
intersections of diversity, discrimination, and digitalization. In her research, she focuses on the mutual
influence of technology, society, and gender.

FINAL WORKING VERSION


Published in: Gengler, E., Hagerer, I., & Gales, A. (2024). Diversity bias in artificial
intelligence. In J. Passmore, S. Diller, S. Isaacson, M. Brantl (Eds.), The Digital and AI
Coaches' Handbook (pp. 229-240). Routledge, London.

Introduction
The ever-increasing digital transformation causes profound changes in many areas of life, especially through emerging disruptive technologies such as artificial intelligence (AI). AI tools like ChatGPT generate text, prompt discussions on copyright, ethics, and human uniqueness, and can be a major source for many applications in business and private contexts, including coachbots. As algorithms within AI tools can discriminate, emerging coachbots powered by AI can be discriminatory. In the following chapter, diversity as a concept as well as its interconnectedness with AI is explained. Then, many examples of biased AI in different business sectors are presented. In addition to providing solutions to AI bias, the consequences of discriminatory AI for people in everyday life are laid out. The goal of this chapter is to illuminate the problems that have emerged due to a lack of diversity in AI and to show what solutions exist to address them. When coachbots replace human coaches – as examples from other sectors demonstrate – the effects can be tremendous. The questions raised, the answers given, and the objectives pursued today will impact the decisions of upcoming decades.

Diversity as a concept
Diversity as a concept emerged in the 1980s to capture the dimensions of differences in society
like gender, race, or sexual identity. In recent years, this concept has evolved to include
numerous other attributes, such as social background, ability, and religion. While previously
the focus of the term diversity lay on aspects emphasising social divisions, more recently it has concentrated on the positive aspects of difference. Consequently, the focus shifted from a
narrow to a more pluralistic and diverse worldview also incorporating the powerful feminist
concept of intersectionality (Ahonen, 2015).

Intersectionality. Originating in Black feminism, the term intersectionality captures the complex nature of patriarchy in our societies. It incorporates the variety of diverse attributes
that might disadvantage people in society, e.g., gender, age, religion and belief, disability,
sexual identity, ethnicity, as well as appearance (Edwards et al., 2021), and focuses on their
overlapping. While diversity sheds light on different attributes used to distinguish groups,
intersectionality incorporates the fact that people with several disadvantaged characteristics are more strongly discriminated against than those with merely one attribute, and even more strongly than the sum of the two single disadvantages would suggest (Crenshaw, 1989). For instance, a woman might be disadvantaged when applying for a leadership position, and so might Black applicants. Black women, consequently, combine both disadvantaged attributes and thus face much greater disadvantages than either group individually.

Diversity in legislation. Diversity is not only a concept but is also valued and protected
within international and national legislation. In these frameworks, the focus lies on protecting
diversity and prohibiting discrimination based on diversity attributes. Among others, the
Universal Declaration of Human Rights grants all human beings equality, dignity, and freedom
“[…] without distinction of any kind, such as race, colour, sex, language, religion, political or
other opinion, national or social origin, property, birth or other status.” (United Nations,
1948, Article 2). Also, national legislation includes non-discrimination regarding diversity
characteristics, e.g., the German General Act on Equal Treatment. In recent years, increasing
numbers of judicial decisions focus on non-discrimination of diversity attributes. Yet, not all
characteristics of diversity are protected by law.

Diversity and its interconnectedness with AI


Numerous cases of biased AI have been discovered, e.g., in hiring (Jago & Laurin, 2022), face
recognition (Buolamwini & Gebru, 2018), and healthcare (Hamberg, 2008). A bias is a
systematic disadvantage, which is described as the unequal treatment of individuals from a
particular group who do not differ from individuals in other groups in a way that justifies such
disadvantages. Two prominent AI-facilitated biases based on recent research in this area will
be shown below: AI-facilitated racial and gender bias. Nevertheless, multiple other
characteristics like sexual identity, age, class, and religion also lead to discrimination by AI,
yet these are as of now less covered by research. Furthermore, AI systems have been found to
discriminate especially strongly against people with more than one disadvantaged attribute
(e.g., Buolamwini & Gebru, 2018). Therefore, intersectionality is especially important when
focusing on biased AI.

AI-facilitated racial bias. A frequently mentioned bias related to AI is racial bias. The
term race can be associated with skin colour and varying experiences of discrimination. It is
often distinguished by “darker” (“Black”) and “lighter” (“White”) skin colour (Burlina et al.,
2021), which is used, for instance, to examine the results of face recognition algorithms (Shi et
al., 2020) or medical diagnostics: If a person goes to a doctor suffering from vision loss due to
a retinal problem, the person may go through an automated diagnostics procedure. In this
process, AI is used to test for diabetic retinopathy and to assist medical professionals by
interpreting image scans of the retina. Because the presumed skin pigmentation relates, on average, to the concentration of melanin and, subsequently, to retinal colouration, AI diagnostic algorithms may be less accurate for some groups and therefore result in bias to the disadvantage of individuals of diverse race (Burlina et al., 2021). An equitable AI diagnostic system should assign different ethnic groups the same probability of having diabetic
retinopathy (Burlina et al., 2021). Similar biases are confirmed in numerous studies: e.g.,
Obermeyer et al. (2019) found that a healthcare algorithm, which is applied to roughly 200
million people in the U.S. every year, reduced the number of Black people receiving additional
healthcare treatment by more than 50%, even though they had the same chronic illnesses as
White people. Furthermore, racial bias was found in face recognition algorithms (Shi et al., 2020) and in applications assessing job applicants (Köchling et al., 2021).
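To make such group comparisons concrete, the following minimal sketch (toy data and hypothetical variable names, not taken from the cited studies) compares the detection rate of a binary diagnostic classifier across two groups; a large gap between the rates would indicate the kind of bias described above.

# Minimal sketch of a group-wise fairness check (hypothetical data and names).
# It compares the true-positive rate of a diagnostic classifier across groups.
import numpy as np

def group_tpr(y_true, y_pred, group):
    """Per-group true-positive rate (sensitivity) of a binary classifier."""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)       # actual positives in group g
        rates[g] = (y_pred[mask] == 1).mean()     # share correctly detected
    return rates

# Toy labels, predictions, and group membership (illustrative only)
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

tpr = group_tpr(y_true, y_pred, group)
print(tpr)                                        # e.g., {'A': 0.75, 'B': 0.5}
print("equal-opportunity gap:", abs(tpr["A"] - tpr["B"]))

With real data the same arithmetic applies; established fairness toolkits compute such group-wise rates in essentially this way.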

AI-facilitated gender bias. Discrimination based on gender is prevalent in many AI systems. Most AI is developed by men – this lack of diversity can reflect discriminatory values
in the tools (Leavy, 2018). Oftentimes, certain groups are underrepresented in the datasets that
are used to train algorithms, partly due to historical discrepancies – the difference between
White men and Black women is particularly apparent. Recognising members of the underrepresented group is then challenging for the AI. Underrepresentation in the data is the main reason for biased decision-making based on evaluating faces. Hence, face recognition systems provide better results for men than for women (Singh et al., 2020; Smith & Ricanek, 2020). The resulting biases may be
problematic when a person is misidentified, for example, when tagging photos on social media,
unlocking mobile devices, searching for missing persons through security cameras, or in law
enforcement.

Reasons for biased AI systems. As touched upon above, mainly three aspects contribute to
biased AI systems. First, people are inherently biased and have a tendency to use stereotypes
to inform decision making. Their worldviews shape the way AI is programmed and the context
in which it is used. Second, the training data often incorporates these biases, thus influencing
the AI to make sexist, misogynistic, classist, racist, and ableist decisions. Biases may also
lead to incomplete datasets: For instance, the gender data gap – namely the missing data on
women, trans, and genderqueer people – is a well-known problem in a wealth of domains
(Criado-Perez, 2020). When data is missing, AI cannot learn from it and thus, its output might
disadvantage those who do not fit the “norm”. Third, there is an overrepresentation of White
men in decision-making positions in the field of AI (Nuseir et al., 2021). On the one side, this
includes the developers of AI who make technical decisions on aspects to include and to omit.
On the other side, it includes the people deciding upon budgets, the domains in which AI is to be created, and whom to employ. This lack of diversity in the field leads to questions remaining untackled and system malfunctions going undetected. With the emergence of AI, systems are created that
build on data, logic, and power relations from the past. Biased AI, when in use, can cause
discrimination and thus, become a discriminatory AI system.

Examples of biased AI in different business sectors


Commonly studied business sectors affected by biased AI are courts, lending,
hiring, college admissions, and face recognition. Other examples can be found in the healthcare
system and diagnostics, online advertisement, search engines, or text processing. Subsequently,
we shed light on these business sectors.

Hiring. A prominent example of bias in hiring is Amazon’s CV screening algorithm.


Its development and use were stopped after noticing that women were systematically rejected
(Jago & Laurin, 2022). Moreover, unequal representations in the training data led to an
inequitable likelihood of being invited to job interviews across ethnicities (Asians were preferred over Caucasians, and Caucasians over African Americans) and across genders (Köchling et al., 2021). Unequal treatment by recruitment algorithms could even be observed
for applicants from specific universities. When the algorithm observed that a required
combination of skills was often obtained from a specific university, a candidate could be
labeled suitable for the position simply because they graduated from that university (Gu &
Oelke, 2019). An algorithm to help marketers hire appropriate influencers for their campaigns
rated influencers lower when their sexuality was described as LGBTQ+, their gender was
female, or they were People of Colour, and even lower at the intersectional overlap of those attributes (Bishop, 2021). Automated data processing in hiring increases
the chances of discrimination towards minoritised groups even if an HR manager makes the
ultimate decision (Todolí-Signes, 2019). Inequalities in society caused by bias are similarly reinforced by AI in higher education: When AI is used to select study applicants based on college admission test scores, individuals can be affected by unjustifiably low scores, which reduce their probability of being accepted to a highly regarded college and, consequently, of obtaining a high-skilled job. Also, applicants’ postcodes and gender-specific hobbies may serve
as proxy attributes and cause indirect discrimination.
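As a minimal illustration of such proxy effects, the following sketch (entirely hypothetical data and column names) shows how a seemingly neutral rule based only on postcode can still produce unequal selection rates across groups, because postcode correlates with group membership.

# Toy sketch of indirect discrimination via a proxy attribute (hypothetical
# data): the protected attribute is never given to the rule, yet outcomes
# still differ by group because postcode correlates with group membership.
import pandas as pd

applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "postcode": ["10115", "10115", "10115", "20095",
                 "20095", "20095", "20095", "10115"],
    "score":    [70, 68, 72, 71, 69, 73, 70, 74],
})

# A seemingly neutral rule learned from historical data: favour postcode 10115.
applicants["invited"] = (applicants["postcode"] == "10115").astype(int)

# Selection rates per group reveal the disparate impact of the proxy.
print(applicants.groupby("group")["invited"].mean())
# group A: 0.75, group B: 0.25 – despite similar scores and no use of "group".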

Face recognition. There have been recent advances in machine learning for face recognition algorithms, but their performance is highly biased: They perform better on males than on females and have difficulties identifying children and the elderly (Buolamwini & Gebru,
2018; Smith & Ricanek, 2020). Particularly ethnicity, skin colour, and facial shape affected the
accuracy of face recognition systems (Serna et al., 2019). E.g., Google Photos identified Black
people as gorillas (Snow, 2018), and Nikon’s camera software flagged Asians as constantly blinking (Rose, 2010).

Healthcare system and diagnostics. Clinical trials have mainly been conducted on men, and less data is available for women, resulting in biases against women in the diagnosis and treatment of
diseases (Hamberg, 2008). Research in the field of gender medicine highlights the differences
between women and men in diagnosis and treatment as essential to achieving equity in
healthcare across genders (Baggio et al., 2013). As heart diseases in the past were seen as a
predominantly male problem, less attention was paid to the symptoms of women, resulting in
unbalanced datasets and replicated biases (Paviglianiti & Pasero, 2020). Racial minorities
suffer from inferior access to treatment in medical care due to unconscious bias in medical
decision-making (Chapman et al., 2013; Vartan, 2019).

Word embeddings. Word embeddings are numerical representations of words that capture semantic relations between words in large text corpora, so that words with similar meanings receive similar representations. Online sources like Wikipedia or Google News provide such training data. Word embeddings show how gender ideology inherent in language can lead to gender-biased systems: Bias can be incorporated in different linguistic features like stereotypical descriptions (e.g., if the embedding space associates man with doctor but woman with nurse), as well as the listing of the male form first, or the underrepresentation of women in texts (Leavy, 2018).
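As a minimal illustration, the following sketch uses small hand-made vectors (not real Wikipedia or Google News embeddings) to complete the classic analogy “man is to doctor as woman is to ?” by vector arithmetic; with embeddings trained on biased text, the nearest answer tends to be nurse.

# Toy illustration of analogy bias in word embeddings (hand-made vectors,
# not trained ones). Completes: man : doctor :: woman : ?
import numpy as np

# Hypothetical 3-dimensional embeddings chosen to mimic a biased corpus.
emb = {
    "man":    np.array([ 1.0, 0.0, 0.2]),
    "woman":  np.array([-1.0, 0.0, 0.2]),
    "doctor": np.array([ 0.9, 1.0, 0.1]),
    "nurse":  np.array([-0.9, 1.0, 0.1]),
    "pilot":  np.array([ 0.7, 0.2, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Vector arithmetic: doctor - man + woman, then find the nearest other word.
query = emb["doctor"] - emb["man"] + emb["woman"]
candidates = {w: cosine(query, v) for w, v in emb.items()
              if w not in ("man", "woman", "doctor")}
print(max(candidates, key=candidates.get))   # 'nurse' with these toy vectors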

Coaching. To date, there has been little research into possible biases in the multiple
emerging AI coachbots. Unlike other sectors, many AI technology coaching start-ups are led
by women. However, given these bots usually access large language models such as ChatGPT,
they are likely to suffer from similar biases to those seen in other industries. Thus, developers and users are encouraged to be aware of possible biases ingrained in coachbots as these systems gain wider traction among both organisations and individuals (Passmore & Tee, 2023).

Solutions to AI bias
Solutions to reduce discrimination by AI lie in the whole AI lifecycle: Creating equitable
algorithms requires ethical foundations, education, and diversity among AI experts; furthermore, bias mitigation techniques help to properly adapt algorithms; and finally, more objective
decision-making as well as corporate AI governance need to be ensured. Thus, a holistic
perspective is needed when aiming to resolve existing power imbalances and biases in AI.

Ethical foundations. When it comes to AI, context matters. This becomes apparent
when looking at different use cases of originally very similar AI systems. Computer vision, for
instance, might be used for bird protection at wind turbines. A misinterpretation in this instance has far different consequences than in border control scenarios,
where vulnerable people such as refugees might be erroneously targeted by border protection
mechanisms. AI, like any other tool, is not inherently good or bad. It may be used in evil or
virtuous ways and for evil or virtuous reasons. Therefore, it is fundamentally important to
consider the context in which AI is designed and trained as well as the context in which it is
(supposed to be) used. Thus, the objectives behind the use of AI and the objectives embedded
in these systems should be transparent and verifiable. As a vital step towards this objective,
ethical foundations for AI development are required.

Educating and diversifying AI experts. Ethical foundations can be built through better education, diverse teams with interdisciplinary perspectives, and a common understanding of
equity: While the technology itself is inherently unbiased, humans are inherently biased due to
their cultural and social background. Education and equality-valuing practices could prevent
subconscious discrimination (Li, 2021). Therefore, it is important to teach professionals and
students in the AI field the ethical, behavioural, and social dimensions of algorithmic bias. A
broader perspective can further be achieved through diverse AI development teams. They help
mitigate biases in algorithms because an algorithm’s behaviour likely reflects a lack of, e.g., gender diversity in its development team (Fahse & Huber, 2021). As most developers in AI are men – according to the
World Economic Forum, women make up an estimated 26% of workers in AI roles worldwide
(2019) – advancing the careers of women is critical to avoid rolling back progress toward
gender equity (Nuseir et al., 2021). Also, collaboration with experts from various disciplines
like social sciences, law, and humanities would allow for a broader learning spectrum on the
background of training data (Dolata et al., 2022). Including ethicists, legal experts, data
scientists, and others in the development process ensures a higher level of equity (Hauer et al.,
2021). Besides, the AI development team should define equity goals and build a mutual
definition and criteria of ethics to ensure a good understanding of how the outcomes will affect
users (Rodolfa et al., 2021; Zhang et al., 2019).

Ethical data and algorithms. To reduce bias, it is not only important to enhance the
algorithms themselves but also to improve underlying datasets by using unbiased sources and
integrating fairness evaluations. The data ought to reflect the population for which the algorithm is designed and include, e.g., outliers and diverse groups, and visual analytics tools can help discover intersectional bias (Fahse & Huber, 2021). Enhanced algorithms have bias considerations directly embedded in their systems. In addition, recommender systems can automatically select the algorithm with the best accuracy–diversity trade-off (Gutowski et al., 2021). Before go-live and during the application of AI, fairness evaluations and audit processes ought to be applied to check for biases (Bryant & Howard, 2019). Since AI learns over time, it requires
continuous assessments to ensure fair outputs (Parikh et al., 2019).

Bias mitigation techniques. Bias mitigation techniques are mostly applied retrospectively to AI applications in which discriminatory outcomes have already been observed.
Based on the point of intervention, there are three groups of mitigation algorithms leading to
more equitable outcomes, which can be used singularly or in combination: Pre-processing
algorithms modify the training dataset to learn new data representations (Puyol-Antón et al.,
2021). Hereby, algorithms change the distribution of the sample points by up-weighting
underrepresented groups (“reweighing”), changing formats or labels, replacing missing values,
or filtering out bias-related data by removing biased information about protected groups
(“massaging”) (Fahse & Huber, 2021). In-processing algorithms modify AI during the training
process to remove bias from the original predictions (Mehrabi et al., 2021): They hide or
classify discriminatory information and modify the algorithm so that it can no longer predict
sensitive attributes. This way, bias can be reduced by over 14%. Post-processing modifies
biased prediction results (Biswas & Rajan, 2020), e.g., by choosing preferred outcomes for
uncertain predictions in favour of the unprivileged group, changing the output labels, or
calibrating scores from optimised classifiers. Evaluation based on the example of a job
application algorithm showed an increase in accuracy of over 13% (Harris, 2020).
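The following simplified sketch (toy data, hypothetical column names) illustrates the reweighing idea mentioned above: each training example receives a weight so that the protected attribute and the label become statistically independent in the weighted data, before any model is trained.

# Simplified sketch of the "reweighing" pre-processing technique (toy data,
# hypothetical column names): samples are up- or down-weighted so that the
# protected attribute and the label become independent in the weighted data.
import pandas as pd

df = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m", "m", "m"],
    "hired":  [0,   0,   1,   1,   1,   1,   0,   1],
})

weights = []
for _, row in df.iterrows():
    p_group = (df["gender"] == row["gender"]).mean()      # P(group)
    p_label = (df["hired"] == row["hired"]).mean()        # P(label)
    p_joint = ((df["gender"] == row["gender"]) &
               (df["hired"] == row["hired"])).mean()      # P(group, label)
    weights.append(p_group * p_label / p_joint)           # expected / observed

df["weight"] = weights
print(df)
# The weighted data can then be passed to any learner that accepts sample
# weights, e.g., model.fit(X, y, sample_weight=df["weight"]).

In this toy example, hired women are up-weighted because they are underrepresented relative to what independence of gender and hiring outcome would imply.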

More objective and fairer decision-making. Fairer decision-making can be obtained through transparency and human checks that reduce bias in AI: The concept of explainable AI
describes the factors involved in the hidden processes in the black box (Sen et al., 2020).
Developers ought to document all steps from data collection to decision-making to visualise
the dynamics within the continuously adapting AI system and to avoid unwanted changes over
time (Mitchell et al., 2021). AI users should obtain detailed and transparent information about
how the algorithm works to become aware of biases in the system (Köchling et al., 2021).
However, this is often not feasible because of the numerous steps and inputs the algorithm uses.
Furthermore, algorithmic outcomes require human supervision and trust in their maintenance
(Fahse & Huber, 2021). Likewise, Bîgu and Cernea (2019) recommend that recruiters use AI-based hiring software only for support and not for final decision-making. The combination of
algorithmic and human controls has the potential to avoid bias (Wiens et al., 2020). In the
workplace, data protection ought to be regulated collectively with employee representatives or
with policies that incentivise firms to be aware of protected groups (Fu et al., 2022).

AI governance. Further, corporate AI governance plays an important role on the path towards ethical and equitable AI, as it defines boundaries with which decision-making on and development of AI must comply. Companies increasingly establish ethics committees for
decision-making on AI use cases and areas of application (e.g., at IBM), as well as guidelines
for ethical, trustworthy, and responsible AI (e.g., by Google, IBM, Microsoft). This shows that AI governance has gradually become a focus of attention in businesses (Mäntymäki et al., 2022), among other reasons because of the steps taken towards European legislation on AI.
However, contemporary corporate frameworks often do not go far enough, neglecting aspects
such as diversity in teams, data, and context. To have a far-reaching impact, corporate AI
governance needs to incorporate all the aspects mentioned above. Beyond the individual and
organisational level, neither privacy regulations like the European General Data Protection
Regulation nor anti-discrimination laws can solve the problems with biased algorithms on their
own (Mann & Matzner, 2019). Additional government interventions, which are flexible and
adaptable to frequent technological changes, could help to introduce transparent data collection
processes, algorithms, and regulatory frameworks to monitor biases (Li, 2021; Nyarko et al.,
2021).

Consequences of biased AI for people in everyday life


All the examples previously given show the multifarious ways in which AI, its applications,
and use cases can have discriminatory influences. Even though it might be self-evident to agree
that it is morally unacceptable if someone is discriminated against, one could assume that
anything AI-related appears to be out of reach for the average tech-consuming person.
However, the opposite is the case: with AI increasingly becoming integrated in other products
and services, anyone can be affected by discriminatory decision-making facilitated by AI.
Subsequently, the practical implications of implementing algorithms for decision-making across multiple use cases are highlighted to raise awareness of how people can be affected without noticing.

Power imbalances and biased norms perpetuated through AI. Having read the
numerous examples presented above, the question might remain how discriminatory AI is
affecting people in their everyday life, within their casual social media behaviour. This
concluding paragraph is intended to support a reflective mindset regarding one’s own use of AI-driven products and to raise awareness of such usage among children and teenagers as
vulnerable groups.

Biases in both people and AI. One of the quickest perceptions of someone else is their
looks. What is seen repeatedly is perceived as the given, or at least as something people are not bothered by. If the people surrounding me look like me, I probably do not find
it noticeable. At the same time, people are also influenced by what they see, not only in direct
interaction, but also in the media: in the films and the series watched, in the magazines read,
and of course in what is consumed online, on the Internet and on social media. Positions of
power in the media have historically been male dominated: a small number of White
heterosexual men decided what is appropriate for consumers to watch on TV, in cinemas, and
to read and view in print media. In the age of social media, everyone with a smartphone can be
a consumer and content producer. However, if the algorithm – dictating what is shown on the feeds people consume within social media apps – is primarily trained with data depicting a
certain image, people are influenced in their perception of others. If the algorithm pushes
photos of people with a certain body type for more visibility on a social media app, it can
reproduce overtly sexualised and pressure-inducing images of women. What if someone has
fun expressing themselves on social media but it is harder for them to get views because an
algorithm prefers to give visibility to photos of primarily thin White models with unnatural
facial features?

AI intersecting with other diversity dimensions. The more someone represents a privileged position, the more they are pushed by the algorithm for more views, more likes,
and potentially, more monetary resources. As listed in the many examples above, the individual
consequences of the power dynamics people are exposed to can be decisive and excruciating:
if a person gets evaluated unequally on whether they could recommit a crime, influencing
probation and incarceration time; if a person cannot receive the same amount of money lent due to reasons that should not affect creditworthiness; if a person does not have the same opportunities
when applying for a job or even being able to see a job advertisement online; if a person gets
wrongly identified by surveillance cameras because of flawed face recognition and has to go
through an interrogation; if a person gets falsely (or not at all) diagnosed with a disease and
receives the wrong (or no) medical treatment – and all of that and more because of algorithms
deciding favourably for the already privileged and unfavourably for the previously
underprivileged.

AI as a powerful tool. All these implications show how influential and transformative
the institutions and individuals who design, develop, decide, and use AI are – and that more
diversity in all these areas influencing algorithmic decision-making is required. Technology
embodies society. With the state AI is in now, you could picture it as people from a society
standing in a room with many doors representing gateways to opportunities, chances, access to
resources, etc. But the number of doors you see and could open to fulfil your individual interests and needs depends on aspects that should not divide a society but should be treated equally.
These include different diversity dimensions relating to gender, race, ethnicity, age,
socioeconomic status, geographical location, religion, disability, sexual identity, language, and
more. Otherwise, AI embodies and perpetuates the prejudices and injustices of “real life”, in
policies, the professional environment, and private lives.

Chances for diversity through AI. How could AI be a driver for diversity, justice, and
equity? One possibility is for people with influence in the AI realm to bring more diversity into AI teams and to have AI trained on unbiased datasets. Another option is bias mitigation techniques combined with a fair decision-making process. Incorporating such aspects into AI governance plays a strong role, too. Intersectional and inclusive feminism is a perspective that includes all of these aspects and makes it feasible to tackle the sources of discriminatory AI systems. It can shape AI in a way that not only omits biases, but also promotes diversity, makes power imbalances and biases visible, drives business value, and is used in cases that fulfil Human Rights and the Sustainable Development Goals.

Questions for reflection


Q1: What are the consequences of neglecting diversity in decision-making based on AI?

Q2: How can diversity be integrated as an underlying concept in AI?

Q3: What can each and every one of us do to make AI more diverse and equitable?

Conclusion
The interrelatedness of AI and diversity is manifold and complex. Too little diversity in society, datasets, and decision-making is reflected in AI systems that disadvantage those with diverse attributes. This is evident in the many instances of biased AI in a broad range of business sectors. A lack of diversity among AI development teams and decision-makers results in the exclusion of relevant voices and in erroneous system behaviours being overlooked. Thus, prevailing power
structures and inequalities as well as biases are perpetuated and even intensified by AI, possibly
resulting in a backlash for diversity in society, datasets, and decision-making. This vicious
circle needs to be broken by embracing diversity as a chance and by incorporating ethical foundations, ethical datasets and algorithms with techniques to mitigate bias, and more objective and fairer decision-making. If successful, AI can become a driver for diversity, transforming our
world into a more equitable place for all.

References

Ahonen, P. (2015). Ethico-politics of diversity and its production. The Routledge Companion to
Ethics, Politics and Organizations.
https://www.academia.edu/12893448/Ethico_politics_of_diversity_and_its_production
Baggio, G., Corsini, A., Floreani, A., Giannini, S., & Zagonel, V. (2013). Gender medicine: A task for
the third millennium. Clinical Chemistry and Laboratory Medicine (CCLM), 51(4), 713–727.
https://doi.org/10.1515/cclm-2012-0849
Bertrand, J., & Weill, L. (2021). Do algorithms discriminate against African Americans in lending?
Economic Modelling, 104, 105619. https://doi.org/10.1016/j.econmod.2021.105619
Bîgu, D., & Cernea, M.-V. (2019). Algorithmic bias in current hiring practices: An ethical
examination. Proceedings of the 13th International Management Conference, 1068–1073.
Bishop, S. (2021). Influencer Management Tools: Algorithmic Cultures, Brand Safety, and Bias.
Social Media + Society, 7(1), 205630512110030. https://doi.org/10.1177/20563051211003066
Biswas, S., & Rajan, H. (2020). Do the machine learning models on a crowd sourced platform exhibit
bias? An empirical study on model fairness. Proceedings of the 28th ACM Joint Meeting on European
Software Engineering Conference and Symposium on the Foundations of Software Engineering, 642–
653. https://doi.org/10.1145/3368089.3409704
Bonezzi, A., & Ostinelli, M. (2021). Can algorithms legitimize discrimination? Journal of
Experimental Psychology: Applied, 27(2), 447–459. https://doi.org/10.1037/xap0000294
Bryant, D., & Howard, A. (2019). A Comparative Analysis of Emotion-Detecting AI Systems with
Respect to Algorithm Performance and Dataset Diversity. Proceedings of the 2019 AAAI/ACM
Conference on AI, Ethics, and Society, 377–382. https://doi.org/10.1145/3306618.3314284
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in
Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability
and Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
Burlina, P., Joshi, N., Paul, W., Pacheco, K. D., & Bressler, N. M. (2021). Addressing Artificial
Intelligence Bias in Retinal Diagnostics. Translational Vision Science & Technology, 10(2), 13.
https://doi.org/10.1167/tvst.10.2.13
Chapman, E. N., Kaatz, A., & Carnes, M. (2013). Physicians and Implicit Bias: How Doctors May
Unwittingly Perpetuate Health Care Disparities. Journal of General Internal Medicine, 28(11), 1504–
1510. https://doi.org/10.1007/s11606-013-2441-1
Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of
Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal
Forum, 1989(1), 139–167.
Criado-Perez, C. (2020). Invisible women: Exposing data bias in a world designed for men. Vintage.
Dolata, M., Feuerriegel, S., & Schwabe, G. (2022). A sociotechnical view of algorithmic fairness.
Information Systems Journal, 32(4), 754–818. https://doi.org/10.1111/isj.12370
Edwards, J., Clark, L., & Perrone, A. (2021). LGBTQ-AI? Exploring expressions of gender and
sexual orientation in chatbots. CUI '21: Proceedings of the 3rd Conference on Conversational User
Interfaces.
Fahse, T., & Huber, V. (2021). Managing Bias in Machine Learning Projects. In F. Ahlemann, R.
Schütte, & S. Stieglitz (Eds.), Innovation Through Information Systems (Vol. 7, pp. 94–109). Springer
International Publishing.
https://aisel.aisnet.org/wi2021/RDataScience/Track09/7?utm_source=aisel.aisnet.org%2Fwi2021%2F
RDataScience%2FTrack09%2F7&utm_medium=PDF&utm_campaign=PDFCoverPages
Fu, R., Aseri, M., Singh, P. V., & Srinivasan, K. (2022). “Un”Fair Machine Learning Algorithms.
Management Science, 68(6), 4173–4195. https://doi.org/10.1287/mnsc.2021.4065
Gu, J., & Oelke, D. (2019). Understanding Bias in Machine Learning (arXiv:1909.01866). arXiv.
http://arxiv.org/abs/1909.01866
Gutowski, N., Amghar, T., Camp, O., & Chhel, F. (2021). Gorthaur-EXP3: Bandit-based selection
from a portfolio of recommendation algorithms balancing the accuracy-diversity dilemma.
Information Sciences, 546, 378–396. https://doi.org/10.1016/j.ins.2020.08.106
Hamberg, K. (2008). Gender Bias in Medicine. Women’s Health, 4(3), 237–243.
https://doi.org/10.2217/17455057.4.3.237
Harris, C. G. (2020). Mitigating Cognitive Biases in Machine Learning Algorithms for Decision
Making. Companion Proceedings of the Web Conference 2020, 775–781.
https://doi.org/10.1145/3366424.3383562
Hauer, M. P., Adler, R., & Zweig, K. (2021). Assuring Fairness of Algorithmic Decision Making.
2021 IEEE International Conference on Software Testing, Verification and Validation Workshops
(ICSTW), 110–113. https://doi.org/10.1109/ICSTW52544.2021.00029
Jago, A. S., & Laurin, K. (2022). Assumptions About Algorithms’ Capacity for Discrimination.
Personality and Social Psychology Bulletin, 48(4), 582–595.
https://doi.org/10.1177/01461672211016187
Köchling, A., Riazy, S., Wehner, M. C., & Simbeck, K. (2021). Highly Accurate, But Still
Discriminatory: A Fairness Evaluation of Algorithmic Video Analysis in the Recruitment Context.
Business & Information Systems Engineering, 63(1), 39–54. https://doi.org/10.1007/s12599-020-
00673-w
Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in
machine learning. Proceedings of the 1st International Workshop on Gender Equality in Software
Engineering, 14–16. https://doi.org/10.1145/3195570.3195580
Li, X. (2021). Analysis of Racial Discrimination in Artificial Intelligence from the Perspective of
Social Media, Search Engines, and Future Crime Prediction Systems: 6th International Conference on
Contemporary Education, Social Sciences and Humanities. (Philosophy of Being Human as the Core
of Interdisciplinary Research) (ICCESSH 2021), China. https://doi.org/10.2991/assehr.k.210902.029
Mann, M., & Matzner, T. (2019). Challenging algorithmic profiling: The limits of data protection and
anti-discrimination in responding to emergent discrimination. Big Data & Society, 6(2),
205395171989580. https://doi.org/10.1177/2053951719895805
Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Defining organizational AI
governance. AI and Ethics. https://doi.org/10.1007/s43681-022-00143-x
Mehrabi, N., Naveed, M., Morstatter, F., & Galstyan, A. (2021). Exacerbating Algorithmic Bias
through Fairness Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10),
8930–8938. https://doi.org/10.1609/aaai.v35i10.17080
Mitchell, S., Potash, E., Barocas, S., D’Amour, A., & Lum, K. (2021). Algorithmic Fairness: Choices,
Assumptions, and Definitions. Annual Review of Statistics and Its Application, 8(1), 141–163.
https://doi.org/10.1146/annurev-statistics-042720-125902
Nuseir, M. T., Al Kurdi, B. H., Alshurideh, M. T., & Alzoubi, H. M. (2021). Gender Discrimination at
Workplace: Do Artificial Intelligence (AI) and Machine Learning (ML) Have Opinions About It. In
A. E. Hassanien, A. Haqiq, P. J. Tonellato, L. Bellatreche, S. Goundar, A. T. Azar, E. Sabir, & D.
Bouzidi (Eds.), Proceedings of the International Conference on Artificial Intelligence and Computer
Vision (AICV2021) (pp. 301–316). Springer International Publishing. https://doi.org/10.1007/978-3-
030-76346-6_28
Nyarko, J., Goel, S., & Sommers, R. (2021). Breaking Taboos in Fair Machine Learning: An
Experimental Study. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–11.
https://doi.org/10.1145/3465416.3483291
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an
algorithm used to manage the health of populations. Science, 366(6464), 447–453.
https://doi.org/10.1126/science.aax2342
Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing Bias in Artificial Intelligence in Health
Care. JAMA, 322(24), 2377. https://doi.org/10.1001/jama.2019.18058
Passmore, J., & Tee, D. (2023). Can Chatbots replace human coaches? Issues and dilemmas for the
coaching profession, coaching clients and for organisations. The Coaching Psychologist, 19(1), 47–
54. https://doi.org/10.53841/bpstcp.2023.19.1.47
Paviglianiti, A., & Pasero, E. (2020). VITAL-ECG: A de-bias algorithm embedded in a gender-
immune device. 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, 314–318.
https://doi.org/10.1109/MetroInd4.0IoT48571.2020.9138291
Puyol-Antón, E., Ruijsink, B., Piechnik, S. K., Neubauer, S., Petersen, S. E., Razavi, R., & King, A.
P. (2021). Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance
in Deep Learning Based Segmentation. In M. de Bruijne, P. C. Cattin, S. Cotin, N. Padoy, S. Speidel,
Y. Zheng, & C. Essert (Eds.), Medical Image Computing and Computer Assisted Intervention –
MICCAI 2021 (Vol. 12903, pp. 413–423). Springer International Publishing.
https://doi.org/10.1007/978-3-030-87199-4_39
Rodolfa, K. T., Lamba, H., & Ghani, R. (2021). Empirical observation of negligible fairness–accuracy
trade-offs in machine learning for public policy. Nature Machine Intelligence, 3(10), 896–904.
https://doi.org/10.1038/s42256-021-00396-x
Rose, A. (2010). Are face-detection cameras racist?
https://content.time.com/time/business/article/0,8599,1954643,00.html
Sen, S., Dasgupta, D., & Gupta, K. D. (2020). An Empirical Study on Algorithmic Bias. 2020 IEEE
44th Annual Computers, Software, and Applications Conference (COMPSAC), 1189–1194.
https://doi.org/10.1109/COMPSAC48688.2020.00-95
Serna, I., Morales, A., Fierrez, J., Cebrian, M., Obradovich, N., & Rahwan, I. (2019). Algorithmic
Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics
(arXiv:1912.01842). arXiv. http://arxiv.org/abs/1912.01842
Shi, S., Wei, S., Shi, Z., Du, Y., Fan, W., Fan, J., Conyers, Y., & Xu, F. (2020). Algorithm Bias
Detection and Mitigation in Lenovo Face Recognition Engine. In X. Zhu, M. Zhang, Y. Hong, & R.
He (Eds.), Natural Language Processing and Chinese Computing (Vol. 12431, pp. 442–453).
Springer International Publishing. https://doi.org/10.1007/978-3-030-60457-8_36
Singh, R., Agarwal, A., Singh, M., Nagpal, S., & Vatsa, M. (2020). On the Robustness of Face
Recognition Algorithms Against Attacks and Bias. Proceedings of the AAAI Conference on Artificial
Intelligence, 34(09), 13583–13589. https://doi.org/10.1609/aaai.v34i09.7085
Smith, P., & Ricanek, K. (2020). Mitigating Algorithmic Bias: Evolving an Augmentation Policy that
is Non-Biasing. 2020 IEEE Winter Applications of Computer Vision Workshops (WACVW), 90–97.
https://doi.org/10.1109/WACVW50321.2020.9096905
Snow, J. (2018). Google Photos Still Has a Problem with Gorillas. MIT Technology Review.
https://www.technologyreview.com/2018/01/11/146257/google-photos-still-has-a-problem-with-
gorillas/
Todolí-Signes, A. (2019). Algorithms, artificial intelligence and automated decisions concerning
workers and the risks of discrimination: The necessary collective governance of data protection.
Transfer: European Review of Labour and Research, 25(4), 465–481.
https://doi.org/10.1177/1024258919876416
United Nations. (1948, December 10). Universal Declaration of Human Rights. United Nations;
United Nations. https://www.un.org/en/about-us/universal-declaration-of-human-rights
Vartan, S. (2019, October 24). Racial bias found in a major health care risk algorithm.
Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-
care-risk-algorithm/
Wen, M., Bastani, O., & Topcu, U. (2021). Algorithms for Fairness in Sequential Decision Making. In
A. Banerjee & K. Fukumizu (Eds.), Proceedings of The 24th International Conference on Artificial
Intelligence and Statistics, PMLR (Vol. 130, pp. 1144–1152).
https://doi.org/10.48550/arXiv.1901.08568
Wiens, J., Price, W. N., & Sjoding, M. W. (2020). Diagnosing bias in data-driven algorithms for
healthcare. Nature Medicine, 26(1), 25–26. https://doi.org/10.1038/s41591-019-0726-6
World Economic Forum. (2019). Global Gender Gap Report 2020 (Insight Report, pp. 1–371). World
Economic Forum. https://www3.weforum.org/docs/WEF_GGGR_2020.pdf
Zhang, X., Khalili, M. M., Tekin, C., & Liu, M. (2019). Group Retention when Using Machine
Learning in Sequential Decision Making: The Interplay between User Dynamics and Fairness.
Advances in Neural Information Processing Systems, 32, pp. 1-10.
Zottola, S. A., Desmarais, S. L., Lowder, E. M., & Duhart Clarke, S. E. (2022). Evaluating Fairness of
Algorithmic Risk Assessment Instruments: The Problem With Forcing Dichotomies. Criminal Justice
and Behavior, 49(3), 389–410. https://doi.org/10.1177/00938548211040544
