Diversity Bias in Artificial Intelligence
Eva Gengler
Eva is a researcher at the Friedrich-Alexander-Universität Erlangen-Nürnberg with a focus on AI, power, and feminism, as well as an entrepreneur, board member, and voice for feminist AI.
Introduction
The ever-advancing digital transformation is causing profound changes in many areas of life, especially through emerging disruptive technologies such as artificial intelligence (AI). AI tools like ChatGPT generate text, prompt discussions on copyright, ethics, and human uniqueness, and serve as a basis for many applications in business and private contexts, including coachbots. Since the algorithms within AI tools can discriminate, emerging coachbots powered by AI can be discriminatory as well. In the following chapter, diversity as a concept and its interconnectedness with AI are explained. Then, examples of biased AI in different business sectors are presented. Alongside solutions to AI bias, the consequences of discriminatory AI for people in everyday life are laid out. The goal of this chapter is to illuminate the problems that have emerged due to a lack of diversity in AI and to show what solutions exist to address them. When coachbots replace human coaches, as examples from other sectors demonstrate, the effects can be tremendous. The questions raised, the answers given, and the objectives pursued today will impact the decisions of upcoming decades.
Diversity as a concept
Diversity as a concept emerged in the 1980s to capture the dimensions of differences in society
like gender, race, or sexual identity. In recent years, this concept has evolved to include
numerous other attributes, such as social background, ability, and religion. While the focus of the term diversity previously lay on aspects emphasising social divisions, it now concentrates on the positive aspects of difference. Consequently, the focus shifted from a narrow to a more pluralistic and diverse worldview, also incorporating the powerful feminist
concept of intersectionality (Ahonen, 2015).
Intersectionality describes how several dimensions of discrimination can overlap and compound: women, for instance, might be disadvantaged when applying for a leadership position, and so might Black applicants. Black women, consequently, combine both disadvantaged attributes and thus face far greater disadvantages than either group individually.
       Diversity in legislation. Diversity is not only a concept but is also valued and protected
within international and national legislation. In these frameworks, the focus lies on protecting
diversity and prohibiting discrimination based on diversity attributes. Among others, the
Universal Declaration of Human Rights grants all human beings equality, dignity, and freedom
“[…] without distinction of any kind, such as race, colour, sex, language, religion, political or
other opinion, national or social origin, property, birth or other status.” (United Nations, 1948, Article 2). Also, national legislation includes non-discrimination regarding diversity
characteristics, e.g., the German General Act on Equal Treatment. In recent years, increasing
numbers of judicial decisions focus on non-discrimination of diversity attributes. Yet, not all
characteristics of diversity are protected by law.
       AI-facilitated racial bias. A frequently mentioned bias related to AI is racial bias. The
term race can be associated with skin colour and varying experiences of discrimination. It is
often distinguished by “darker” (“Black”) and “lighter” (“White”) skin colour (Burlina et al., 2021), which is used, for instance, to examine the results of face recognition algorithms (Shi et al., 2020) or medical diagnostics: if a person consults a doctor suffering from vision loss due to a retinal problem, the person may go through an automated diagnostics procedure. In this
process, AI is used to test for diabetic retinopathy and to assist medical professionals by
interpreting image scans of the retina. Because skin pigmentation relates, on average, to the concentration of melanin and thus to retinal colouration, AI diagnostic algorithms may be less accurate for some groups and therefore produce results biased to the disadvantage of individuals of certain races (Burlina et al., 2021). An equitable AI diagnostic
system should assign different ethnic groups with the same probability of having diabetic
retinopathy (Burlina et al., 2021). Similar biases are confirmed in numerous studies: e.g.,
Obermeyer et al. (2019) found that a healthcare algorithm, which is applied to roughly 200
million people in the U.S. every year, reduced the number of Black people receiving additional
healthcare treatment by more than 50%, even though they had the same chronic illnesses as
White people. Furthermore, racial bias was found in face recognition algorithms (Shi et al., 2020) and in applications assessing job applicants (Köchling et al., 2021).
   Reasons for biased AI systems. As touched upon above, three main aspects contribute to biased AI systems. First, people are inherently biased and tend to use stereotypes to inform decision-making. Their worldviews shape the way AI is programmed and the context in which it is used. Second, the training data often incorporates these biases, thus influencing the AI to make sexist, misogynistic, classist, racist, and ableist decisions. Biases may also
lead to incomplete datasets: For instance, the gender data gap – namely the missing data on
women, trans, and genderqueer people – is a well-known problem in a wealth of domains
(Criado-Perez, 2020). When data is missing, AI cannot learn from it and thus, its output might
disadvantage those who do not fit the “norm”. Third, there is an overrepresentation of White men in decision-making positions in the field of AI (Nuseir et al., 2021). On the one hand, this includes the developers of AI, who make technical decisions about which aspects to include and which to omit. On the other hand, it includes the people deciding on budgets, on the domains in which AI is to be created, and on whom to employ. This lack of diversity in the field leads to questions remaining untackled and system malfunctions going undetected. With the emergence of AI, systems are created that
build on data, logic, and power relations from the past. Biased AI, when in use, can cause
discrimination and thus, become a discriminatory AI system.
       Face recognition. Recent advances in machine learning have improved face recognition algorithms, but their performance is highly biased: they perform better on males than on females and have difficulties identifying children and the elderly (Buolamwini & Gebru, 2018; Smith & Ricanek, 2020). Ethnicity, skin colour, and facial shape in particular affect the accuracy of face recognition systems (Serna et al., 2019). For example, Google Photos identified Black people as gorillas (Snow, 2018) and Nikon’s camera software identified Asians as constantly blinking (Rose, 2010).
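Audits such as Buolamwini and Gebru (2018) report accuracy separately for intersectional subgroups rather than as a single overall number. The sketch below illustrates this idea with hypothetical labels and subgroup tags; the data and the (gender, skin tone) grouping are assumptions for illustration only.

```python
from collections import defaultdict

def accuracy_by_subgroup(y_true, y_pred, subgroups):
    """Accuracy per intersectional subgroup instead of a single aggregate score."""
    stats = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
    for t, p, g in zip(y_true, y_pred, subgroups):
        stats[g][0] += int(t == p)
        stats[g][1] += 1
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical evaluation data; subgroup = (gender, skin tone) as in intersectional audits.
y_true    = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred    = [1, 0, 0, 0, 0, 1, 1, 0]
subgroups = [("female", "darker")] * 4 + [("male", "lighter")] * 4

for subgroup, acc in accuracy_by_subgroup(y_true, y_pred, subgroups).items():
    print(subgroup, round(acc, 2))  # reveals gaps that a single overall accuracy would hide
```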
       Healthcare system and diagnostics. Clinical trials have mainly been conducted on men, and less data is available for women, resulting in biases against women in the diagnosis and treatment of diseases (Hamberg, 2008). Research in the field of gender medicine highlights the differences
between women and men in diagnosis and treatment as essential to achieving equity in
healthcare across genders (Baggio et al., 2013). As heart diseases in the past were seen as a
predominantly male problem, less attention was paid to the symptoms of women, resulting in
unbalanced datasets and replicated biases (Paviglianiti & Pasero, 2020). Racial minorities
suffer from inferior access to treatment in medical care due to unconscious bias in medical
decision-making (Chapman et al., 2013; Vartan, 2019).
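One common way to counteract such unbalanced datasets is to reweight training examples so that under-represented groups are not effectively ignored during learning. The following sketch assumes a simple inverse-frequency scheme and purely illustrative data; real projects would choose weights and groups based on the clinical question at hand.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample inversely to its group's frequency so that
    under-represented groups (e.g., women in heart-disease data) count equally."""
    counts = Counter(groups)
    n_samples, n_groups = len(groups), len(counts)
    return [n_samples / (n_groups * counts[g]) for g in groups]

# Illustrative imbalance: far more male than female records.
groups = ["male"] * 8 + ["female"] * 2
weights = inverse_frequency_weights(groups)
print(weights[:8])  # 0.625 for each male sample
print(weights[8:])  # 2.5 for each female sample
# Such weights can typically be passed to a training routine's sample-weight argument.
```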
       Word embeddings. Word embeddings are vector representations of individual words, created by capturing the semantic relations of words in large text corpora, so that words with similar meanings receive similar representations. Online sources like Wikipedia or Google News provide such training data. Word embeddings show how gender ideology inherent in language can lead to gender-biased systems: bias can be incorporated in different linguistic features such as stereotypical descriptions (e.g., if the word embedding associates man with doctor but woman with nurse), the listing of the male first, or the underrepresentation of women in texts (Leavy, 2018).
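A minimal sketch of how such bias can be made visible: occupation vectors are compared against a he-she direction, so that stereotypical associations show up as a signed score. The tiny hand-made vectors below are assumptions for illustration; real analyses would use embeddings pretrained on corpora such as Wikipedia or Google News.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy 3-dimensional embeddings, chosen only to illustrate the probe.
emb = {
    "he":     [1.0, 0.1, 0.0],
    "she":    [-1.0, 0.1, 0.0],
    "doctor": [0.6, 0.8, 0.1],
    "nurse":  [-0.7, 0.7, 0.1],
}

# Gender direction: the difference between "he" and "she".
gender_dir = [a - b for a, b in zip(emb["he"], emb["she"])]

for word in ("doctor", "nurse"):
    bias = cosine(emb[word], gender_dir)
    print(word, round(bias, 2))  # positive = closer to "he", negative = closer to "she"
```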
   Coaching. To date, there has been little research into possible biases in the many emerging AI coachbots. Unlike in other sectors, many AI coaching technology start-ups are led by women. However, given that these bots usually access large language models such as ChatGPT, they are likely to suffer from biases similar to those seen in other industries. Thus, developers and users are encouraged to be aware of possible biases ingrained in coachbots as these systems gain wider traction among both organisations and individuals (Passmore & Tee, 2023).
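One lightweight awareness check that developers or users could run is a counterfactual prompt test: the coachbot receives pairs of otherwise identical coaching scenarios that differ only in a gendered name, and the answers are compared for systematic differences. The sketch below is hypothetical; ask_coachbot is a placeholder for whatever interface a given coachbot exposes.

```python
def ask_coachbot(prompt: str) -> str:
    """Placeholder: in a real test this would call the coachbot under evaluation."""
    raise NotImplementedError("connect this function to the coachbot being tested")

def counterfactual_pairs(template: str, names: list[str]) -> dict[str, str]:
    """Fill the same coaching scenario with different names and collect the answers."""
    return {name: ask_coachbot(template.format(name=name)) for name in names}

template = (
    "{name} is a team lead asking for advice on negotiating a salary raise. "
    "What three steps would you recommend?"
)

# In practice the collected answers would then be compared, e.g., for differences
# in the assertiveness of the advice or in the salary figures suggested:
# answers = counterfactual_pairs(template, ["Sarah", "Thomas"])
```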
Solutions to AI bias
Solutions to reduce discrimination by AI span the whole AI lifecycle: creating equitable algorithms requires ethical fundaments, education, and diversity among AI experts; furthermore, bias mitigation techniques help to properly adapt algorithms; and finally, more objective decision-making as well as corporate AI governance need to be ensured. Thus, a holistic perspective is needed when aiming to resolve existing power imbalances and biases in AI.
       Ethical fundaments. When it comes to AI, context matters. This becomes apparent
when looking at different use cases of originally very similar AI systems. Computer vision, for
instance, might be used for bird protection at wind turbines. Misinterpretation in this instance has very different consequences from the use of computer vision in border control scenarios, where vulnerable people such as refugees might be erroneously targeted by border protection mechanisms. AI, like any other tool, is not inherently good or bad. It may be used in evil or virtuous ways and for evil or virtuous reasons. Therefore, it is fundamentally important to consider the context in which AI is designed and trained as well as the context in which it is (supposed to be) used. Thus, the objectives behind the use of AI and the objectives embedded in these systems should be transparent and verifiable. As a vital step towards this objective,
ethical fundaments for AI development are required.
       Ethical data and algorithms. To reduce bias, it is not only important to enhance the
algorithms themselves but also to improve underlying datasets by using unbiased sources and
integrating fairness evaluations. Data science teams ought to reflect the population for which the algorithm is designed, include, e.g., outliers and diverse groups, and use visual analytics tools to discover intersectional bias (Fahse & Huber, 2021). Enhanced algorithms have bias considerations directly embedded in their design. In addition, recommender systems can automatically select the algorithm with the best accuracy-diversity trade-off (Gutowski et al., 2021). Before go-live and during the application of AI, fairness evaluations and audit processes ought to be applied to check for biases (Bryant & Howard, 2019). Since AI learns over time, it requires
continuous assessments to ensure fair outputs (Parikh et al., 2019).
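As a minimal illustration of what such a pre-go-live fairness evaluation could look like, the sketch below compares true-positive and false-positive rates across groups and flags the model if the gap exceeds a chosen tolerance. The tolerance value, data, and pass/fail rule are assumptions; real audits would use representative held-out data and be repeated as the system keeps learning.

```python
from collections import defaultdict

def rates_by_group(y_true, y_pred, groups):
    """True-positive and false-positive rate per group (an equalized-odds-style check)."""
    agg = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        key = ("tp" if p else "fn") if t else ("fp" if p else "tn")
        agg[g][key] += 1
    return {
        g: {
            "tpr": c["tp"] / max(c["tp"] + c["fn"], 1),
            "fpr": c["fp"] / max(c["fp"] + c["tn"], 1),
        }
        for g, c in agg.items()
    }

def audit(y_true, y_pred, groups, tolerance=0.1):
    """Pass only if the largest per-group gap in TPR and FPR stays within the tolerance."""
    rates = rates_by_group(y_true, y_pred, groups)
    tpr_gap = max(r["tpr"] for r in rates.values()) - min(r["tpr"] for r in rates.values())
    fpr_gap = max(r["fpr"] for r in rates.values()) - min(r["fpr"] for r in rates.values())
    return (tpr_gap <= tolerance and fpr_gap <= tolerance), rates

# Illustrative audit run; in production this would be repeated regularly.
ok, report = audit(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 0, 1, 1, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print("audit passed:", ok, report)
```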
However, often this is not feasible because of the numerous steps and inputs the algorithm uses.
Furthermore, algorithmic outcomes require human supervision and trust in their maintenance
(Fahse & Huber, 2021). Likewise, Bîgu and Cernea (2019) recommend that recruiters use AI-based hiring software only for support and not for final decision-making. The combination of
algorithmic and human controls has the potential to avoid bias (Wiens et al., 2020). In the
workplace, data protection ought to be regulated collectively with employee representatives or
with policies that incentivise firms to be aware of protected groups (Fu et al., 2022).
       Power imbalances and biased norms perpetuated through AI. Having read the
numerous examples presented above, the question might remain how discriminatory AI affects people in their everyday lives, for instance within their casual social media behaviour. This concluding section is intended to support a reflective mindset in one’s own usage of AI-based products and to raise awareness of such usage among children and teenagers as vulnerable groups.
       Biases in both people and AI. One of the first things perceived about someone else is their looks. What is seen repeatedly is perceived as the given, or at least as something people are not bothered by. If the people surrounding me look like me, I probably do not find
it noticeable. At the same time, people are also influenced by what they see, not only in direct
interaction, but also in the media: in the films and the series watched, in the magazines read,
and of course in what is consumed online, on the Internet and on social media. Positions of
power in the media have historically been male dominated: a small number of White
heterosexual men decided what is appropriate for consumers to watch on TV, in cinemas, and
to read and view in print media. In the age of social media, everyone with a smartphone can be
a consumer and content producer. However, if the algorithm that dictates what is shown on the feeds people consume within social media apps is primarily trained with data depicting a certain image, people are influenced in their perception of others. If the algorithm pushes
photos of people with a certain body type for more visibility on a social media app, it can
reproduce overtly sexualised and pressure-inducing images of women. What if someone has
fun expressing themselves on social media but it is harder for them to get views because an
algorithm prefers to give visibility to photos of primarily thin White models with unnatural
facial features?
receives the wrong (or no) medical treatment – and all of that and more because of algorithms
deciding favourably for the already privileged and unfavourably for the previously
underprivileged.
       AI as a powerful tool. All these implications show how influential and transformative
the institutions and individuals who design, develop, decide, and use AI are – and that more
diversity in all these areas influencing algorithmic decision-making is required. Technology
embodies society. With the state AI is in now, you could picture it as people from a society
standing in a room with many doors representing gateways to opportunities, chances, access to
resources, etc. But the number of doors you see and could open to fulfill your individual interests and needs depends on aspects that should not divide a society but should be treated equally.
These include different diversity dimensions relating to gender, race, ethnicity, age,
socioeconomic status, geographical location, religion, disability, sexual identity, language, and
more. Otherwise, AI embodies and perpetuates the prejudices and injustices of “real life”, in
policies, the professional environment, and private lives.
       Chances for diversity through AI. How could AI be a driver for diversity, justice, and
equity? One possibility is people with influence in the AI realm who can bring more diversity into AI teams and have AI trained with unbiased datasets. Another option is bias mitigation techniques and a fair decision-making process. Incorporating such aspects into AI governance plays a strong role, too. Intersectional and inclusive feminism is a perspective that includes all of these aspects and makes it feasible to tackle the sources of discriminatory AI systems. It can shape AI in a way so that AI not only avoids biases, but promotes diversity, makes power imbalances and biases visible, drives business value, and is used in cases that fulfill Human Rights and the Sustainable Development Goals.
Q3: What can each and every one of us do to make AI more diverse and equitable?
Conclusion
The interrelatedness of AI and diversity is manifold and complex. Too little diversity in society, datasets, and decision-making is reflected in AI systems that disadvantage those with diverse attributes. This is evident in the many instances of biased AI in a broad range of business sectors. A lack of diversity among AI development teams and decision-makers results in the exclusion of relevant voices and in erroneous system behaviours being overlooked. Thus, prevailing power
structures and inequalities as well as biases are perpetuated and even intensified by AI, possibly
resulting in a backlash for diversity in society, datasets, and decision-making. This vicious
circle needs to be broken by embracing diversity as a chance and incorporating ethical
fundaments, datasets, and algorithms with techniques to mitigate bias, and more objective and
fairer decision-making. If successful, AI can become a driver for diversity, transforming our world into a more equitable place for all.
References
Ahonen, P. (2015). Ethico-politics of diversity and its production. The Routledge Companion to
Ethics, Politics and Organizations.
https://www.academia.edu/12893448/Ethico_politics_of_diversity_and_its_production
Baggio, G., Corsini, A., Floreani, A., Giannini, S., & Zagonel, V. (2013). Gender medicine: A task for
the third millennium. Clinical Chemistry and Laboratory Medicine (CCLM), 51(4), 713–727.
https://doi.org/10.1515/cclm-2012-0849
Bertrand, J., & Weill, L. (2021). Do algorithms discriminate against African Americans in lending?
Economic Modelling, 104, 105619. https://doi.org/10.1016/j.econmod.2021.105619
Bîgu, D., & Cernea, M.-V. (2019). Algorithmic bias in current hiring practices: An ethical examination. Proceedings of the 13th International Management Conference, 1068–1073.
Bishop, S. (2021). Influencer Management Tools: Algorithmic Cultures, Brand Safety, and Bias.
Social Media + Society, 7(1), 205630512110030. https://doi.org/10.1177/20563051211003066
Biswas, S., & Rajan, H. (2020). Do the machine learning models on a crowd sourced platform exhibit
bias? An empirical study on model fairness. Proceedings of the 28th ACM Joint Meeting on European
Software Engineering Conference and Symposium on the Foundations of Software Engineering, 642–
653. https://doi.org/10.1145/3368089.3409704
Bonezzi, A., & Ostinelli, M. (2021). Can algorithms legitimize discrimination? Journal of
Experimental Psychology: Applied, 27(2), 447–459. https://doi.org/10.1037/xap0000294
Bryant, D., & Howard, A. (2019). A Comparative Analysis of Emotion-Detecting AI Systems with
Respect to Algorithm Performance and Dataset Diversity. Proceedings of the 2019 AAAI/ACM
Conference on AI, Ethics, and Society, 377–382. https://doi.org/10.1145/3306618.3314284
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in
Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability
and Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
Burlina, P., Joshi, N., Paul, W., Pacheco, K. D., & Bressler, N. M. (2021). Addressing Artificial
Intelligence Bias in Retinal Diagnostics. Translational Vision Science & Technology, 10(2), 13.
https://doi.org/10.1167/tvst.10.2.13
Chapman, E. N., Kaatz, A., & Carnes, M. (2013). Physicians and Implicit Bias: How Doctors May
Unwittingly Perpetuate Health Care Disparities. Journal of General Internal Medicine, 28(11), 1504–
1510. https://doi.org/10.1007/s11606-013-2441-1
Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 139–167.
Criado-Perez, C. (2020). Invisible women: Exposing data bias in a world designed for men. Vintage.
Dolata, M., Feuerriegel, S., & Schwabe, G. (2022). A sociotechnical view of algorithmic fairness. Information Systems Journal, 32(4), 754–818. https://doi.org/10.1111/isj.12370
Edwards, J., Clark, L., & Perrone, A. (2021). LGBTQ-AI? Exploring Expressions of Gender and Sexual Orientation in Chatbots. CUI '21: Proceedings of the 3rd Conference on Conversational User Interfaces.
Fahse, T., & Huber, V. (2021). Managing Bias in Machine Learning Projects. In F. Ahlemann, R.
Schütte, & S. Stieglitz (Eds.), Innovation Through Information Systems (Vol. 7, pp. 94–109). Springer
International Publishing.
https://aisel.aisnet.org/wi2021/RDataScience/Track09/7?utm_source=aisel.aisnet.org%2Fwi2021%2F
RDataScience%2FTrack09%2F7&utm_medium=PDF&utm_campaign=PDFCoverPages
Fu, R., Aseri, M., Singh, P. V., & Srinivasan, K. (2022). “Un”Fair Machine Learning Algorithms.
Management Science, 68(6), 4173–4195. https://doi.org/10.1287/mnsc.2021.4065
Gu, J., & Oelke, D. (2019). Understanding Bias in Machine Learning (arXiv:1909.01866). arXiv.
http://arxiv.org/abs/1909.01866
Gutowski, N., Amghar, T., Camp, O., & Chhel, F. (2021). Gorthaur-EXP3: Bandit-based selection
from a portfolio of recommendation algorithms balancing the accuracy-diversity dilemma.
Information Sciences, 546, 378–396. https://doi.org/10.1016/j.ins.2020.08.106
Hamberg, K. (2008). Gender Bias in Medicine. Women’s Health, 4(3), 237–243.
https://doi.org/10.2217/17455057.4.3.237
Harris, C. G. (2020). Mitigating Cognitive Biases in Machine Learning Algorithms for Decision
Making. Companion Proceedings of the Web Conference 2020, 775–781.
https://doi.org/10.1145/3366424.3383562
Hauer, M. P., Adler, R., & Zweig, K. (2021). Assuring Fairness of Algorithmic Decision Making.
2021 IEEE International Conference on Software Testing, Verification and Validation Workshops
(ICSTW), 110–113. https://doi.org/10.1109/ICSTW52544.2021.00029
Jago, A. S., & Laurin, K. (2022). Assumptions About Algorithms’ Capacity for Discrimination.
Personality and Social Psychology Bulletin, 48(4), 582–595.
https://doi.org/10.1177/01461672211016187
Köchling, A., Riazy, S., Wehner, M. C., & Simbeck, K. (2021). Highly Accurate, But Still
Discriminatory: A Fairness Evaluation of Algorithmic Video Analysis in the Recruitment Context.
Business & Information Systems Engineering, 63(1), 39–54. https://doi.org/10.1007/s12599-020-
00673-w
Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in
machine learning. Proceedings of the 1st International Workshop on Gender Equality in Software
Engineering, 14–16. https://doi.org/10.1145/3195570.3195580
Li, X. (2021). Analysis of Racial Discrimination in Artificial Intelligence from the Perspective of
Social Media, Search Engines, and Future Crime Prediction Systems: 6th International Conference on
Contemporary Education, Social Sciences and Humanities. (Philosophy of Being Human as the Core
of Interdisciplinary Research) (ICCESSH 2021), China. https://doi.org/10.2991/assehr.k.210902.029
Mann, M., & Matzner, T. (2019). Challenging algorithmic profiling: The limits of data protection and
anti-discrimination in responding to emergent discrimination. Big Data & Society, 6(2),
205395171989580. https://doi.org/10.1177/2053951719895805
Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Defining organizational AI
governance. AI and Ethics. https://doi.org/10.1007/s43681-022-00143-x
Mehrabi, N., Naveed, M., Morstatter, F., & Galstyan, A. (2021). Exacerbating Algorithmic Bias
through Fairness Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10),
8930–8938. https://doi.org/10.1609/aaai.v35i10.17080
Mitchell, S., Potash, E., Barocas, S., D’Amour, A., & Lum, K. (2021). Algorithmic Fairness: Choices,
Assumptions, and Definitions. Annual Review of Statistics and Its Application, 8(1), 141–163.
https://doi.org/10.1146/annurev-statistics-042720-125902
Nuseir, M. T., Al Kurdi, B. H., Alshurideh, M. T., & Alzoubi, H. M. (2021). Gender Discrimination at
Workplace: Do Artificial Intelligence (AI) and Machine Learning (ML) Have Opinions About It. In
A. E. Hassanien, A. Haqiq, P. J. Tonellato, L. Bellatreche, S. Goundar, A. T. Azar, E. Sabir, & D.
Bouzidi (Eds.), Proceedings of the International Conference on Artificial Intelligence and Computer
Vision (AICV2021) (pp. 301–316). Springer International Publishing. https://doi.org/10.1007/978-3-
030-76346-6_28
Nyarko, J., Goel, S., & Sommers, R. (2021). Breaking Taboos in Fair Machine Learning: An
Experimental Study. Equity and Access in Algorithms, Mechanisms, and Optimization, 1–11.
https://doi.org/10.1145/3465416.3483291
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an
algorithm used to manage the health of populations. Science, 366(6464), 447–453.
https://doi.org/10.1126/science.aax2342
Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing Bias in Artificial Intelligence in Health
Care. JAMA, 322(24), 2377. https://doi.org/10.1001/jama.2019.18058
Passmore, J., & Tee, D. (2023). Can Chatbots replace human coaches? Issues and dilemmas for the
coaching profession, coaching clients and for organisations. The Coaching Psychologist, 19(1), 47–
54. https://doi.org/10.53841/bpstcp.2023.19.1.47
Paviglianiti, A., & Pasero, E. (2020). VITAL-ECG: A de-bias algorithm embedded in a gender-
immune device. 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, 314–318.
https://doi.org/10.1109/MetroInd4.0IoT48571.2020.9138291
Puyol-Antón, E., Ruijsink, B., Piechnik, S. K., Neubauer, S., Petersen, S. E., Razavi, R., & King, A.
P. (2021). Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance
in Deep Learning Based Segmentation. In M. de Bruijne, P. C. Cattin, S. Cotin, N. Padoy, S. Speidel,
Y. Zheng, & C. Essert (Eds.), Medical Image Computing and Computer Assisted Intervention –
MICCAI 2021 (Vol. 12903, pp. 413–423). Springer International Publishing.
https://doi.org/10.1007/978-3-030-87199-4_39
Rodolfa, K. T., Lamba, H., & Ghani, R. (2021). Empirical observation of negligible fairness–accuracy
trade-offs in machine learning for public policy. Nature Machine Intelligence, 3(10), 896–904.
https://doi.org/10.1038/s42256-021-00396-x
Rose, A. (2010). Are face-detection cameras racist? Time.
https://content.time.com/time/business/article/0,8599,1954643,00.html
Sen, S., Dasgupta, D., & Gupta, K. D. (2020). An Empirical Study on Algorithmic Bias. 2020 IEEE
44th Annual Computers, Software, and Applications Conference (COMPSAC), 1189–1194.
https://doi.org/10.1109/COMPSAC48688.2020.00-95
Serna, I., Morales, A., Fierrez, J., Cebrian, M., Obradovich, N., & Rahwan, I. (2019). Algorithmic
Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics
(arXiv:1912.01842). arXiv. http://arxiv.org/abs/1912.01842
            Shi, S., Wei, S., Shi, Z., Du, Y., Fan, W., Fan, J., Conyers, Y., & Xu, F. (2020). Algorithm Bias
            Detection and Mitigation in Lenovo Face Recognition Engine. In X. Zhu, M. Zhang, Y. Hong, & R.
            He (Eds.), Natural Language Processing and Chinese Computing (Vol. 12431, pp. 442–453).
            Springer International Publishing. https://doi.org/10.1007/978-3-030-60457-8_36
            Singh, R., Agarwal, A., Singh, M., Nagpal, S., & Vatsa, M. (2020). On the Robustness of Face
            Recognition Algorithms Against Attacks and Bias. Proceedings of the AAAI Conference on Artificial
            Intelligence, 34(09), 13583–13589. https://doi.org/10.1609/aaai.v34i09.7085
            Smith, P., & Ricanek, K. (2020). Mitigating Algorithmic Bias: Evolving an Augmentation Policy that
            is Non-Biasing. 2020 IEEE Winter Applications of Computer Vision Workshops (WACVW), 90–97.
            https://doi.org/10.1109/WACVW50321.2020.9096905
            Snow, J. (2018). Google Photos Still Has a Problem with Gorillas. MIT Technology Review.
            https://www.technologyreview.com/2018/01/11/146257/google-photos-still-has-a-problem-with-
            gorillas/
            Todolí-Signes, A. (2019). Algorithms, artificial intelligence and automated decisions concerning
            workers and the risks of discrimination: The necessary collective governance of data protection.
            Transfer: European Review of Labour and Research, 25(4), 465–481.
            https://doi.org/10.1177/1024258919876416
United Nations. (1948, December 10). Universal Declaration of Human Rights. https://www.un.org/en/about-us/universal-declaration-of-human-rights
Vartan, S. (2019, October 24). Racial Bias Found in a Major Health Care Risk Algorithm. Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
            Wen, M., Bastani, O., & Topcu, U. (2021). Algorithms for Fairness in Sequential Decision Making. In
            A. Banerjee & K. Fukumizu (Eds.), Proceedings of The 24th International Conference on Artificial
            Intelligence and Statistics, PMLR (Vol. 130, pp. 1144–1152).
            https://doi.org/10.48550/arXiv.1901.08568
            Wiens, J., Price, W. N., & Sjoding, M. W. (2020). Diagnosing bias in data-driven algorithms for
            healthcare. Nature Medicine, 26(1), 25–26. https://doi.org/10.1038/s41591-019-0726-6
            World Economic Forum. (2019). Global Gender Gap Report 2020 (Insight Report, pp. 1–371). World
            Economic Forum. https://www3.weforum.org/docs/WEF_GGGR_2020.pdf
            Zhang, X., Khalili, M. M., Tekin, C., & Liu, M. (2019). Group Retention when Using Machine
            Learning in Sequential Decision Making: The Interplay between User Dynamics and Fairness.
            Advances in Neural Information Processing Systems, 32, pp. 1-10.
            Zottola, S. A., Desmarais, S. L., Lowder, E. M., & Duhart Clarke, S. E. (2022). Evaluating Fairness of
            Algorithmic Risk Assessment Instruments: The Problem With Forcing Dichotomies. Criminal Justice
            and Behavior, 49(3), 389–410. https://doi.org/10.1177/00938548211040544