
Marko et al. BMC Medical Informatics and Decision Making (2025) 25:57
https://doi.org/10.1186/s12911-025-02884-1

SYSTEMATIC REVIEW | Open Access

Examining inclusivity: the use of AI and diverse populations in health and social care: a systematic review

John Gabriel O. Marko1*, Ciprian Daniel Neagu1 and P. B. Anand2

Abstract
Background: Artificial intelligence (AI)-based systems are being rapidly integrated into the fields of health and social care. Although such systems can substantially improve the provision of care, diverse and marginalized populations are often incorrectly or insufficiently represented within these systems. This review aims to assess the influence of AI on health and social care among these populations, particularly with regard to issues related to inclusivity and regulatory concerns.

Methods: We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Six leading databases were searched, and 129 articles were selected for this review in line with predefined eligibility criteria.

Results: This research revealed disparities in AI outcomes, accessibility, and representation among diverse groups due to biased data sources and a lack of representation in training datasets, which can potentially exacerbate inequalities in care delivery for marginalized communities.

Conclusion: AI development practices, legal frameworks, and policies must be reformulated to ensure that AI is applied in an equitable manner. A holistic approach must be used to address disparities, enforce effective regulations, safeguard privacy, promote inclusion and equity, and emphasize rigorous validation.

Keywords: Artificial intelligence, Diverse population, Healthcare, Inclusivity in artificial intelligence, Marginalized population

*Correspondence: John Gabriel O. Marko, j.g.o.marko@bradford.ac.uk
1 University of Bradford, Faculty of Engineering and Digital Technology, Bradford, UK
2 University of Bradford, Faculty of Management, Law and Social Sciences, Bradford, UK

© The Author(s) 2025. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Background
Rationale of the study
Artificial intelligence (AI) is significantly restructuring the healthcare landscape. Healthcare professionals are leveraging AI to enhance diagnostic accuracy, optimize patient-care planning, and improve ongoing monitoring practices [1]. Additionally, AI can be used to navigate vast medical datasets, revealing hidden patterns and insights that clinicians can use to accelerate decision making and make informed decisions [2]. Furthermore, AI affords advanced problem-solving strategies beyond traditional human capacities, enabling a nuanced
approach to medical challenges and supporting cutting-edge and personalized healthcare [3]. However, these advancements are impeded by several challenges. The equitable impact of AI, specifically its effects on diverse and marginalized populations, is attracting considerable attention [4]. These populations already experience systemic healthcare disparities, and improperly designed or intrinsically biased AI systems may perpetuate these disparities [5]. Studies have shown that AI and machine learning (ML) models sometimes fail, specifically for women, individuals from racial minority groups, and individuals with public insurance [6]. Moreover, some models have demonstrated biases, such as recommending disparate treatments based on race and depriving Black patients of crucial care management programs [7, 8]. Despite the recognition of these risks, studies addressing the impacts of AI systems on these populations within the context of health and social care have limitations. Additionally, the current legal and ethical frameworks guiding AI applications often disregard diversity and inclusivity, failing to protect marginalized populations [9, 10].

Objectives
We systematically reviewed the available literature with the goal of understanding the impacts of the AI systems used in health and social care on diverse and marginalized populations. Marginalized populations were defined in terms of socioeconomic status, race, ethnicity, gender, disability status, and sexual orientation; indigenous individuals, immigrants, and refugees were also included in this category. We evaluated the adequacy of the existing legal and ethical frameworks to the task of ensuring inclusivity and equity in the use of AI in healthcare.

Methods
To guide the systematic review process from the preliminary search phase to the final screening phase, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [11] were followed in this research. The computer-assisted qualitative data analysis software NVivo 14 [12] (Lumivero) was used to facilitate efficient data management and analysis, and the framework method [13] was employed.

Eligibility criteria
A comprehensive selection of studies was conducted on the basis of the following eligibility criteria:

1. Studies specifically exploring AI systems' use and impact within health and social care settings, including diagnostics, treatment, patient monitoring, and administration.
2. Studies on the effects of AI systems on diverse and marginalized populations within health and social care.
3. Studies discussing the legal and ethical dimensions of AI in health and social care, especially as they impact diverse and marginalized populations.
4. Original research articles (including qualitative, quantitative, and mixed-methods research), review articles, and case studies published in peer-reviewed journals.
5. Studies published in English only.

Information sources and search criteria
The sample, phenomenon of interest, design, evaluation, research type (SPIDER) framework [13] was used to formulate eligibility criteria for studies and to develop an effective search string that could ensure that this research employed a comprehensive and rigorous review approach. The SPIDER framework is particularly useful for qualitative and mixed-method research. Table 1 highlights the influence of each component of the SPIDER framework on our search string.

Table 1 Components of the SPIDER framework

Sample: Studies focusing on diverse populations, such as marginalized populations, underrepresented groups, ethnic minority groups, and persons with disabilities. Such terms were included in our search string.
Phenomenon of interest: The application and impact of AI systems in the context of health and social care, which led to the use of search terms such as "Artificial intelligence", "Machine learning", "AI systems", "Health AI", and "AI in social care".
Design: Not limited to specific design types.
Evaluation: Addressed by search terms such as "impact", "effect", "consequences", "bias", and "discrimination".
Research type: All relevant studies, including quantitative, qualitative, and mixed-methods studies, were included.

The search string developed based on Table 1 was as follows: ("artificial intelligence" OR "machine learning" OR "AI systems" OR "health AI" OR "AI in social care") AND ("diverse populations" OR "marginalized populations" OR "underrepresented groups" OR "ethnic minorities" OR "persons with disabilities") AND ("impact" OR "effect" OR "consequences" OR "bias" OR "discrimination").

Selection and sources of evidence
A methodical search was conducted on June 28, 2023, by using the aforementioned search string in six prominent databases, namely, Google Scholar, Web of Science, Embase, IEEE Xplore, Scopus, and PubMed (MEDLINE).
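For readers who wish to adapt the search to other databases, the Boolean string above can be regenerated from the three SPIDER-derived term blocks with a few lines of code. This is an illustrative sketch rather than part of the authors' workflow: the term lists are taken from Table 1, while the helper name or_block is our own.

```python
# Sketch: assemble the Boolean search string from the SPIDER term blocks.
# Term lists come from Table 1; the helper function name is illustrative.

def or_block(terms):
    """Join synonyms with OR and wrap the block in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Phenomenon of interest: AI-related terms
phenomenon_terms = ["artificial intelligence", "machine learning",
                    "AI systems", "health AI", "AI in social care"]
# Sample: population-related terms
sample_terms = ["diverse populations", "marginalized populations",
                "underrepresented groups", "ethnic minorities",
                "persons with disabilities"]
# Evaluation: impact-related terms
evaluation_terms = ["impact", "effect", "consequences", "bias",
                    "discrimination"]

# Concept blocks are combined with AND, synonyms within a block with OR.
search_string = " AND ".join(
    or_block(t) for t in (phenomenon_terms, sample_terms, evaluation_terms))

print(search_string)
```

Keeping the synonym lists in one place makes it straightforward to rerun the same query consistently across any database that accepts Boolean syntax.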

Data, including titles, abstracts, keywords, authors' names and affiliations, journal names, and publication year, were extracted from the records thus identified. This information was transferred to Sysrev, a web-based platform designed to facilitate data extraction, data curation, and systematic review [14]. Two reviewers subsequently performed a comprehensive assessment of the records thus identified with the goal of determining whether the inclusion criteria were met.

Risk of bias assessment
The risk of bias in the included studies was systematically assessed by two independent reviewers to minimize individual bias and ensure a comprehensive evaluation. Any discrepancies were resolved through discussion, with a third reviewer consulted if necessary to reach consensus. NVivo 14 was employed to facilitate the qualitative data analysis, as suggested by Jackson and Bazeley [15]. In addition, our analysis adhered to the framework described by Gale et al. [13]. Notably, this method enabled us to make comparisons both within and across cases.

Data charting and data items
The preliminary search produced extensive data that were efficiently managed using framework matrices with the assistance of NVivo 14 [12]. This tool was used to categorize and examine the data systematically; each row represented an author, while columns indicated different codes or themes that were identified during the literature analysis. This matrix structure provided concise overviews of the approaches to various themes taken by each author.

Synthesis of the results
We used a dual approach to analyse the descriptive and conceptual aspects of the studies. First, we examined the foundational data for these studies, including by noting keyword frequencies, as illustrated in Fig. 1. We subsequently used a framework methodology to extract and synthesize emerging themes, note preliminary patterns, and establish a thematic framework on the basis of recurrent issues and concepts. Relevant study segments were assigned to these themes, and the coded data were structured to facilitate comparative analysis. We traced patterns, relationships, and areas of contention across studies pertaining to each theme with the goal of obtaining a comprehensive understanding of the subject.

Fig. 1 Item density visualization of the co-occurrence analysis of high-frequency keywords

Selection of the source of evidence
We initially identified 1,173 articles. After a preliminary screening, 955 articles were excluded because they did



not meet the eligibility criteria, were out of the scope of the study, lacked sufficient methodological rigor, or were published in non-peer-reviewed sources. The remaining 218 articles underwent a thorough evaluation. Among these, 68 were identified as duplicates, 18 could not be retrieved due to subscription barriers, 1 did not address AI in healthcare, and 2 were letters to the editor. Consequently, the final review comprised 129 articles. The design of the search and screening stages is illustrated in Fig. 2.

Fig. 2 PRISMA flow chart for the stages of the systematic review

Results
Syntheses of the results
This section presents the synthesis of our findings, which are structured based on the thematic framework described in the Methods section.

Bias
Bias refers to the amplification of preexisting disparities, often associated with socioeconomic status, race, ethnicity, religion, gender, disability status, or sexual orientation, which in turn exacerbates inequalities within healthcare systems [16–18]. The integration of AI in healthcare reveals several systemic limitations, notably in the form of racial and ethnic disparities in conditions like cardiovascular disease [19]. Addressing these disparities requires systemic change, focusing on equity rather than solely advancing treatments. Studies show that AI models frequently rely on datasets that fail to reflect the diversity of global patient populations, particularly in areas like medical imaging [20–23]. For example, dermatological AI models may accurately diagnose skin conditions in light-skinned individuals but perform poorly for those

with darker skin due to underrepresentation in training data [24]. These disparities extend to underrepresented LGBTQ+ communities, a topic that remains under-researched [25, 26].

The COVID-19 pandemic has exposed how bias, discrimination, and racism adversely affect health outcomes [7, 27–32]. The increased adoption of digital healthcare solutions has raised concerns about exacerbating disparities in digital access for disadvantaged populations [33, 34]. Virtual care, for example, may worsen health disparities among underserved communities that lack reliable access to digital technologies [35]. Furthermore, AI systems are susceptible to biases within health information technologies, where the choice of datasets and outcomes can influence unequal care delivery [36, 37]. Such biases can affect the allocation of healthcare resources based on demographic factors or introduce errors into language models used in clinical environments [38]. Racial bias has been observed in algorithms used to assess kidney function, which is critical in diagnosing and managing chronic kidney disease [39]. Similarly, facial recognition algorithms in healthcare may misidentify individuals from minority groups, leading to disparities in care [40, 41].

Previous research on AI in healthcare has primarily used retrospective data, which, while informative, often inherits previous biases and fails to capture real-time clinical nuances [42–44]. Geographic disparities in AI model training further limit the global applicability of these systems and introduce additional biases [42, 45]. Dataset imbalances can compromise the predictive accuracy of AI models, particularly for underrepresented groups [46–48]. Clinical trials, a key part of medical research, also face representation issues. Despite the higher prevalence of conditions like congenital heart disease among Black and Hispanic populations, these groups remain underrepresented in pivotal trials [19, 49]. Inadequate audit mechanisms that fail to account for shifting population risks further heighten the dangers faced by underserved communities [50].

Socioeconomic factors, including education level, residential location, and economic status, significantly impact health outcomes. Women from ethnic minority groups who live in poverty and are subject to gender myths and stereotypes often experience more severe health disparities [51]. For example, the healthcare costs associated with Black patients are often lower than those for white patients, reflecting systemic disparities in care access and barriers such as discrimination and mistrust. Consequently, algorithms that rely on cost as a primary metric may undervalue the healthcare needs of Black individuals [48, 52]. The uncritical use of biased models in clinical decision-making carries significant implications, underscoring the need for caution when applying machine learning in healthcare [53]. While AI holds the potential to extend specialized care to underserved populations, financial barriers could further deepen healthcare access inequalities [54, 55]. Emerging solutions like federated learning offer potential to reduce biases; however, accessibility remains an issue. Smaller medical institutions may lack the resources needed to adopt advanced AI technologies, and the dominance of large corporations in AI could limit its widespread use, thereby perpetuating healthcare inequalities [56].

Regulations and policy
The integration of AI into healthcare brings immense opportunities but also significant challenges, making the need for robust regulatory frameworks paramount. While AI can enhance healthcare delivery, it also introduces risks that must be carefully managed. Effective regulations are required to ensure the safety, efficacy, and ethical deployment of AI technologies in healthcare. Guidelines from bodies such as the World Health Organization (WHO) stress the importance of safety and effectiveness, alongside fostering dialogue among key stakeholders, including developers, regulators, health workers, and patients [57, 58]. The broader regulatory landscape is evolving, with several countries implementing standards to govern AI's role in healthcare. However, many regulations remain insufficient in comprehensively addressing the complex issues AI presents. A variety of international standards currently guide the development and deployment of AI in healthcare. For instance, the European Commission's Trustworthy AI guidelines, the USA's AI Bill of Rights, and Health Canada's focus on product safety and data privacy provide frameworks to safeguard AI's use [59, 60]. The UK's Medical Device Regulations and the Data Protection Act 2018 also play pivotal roles. Despite these efforts, AI remains prone to bias, and existing frameworks fall short in addressing this bias comprehensively [20, 61, 62]. The need for stronger standards and more detailed benchmarking processes to guide clinical efficacy and cost-effectiveness is evident [63, 64].

One of the primary concerns in AI regulation is ensuring fairness, particularly for minority and underrepresented groups. This is essential for achieving inclusivity in healthcare AI. AI systems must be adapted to respect global cultural norms while actively mitigating biases [65]. For instance, AI's use in diagnosing rare diseases requires careful consideration, as it may inadvertently lead to discrimination. Strong legal protections, similar to the Genetic Information Nondiscrimination Act of 2008, are needed to safeguard against these risks [66]. Efforts to ensure inclusivity align with the UN's Sustainable Development Goals, urging healthcare providers to

prevent the exclusion of vulnerable populations, particularly women [51].

Another critical challenge lies in regional disparities in AI governance. For example, African countries face significant gaps in AI-related regulations, highlighting the urgent need for digital health strategies and clear frameworks around AI liability [45]. The Global South's underrepresentation in AI development also raises concerns about the perpetuation of global health disparities and the legacy of colonialism in healthcare access [54, 67, 68]. Such discrepancies illustrate the need for more cohesive global approaches to AI governance. Trust is another key issue in AI's integration into healthcare, particularly in sensitive areas such as end-of-life care [69]. Concerns about data privacy, patient autonomy, and consent are heightened when AI is involved in critical decision-making processes [58]. Inconsistent interpretations of data protection regulations across different jurisdictions further complicate trust-building efforts [70]. To ensure ethical AI deployment, diverse stakeholder engagement is necessary to safeguard data integrity, patient confidentiality, and fair treatment [71, 72]. Finally, addressing the inherent biases within AI systems remains a significant challenge. AI algorithms must be transparent and accountable, particularly when used in high-stakes contexts like public health and justice [55, 73]. The discrepancies between human and algorithmic decision-making highlight the importance of creating standards to ensure consistency across demographic groups. Detailed performance reports for AI models used in clinical settings are essential to maintain trust and accountability [74]. Additionally, educating healthcare professionals on how to detect and address implicit biases in AI tools can mitigate some of these risks. While AI holds the potential to enhance healthcare, ongoing dialogue among ethicists, developers, and clinicians is critical to developing effective, unbiased AI systems [75, 76].

Privacy
Ensuring privacy in AI-driven healthcare applications is a complex challenge that requires careful consideration of inclusivity, equity, and data security. Anonymizing sociodemographic and clinical data is essential for protecting individuals, particularly from minority communities, and enables researchers to monitor health disparities without compromising privacy [77]. While digital healthcare has improved data transfer efficiency, it has also introduced new challenges related to data auditing and security, especially as AI increases the risk of reidentification through both direct and indirect identifiers [55, 56, 78, 79]. AI algorithms can sometimes detect unintended patterns in data, leading to potential privacy violations. This can include inferring sensitive information like ethnicity from medical images or making incorrect diagnoses based on biased data [80]. For example, AI could potentially be used to predict sexual orientation or genetic predispositions, raising ethical concerns about discrimination. These issues highlight the need for robust privacy safeguards and ongoing exploration of ethical principles in AI healthcare applications [50]. Furthermore, AI-based mobile health applications pose risks of data loss, leakage, and manipulation, which threaten individual privacy and security [81]. Protecting patient data and ensuring ownership are vital to preventing the misuse of AI-generated diagnoses or management recommendations that could lead to stigmatization [58, 66].

Parental concerns about the privacy of their children's health data are particularly relevant in the context of AI in healthcare. Parents may worry about how their child's data is being used and whether it is shared transparently and consensually [82]. It is crucial that healthcare platforms ensure that sensitive data is handled discreetly and only shared with appropriate professionals and guardians [80, 83]. The rapid increase in data collection during the COVID-19 pandemic has further heightened concerns about the potential for future discrimination against children based on the collected data [58].

Inclusion
Inclusion involves ensuring that all individuals, regardless of their unique characteristics, are represented and able to participate fully in any setting [47]. In the context of AI, the lack of diversity in datasets leads to inaccuracies, especially for marginalized groups whose health issues are often overlooked. Therefore, creating balanced datasets and employing diverse metrics are crucial for developing accurate and equitable AI models [47]. AI systems are not inherently neutral, which means that tools should be intentionally designed to prevent bias and promote inclusivity.

Diverse perspectives are essential throughout the AI development process, from conception to evaluation. Incorporating gender, sex, and socioeconomic factors is particularly important in addressing the health and accessibility challenges faced by marginalized populations, including women and individuals with disabilities [84]. This focus on inclusivity enhances the accessibility of AI tools and ensures that they serve a wide range of users [51]. Promoting user-centered design that focuses on accessibility and usability aligns with the broader goal of democratizing AI [55]. Community engagement is crucial for building inclusive AI systems in healthcare. Actively seeking input from marginalized communities throughout the design and implementation of AI systems is essential. This ensures that these tools account for the specific needs and nuances of diverse individuals and communities [85]. For example, involving indigenous communities in the development of AI-powered

telehealth solutions can help ensure that these solutions are culturally appropriate and address the unique healthcare needs of these communities. This approach helps AI serve diverse populations more effectively. In addition to community engagement, patient-centric care is another vital aspect of inclusion. By integrating diverse data sources, such as natural language processing (NLP), AI models can capture the lived experiences and narratives of patients, improving personalized care delivery [86, 87]. Finally, creating diverse oversight committees, including experts from various fields and patient representatives, ensures balanced and informed decision-making. Such committees enhance the credibility of AI-driven healthcare research by addressing concerns around inclusivity and helping to ensure that AI systems meet the needs of all populations [88].

Equity
AI has a dual role in healthcare equity: it can either be a powerful tool for promoting fairness or a mechanism that exacerbates existing disparities [89, 90]. When designed and applied thoughtfully, AI can fine-tune resource allocation, ensuring that the needs of vulnerable populations are met. However, without intentional efforts to mitigate bias, AI risks perpetuating inequities in healthcare delivery and access [89, 91]. To ensure that AI promotes equitable outcomes, continuous fairness monitoring and inclusive data management are essential. AI models must be built on diverse, representative datasets to prevent biased outcomes that disproportionately affect marginalized groups. For instance, AI could be used to address disparities in preventive screenings by identifying communities with low access to critical services, thus helping to improve healthcare equity. Similarly, ensuring that clinical trials include diverse participant populations can enhance the fairness of AI-driven healthcare systems [92, 93]. Natural Language Processing (NLP) further contributes by integrating diverse data sources, enabling a more comprehensive understanding of patient experiences and improving patient-centered care [89, 94].

A smooth transition from recognizing AI's potential to the strategies needed for equitable outcomes brings us to the ethical challenges of AI in healthcare. Developing a clear ethical framework is vital, one that prioritizes fairness and equity in algorithmic decision-making [95–97]. A notable concern is the misapplication of algorithms that mistakenly treat race as a biological factor rather than a social construct, leading to biased clinical decisions [98, 99]. To address these issues, experts have proposed a comprehensive blueprint to advance health equity through AI. This approach combines healthcare ethics with technological responsibility, ensuring that AI adheres to the "do no harm" principle while promoting fairness as it continues to shape healthcare [96, 97].

Validation
The validation of AI systems in healthcare is essential to ensure their safety, efficacy, and reliability. Although AI research is growing, few applications have undergone the rigorous clinical validation necessary for real-world use. Without proper validation, concerns about reproducibility, generalizability, and algorithmic design persist, limiting trust in AI technologies in clinical settings [42, 56]. Many standards, particularly those involving AI-based medical devices, lack sufficient validation, underscoring the need for real-world evidence to confirm their effectiveness [59, 100]. Machine learning (ML) studies based on electronic health records often lack demographic diversity, which can compromise fairness in AI models. Including diverse training data and ensuring transparency are key to promoting fairness and accuracy. Additionally, improved reporting guidelines can enhance both representation and reproducibility in these studies [73].

Regulatory bodies worldwide have recognized the importance of empirical evidence and foundational methodologies to support the development and validation of AI models, particularly in terms of safety, efficacy, and equity [38, 101, 102]. Comprehensive clinical tests and verifications are crucial for building trust in AI, as these tests determine the precision of AI diagnostics in clinical environments and assess their societal impact [103, 104]. Validating models with diverse patient populations promotes inclusivity and empowers patients by providing clear information about treatment risks and benefits, rather than technical explanations, thus supporting informed decision-making [66, 105]. Validation must also involve analysing independent datasets and tailoring them to clinical outcomes [106]. While AI developers employ various methodologies and datasets, validation remains vital for ensuring effectiveness in different clinical settings, as success in one domain does not guarantee success in another [107, 108]. Moreover, the performance of AI models depends on data quality, variability, and design. Retrospective evaluations have their limitations, making real-time validation crucial for an accurate assessment of AI tools [109–111]. Validation is particularly challenging in resource-limited settings, where data quality and availability may be constrained. Investing in robust data infrastructure can simplify the validation process and improve AI reliability in such environments [112]. Research has shown that validated AI diagnostic tools can serve as supplementary methods to confirm doctors' recommendations, alleviate patient concerns, and identify discrepancies between AI assessments and clinical evaluations [113]. However, the use of AI without rigorous validation across diverse real-world scenarios can lead to misdiagnoses. AI models require thorough clinical validation, particularly when their diagnoses deviate from established practices [114]. Contextual bias
Marko et al. BMC Medical Informatics and Decision (2025) Page 8 of
Making 25:57 19

arises when AI models trained on specific subpopulations and socioeconomic


fail to generalize across broader groups, emphasizing
the need for validation in diverse clinical environments
[115]. In-depth investigations are necessary to
understand the full impact of AI in healthcare, particularly
in clinical set- tings [116, 117]. Furthermore,
advancements in health literacy are hindered by
measurement challenges and the lack of comprehensive
validation across racial and ethnic groups, limiting the
development of effective AI-driven solutions [118].
Researchers have proposed the creation of distinct
authoritative bodies, such as in the pharma- ceutical
domain, to rigorously oversee AI validation pro- cesses
and facilitate AI integration into healthcare [119].
Ethical considerations are critical to the validation pro-
cess, requiring an understanding of sociocultural
factors and sociotechnical systems. Ethical decision-making
dur- ing model validation must account for trade-offs,
and data scientists must possess both ethical and
technical skills to navigate these challenges [120].
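The stratified-validation point above can be made concrete: a model's pooled performance can mask failure on a subpopulation, so external validation should report metrics per group rather than a single figure. A minimal sketch in Python, with a made-up cohort that is purely illustrative and not drawn from any of the reviewed studies:

```python
# Stratified validation: compare sensitivity (true-positive rate) per
# subgroup instead of reporting only a pooled figure. Illustrative data
# only; a real audit would use held-out clinical records.

def sensitivity(records):
    """True-positive rate among records whose true label is positive."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return None
    hits = sum(1 for r in positives if r["pred"] == 1)
    return hits / len(positives)

def stratified_report(records, group_key="group"):
    """Return {group: sensitivity}, plus the pooled value under 'overall'."""
    report = {"overall": sensitivity(records)}
    for g in sorted({r[group_key] for r in records}):
        report[g] = sensitivity([r for r in records if r[group_key] == g])
    return report

# Toy cohort: the model misses most positives in group "B",
# which the pooled metric hides.
cohort = (
    [{"group": "A", "label": 1, "pred": 1}] * 45
    + [{"group": "A", "label": 1, "pred": 0}] * 5
    + [{"group": "B", "label": 1, "pred": 1}] * 3
    + [{"group": "B", "label": 1, "pred": 0}] * 7
)

report = stratified_report(cohort)
print(report)  # sensitivity for A = 0.90, for B = 0.30, pooled = 0.80
```

Here the pooled sensitivity of 0.80 looks acceptable while group B's 0.30 does not; reporting only the pooled figure is exactly the contextual bias the text describes.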

Global impact
The global impact of AI on health and social care is multifaceted, with varying outcomes depending on regional introduction and regulatory approaches [121]. Regional variations in AI adoption highlight significant differences across locations, with developed countries, particularly in North America and Western Europe, being more advanced in integrating AI into healthcare compared to developing nations [122, 123]. These disparities stem from differences in infrastructure, economic resources, and technological readiness, affecting how AI is utilized in healthcare settings. In regions with robust healthcare systems, AI applications are more readily accepted, often leading to improved health outcomes, depending on the nature of the AI-driven intervention [106]. However, geographical disparities in AI efficacy exist across health fields and regions. For example, regions with high AI adoption rates often experience enhanced diagnostic accuracy, better treatment plans, and improved patient outcomes [107]. Conversely, in areas with insufficient resources or underdeveloped healthcare infrastructures, the impact of AI is less pronounced, potentially leading to disparate health outcomes [124, 125]. The regulatory landscape for AI in healthcare also varies significantly across countries. Ethical, legal, and privacy concerns related to AI use differ depending on regional regulatory frameworks. Countries with well-established regulations are better equipped to address issues such as data protection, algorithmic bias, and patient privacy [126]. Additionally, ethical considerations regarding the global use of AI in health and social care are influenced by regional differences in cultural, linguistic, and socioeconomic diversity, which require tailored approaches to AI implementation [127]. Geographical factors play a crucial role in determining the availability and accessibility of AI-powered healthcare services in different regions. Areas with wide economic disparities face challenges in ensuring equitable access to AI technologies, potentially exacerbating existing health inequalities if these challenges are not addressed [128, 129].

Public perceptions
Recent research on public perceptions and trust in AI-driven health interventions has revealed evolving attitudes, which are crucial for assessing AI's overall impact on healthcare and social care [106, 130]. A key focus of these studies has been the growing public awareness and education surrounding AI in healthcare. As individuals gain more knowledge about AI's potential benefits and limitations, their attitudes begin to shift [107, 131]. Educational programs play a vital role in correcting misconceptions and building trust, especially among groups with varying levels of familiarity with AI technologies [132]. Beyond increasing awareness, building trust is essential for the successful integration of AI into healthcare. Trust-building efforts by healthcare institutions and AI developers are critical to securing public acceptance. Open discussions about AI's use in healthcare, particularly those that emphasize data privacy, bias reduction, and fairness, can significantly enhance public confidence in AI technologies. Incorporating diverse user feedback during the development process ensures that AI systems are reliable and reflect the values of different social groups [133, 134]. Additionally, cultural sensitivity in AI design and deployment has been shown to improve public trust. AI technologies that respect and integrate cultural norms and values are more likely to be seen as thoughtful and respectful, increasing trust across diverse populations [131, 135–138]. Ethical considerations and accountability measures also play a key role in shaping public perceptions. When people believe that AI systems adhere to ethical principles and are accountable for their decisions, their trust in the technology strengthens.
Public trust is further enhanced when AI technologies demonstrate awareness and respect for cultural differences within healthcare practices. Culturally sensitive AI applications are perceived as more considerate, which fosters trust among diverse groups [69, 139]. Bias in AI algorithms is another major factor influencing public perception. Studies show that people, particularly those from marginalized communities, are more likely to trust AI systems that actively mitigate biases. Promoting fairness and equality in AI applications has a positive impact on public trust, especially among diverse populations [140, 141].
The intersectionality of trust dynamics has emerged as a key theme in recent studies. Trust in AI-driven healthcare interventions is influenced by multiple factors, such as race, gender, socioeconomic status, and culture. Understanding these intersecting dynamics is essential for tailoring communication strategies and trust-building initiatives to specific demographic groups [142, 143]. Public attitudes towards AI reflect a mix of optimism and apprehension. On the positive side, many people appreciate AI's potential to improve health, advance scientific discovery, and enhance efficiency. However, concerns persist around the impact of AI on decision-making, privacy, and the need for regulation. Ethical issues, such as bias and discrimination, also play a significant role in shaping public perceptions of AI. Addressing these concerns is critical to responsible AI development and governance in healthcare.

Discussion

Discussion of the main results
This systematic review has illuminated the complex landscape of AI integration in healthcare, revealing a terrain marked by both transformative potential and significant challenges. While AI offers promising advancements in diagnostics, treatment, and patient care, it also raises critical concerns about bias, regulation, privacy, and inclusion, particularly for marginalized populations. To systematically analyse these findings, Table 2 provides a comprehensive framework categorizing the key parameters and considerations across eight critical domains affecting AI implementation in healthcare settings. The table reveals the interconnected nature of challenges facing AI adoption in healthcare, from bias and regulatory concerns to privacy and public perception. Each category represents a crucial aspect of healthcare AI implementation that must be carefully considered to ensure equitable and effective deployment.
Our analysis identified pervasive biases in AI models, notably related to race, gender, and socioeconomic status; similar findings have been reported in recent studies [144, 145], which corroborate our observations and highlight the urgency of addressing these biases. These biases are deeply rooted, stemming from unrepresentative datasets, algorithmic design, and societal biases embedded in the data itself. Specific instances of bias were evident in the literature, such as dermatological AI systems that may misdiagnose skin conditions in individuals with darker skin tones due to underrepresentation in training datasets. Similarly, algorithms prioritizing cost-effectiveness over individual needs could inadvertently disadvantage patients from marginalized communities who often require more complex care [20–26]. These biases can have far-reaching consequences, impacting diagnostic accuracy, treatment decisions, and resource allocation, ultimately affecting patient outcomes and exacerbating health disparities.
The current regulatory frameworks for AI in healthcare are struggling to keep pace with its rapid evolution and unique challenges [146]. Existing regulations often lack specificity and do not sufficiently account for the distinct attributes of AI, such as its capability to create synthetic imaging for medical diagnostics, augmenting traditional imaging techniques and potentially leading to earlier and more accurate diagnoses; regulations need to address the validation and safety of such AI-generated images. AI algorithms can continuously learn and refine their predictions of patient outcomes based on real-time data analysis, and this evolving nature necessitates adaptive regulatory oversight to ensure ongoing accuracy and reliability. The resulting lack of regulatory clarity hinders effective oversight and poses risks to patient safety [58–62]. A more dynamic and adaptive regulatory approach is urgently needed, one that can evolve alongside AI technology while mandating transparency, explainability, and regular audits for bias and discrimination. This approach should consider the entire lifecycle of AI in healthcare, from development and validation to deployment and ongoing monitoring, ensuring that AI technologies are used safely, ethically, and effectively for the benefit of all patients.
Privacy concerns, particularly for minority communities, emerged as a critical area of concern. Unintentional release or breaches of sensitive data, such as ethnicity or social status, can exacerbate existing disparities and fuel further bias in AI systems [55, 56, 78–80]. Robust privacy safeguards, including data minimization techniques and de-identification methods, are essential to protect patient privacy and prevent violations that disproportionately affect vulnerable populations.
The review underscored the dual role AI can play in either exacerbating or mitigating health inequities. To ensure AI serves as a tool for equity, proactive measures are necessary. These include developing and implementing bias mitigation algorithms, promoting the use of explainable AI (XAI) to foster transparency, ensuring diversity in development teams to incorporate a wider range of perspectives, and conducting community-based testing to evaluate AI systems in real-world settings and identify potential disparities.
Addressing these challenges requires a fundamental shift in how we integrate AI into healthcare systems. This necessitates international collaboration to establish global standards and practices that promote inclusivity, transparency, and fairness in AI development and deployment. Robust ethical frameworks are needed to guide responsible AI use, ensuring patient autonomy, data privacy, and equitable access to care. Continuous monitoring and evaluation mechanisms are crucial to identify and address emerging biases and ethical concerns in evolving AI systems.
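One of the proactive measures discussed in this review, bias mitigation, can be sketched with the classic reweighing pre-processing scheme of Kamiran and Calders, in which each (group, label) combination receives weight P(group) * P(label) / P(group, label), so that group membership and outcome become statistically independent in the weighted training data. The data below are invented for illustration; this is one mitigation technique among many, not the method of any reviewed study:

```python
from collections import Counter

def reweighing_weights(samples):
    """Kamiran-Calders reweighing: weight w(g, y) = P(g) * P(y) / P(g, y),
    making group g and label y independent in the weighted sample."""
    n = len(samples)
    p_group = Counter(g for g, _ in samples)
    p_label = Counter(y for _, y in samples)
    p_joint = Counter(samples)
    return {
        (g, y): (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for (g, y) in p_joint
    }

# Toy data: the "minority" group rarely carries the favourable label (1),
# so those under-represented examples receive weights above 1.
data = [("majority", 1)] * 40 + [("majority", 0)] * 10 + \
       [("minority", 1)] * 5 + [("minority", 0)] * 45

weights = reweighing_weights(data)
print(weights)  # ("minority", 1) is upweighted to 4.5
```

A downstream classifier trained with these sample weights no longer sees a spurious association between group and outcome, which is the intuition behind the bias-mitigation algorithms the text calls for.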
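The privacy safeguards named in the discussion, data minimization and de-identification, can also be illustrated with a small sketch: direct identifiers are dropped, and quasi-identifiers such as exact age are generalized into coarse bands. Field names and the banding width are hypothetical; real pipelines follow formal standards such as HIPAA Safe Harbor:

```python
# Minimal de-identification sketch: drop direct identifiers and
# generalize quasi-identifiers. Illustrative only; not a substitute
# for a formally validated de-identification procedure.

DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def age_band(age, width=10):
    """Generalize an exact age into a coarse band, e.g. 37 -> '30-39'."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

def deidentify(record):
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in out:
        out["age"] = age_band(out["age"])
    return out

patient = {"name": "Jane Doe", "phone": "555-0100",
           "age": 37, "diagnosis": "type 2 diabetes"}
print(deidentify(patient))  # {'age': '30-39', 'diagnosis': 'type 2 diabetes'}
```

Data minimization goes one step further: fields that the model does not need (here, everything in DIRECT_IDENTIFIERS) are never collected or stored in the first place.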

Table 2 Comparison based on different parameters

Category and parameter: Key findings and considerations

Bias
Race and ethnicity bias: Bias in AI-based medical imaging for light-skinned individuals.
Gender bias: Health disparities for women in ethnic minority groups.
Geographical disparities: Amplification of bias in retrospective studies.
Clinical trial bias: Minimal representation of certain populations, which raises efficacy concerns.
Socioeconomic bias: The undervaluation of healthcare costs for certain demographic groups, which affects algorithms.
Algorithmic bias in various applications: Biases in algorithms used to determine kidney function and perform facial recognition.
Federated learning as a solution: Potential accessibility issues for small institutions and corporate dominance.

Regulations and policy
International norms: Recommendations from the WHO, the USA's AI Bill of Rights, and the European Commission.
Fairness and health inequities: Need for strong regulatory standards and guidelines to address potential health inequities.
Dedication to diversity: Legislative protections for the use of AI to address rare diseases in line with the UN's Sustainable Development Goals.
Uniform legal frameworks: A lack of such frameworks, which entails compliance challenges, thus highlighting the necessity of state supervision.

Privacy
Challenges pertaining to data transfer: The simplification of data transfer through digitalization, which nevertheless introduces challenges related to security and auditing.
Ethical principles: Inadequate exploration of the influence of ethical principles on AI models.
Need for AI regulations: The necessity of AI regulation in healthcare, especially with regard to unintended causal patterns.

Inclusion
Balanced datasets: Essential for model quality and the avoidance of errors.
Community engagement: Essential for avoiding biases; inclusivity is a moral and strategic imperative.
Patient-centric AI: The need for AI to incorporate gender, sex, and socioeconomic factors comprehensively.

Equity
Dual effects of AI: The fact that AI may either promote or impede health equity.
Addressing vulnerable populations: An emphasis on the needs of vulnerable populations through equitable data management and testing methodologies.
NLP in patient-centric care: Identification of NLP as a powerful tool for patient-centric care, which can promote equity.

Validation
Challenges pertaining to clinical validation: Challenges that highlight the need for real-world evidence and comprehensive testing methodologies.
Importance of empirical evidence: A regulatory emphasis on empirical evidence to support the safety, efficacy, and equity of the use of AI in healthcare.
Nuanced model performance: The necessity of validation in diverse domains.

Global impact
Regional disparities in adoption: The fact that developed countries exhibit advanced integration, thus leading to variations in healthcare outcomes.
Variations in outcomes and efficacy: Geographical disparities, which result in varying outcomes and context-dependent effectiveness.
Ethical considerations: Essential for inclusive AI deployment.

Public perceptions
Awareness and education: The positive influence of increased awareness on perceptions, especially among individuals with diverse backgrounds.
Trust-building measures: Transparent communication and community engagement, which contribute to the establishment of trust.
Cultural sensitivity in AI design: A positive influence on public trust by respecting diverse norms and values.
Community engagement: Community engagement in decision-making processes, which establishes trust.
Ethical considerations and accountability: Public trust, which is influenced by ethical frameworks and clear accountability measures in the context of AI applications.
Addressing bias and fairness: Efforts to enhance fairness and equity, which resonate positively with diverse populations.
Intersectionality in trust dynamics: The recognition of intersectionality in trust dynamics, including the fact that trust is influenced by various factors such as race, gender, and socioeconomic status.

Beyond the need for further research, this review points to systemic issues in AI integration into healthcare. AI has often emphasized pre-existing disparities, particularly around areas like racism, sexism, and socioeconomic biases. These biases are manifestations of more far-reaching societally rooted problems that AI has unwittingly reflected and amplified. Another major challenge is that AI further expands existing disparities in access to digital healthcare, particularly for marginalized communities who may lack digital literacy or access to adequate infrastructure [16–18]. This digital divide can deepen health inequities and must be addressed through targeted investments and inclusive design.
The ethical issues identified in this review are dire and multifaceted. Algorithmic bias in healthcare is more than a technical flaw; it is an ethical failure with real consequences for health outcomes, often disproportionately impacting minorities [39]. Biased datasets and privacy concerns further compound these issues. Existing regimes governing AI in health need urgent rectification, with a necessity for more robust, enforceable global standards.

Limitations
While this review thoroughly explores the integration of AI into healthcare, several limitations must be noted regarding the interpretation of its findings; these limitations also highlight directions for future research. The search terms used, while broad, may not have captured the full spectrum of relevant literature. Focusing on descriptors like "impact" and "discrimination" might have missed studies that used alternative terminology to address similar concepts (e.g., "fairness," "equity," "justice"). Future reviews could incorporate a wider range of search terms to ensure a more comprehensive and nuanced understanding of the ethical implications of AI in healthcare. Additionally, the overlap in meaning among terms like "diverse populations" and "underrepresented groups" might have led to the inclusion of some repetitive articles, potentially skewing the analysis. Future reviews could employ more precise definitions and inclusion/exclusion criteria to mitigate this issue. This review focused on English-language publications, potentially excluding valuable research published in other languages. This language bias could limit the generalizability and comprehensiveness of the findings. Future research should strive to include non-English publications, perhaps through collaboration with international researchers or by utilizing translation services. While Google Scholar was included as a source, the extraction process was not exhaustive due to limitations in the API and the sheer volume of results. Relying on the first 420 articles from a potential pool of over 16,000 might have introduced selection bias. Future research could employ more comprehensive search strategies within Google Scholar or consider manual screening to ensure a more representative sample of relevant literature. The review reveals significant disparities in AI adoption and implementation, in which context developed countries have outpaced developing regions. This geographical imbalance limits the generalizability of the findings of this review, which may overlook the unique challenges associated with low-resource settings, particularly given the varying levels of technological infrastructure across regions. Furthermore, the methodological rigor of the included studies was inconsistent. Some studies lacked robust validation, transparent reporting, and detailed methodological descriptions, thus impacting the overall reliability and reproducibility of their findings. The predominance of cross-sectional studies, although they provided snapshots of the impact of AI in this context, fails to capture long-term outcomes and the evolving nature of AI technologies in healthcare. Although ethical considerations were addressed, a deeper exploration of the principles guiding the development and deployment of AI is needed. Issues pertaining to privacy, patient autonomy, and commercial interests require a thorough investigation that can establish robust ethical frameworks for responsible AI use. Translating research into practice remains challenging, and many studies have highlighted the difficulties associated with scaling and ensuring reproducibility in clinical settings. This limitation highlights the need for practical, adaptable AI solutions that can be seamlessly integrated into existing healthcare systems. While public perceptions were mentioned, a more nuanced analysis of the barriers to acceptance and the roles of education and trust-building in this context is warranted. Understanding diverse perspectives on AI and the factors that influence the acceptance of this technology is crucial with respect to efforts to promote public trust and engagement. Finally, the lack of transparency exhibited by some studies in terms of methodologies and potential conflicts of interest raises concerns regarding the credibility and impartiality of their findings. Clear reporting of funding sources, biases, and methodological details is essential for the establishment of trust in research on AI and its applications.

Conclusions
This review highlights a crucial reality: as we integrate AI into the intricate fabric of healthcare, we must proceed with caution, guided by ethical considerations and a steadfast commitment to patient well-being. Privacy, equity, and inclusion are not mere buzzwords; they are essential principles that must shape the development and application of AI. AI cannot function in isolation, oblivious to the diverse needs of society; it must be inclusive and representative of all, or risk exacerbating the very

healthcare disparities it aims to eliminate. We stand on Declarations


the brink of a healthcare revolution, where AI’s trans-
formative potential can only be fully realized when it is Ethics approval and consent to participate
Not applicable.
deeply rooted in ethics and human values. From safe-
guarding privacy to combating algorithmic bias, it is Consent for publication
evident that a collaborative effort is required: ethicists, Not applicable.
clinicians, policymakers, and technologists must unite to Competing interests
navigate these complex and uncharted waters. The authors declare no competing interests.
The path ahead is fraught with challenges. Scholars
must develop innovative methods that balance privacy Received: 10 May 2024 / Accepted: 20 January 2025
with fairness, while regulatory bodies worldwide must
keep pace with the rapid advancements of AI. The true
promise of AI lies in its ability to be universally acces-
References
sible, ensuring that its benefits reach everyone, regard- 1. Shi F, Wang J, Shi J, Wu Z, Wang Q, Tang Z, et al. Review of artificial
less of economic status. As we advance, we must not intel- ligence techniques in imaging data acquisition, segmentation,
shy away from the difficult questions. We need to and diagnosis for COVID-19. IEEE Rev Biomed Eng. 2021;14:4–15.
2. Surya L. How government can use AI and ML to identify spreading
engage more deeply with the ethical and legal infectious diseases. Int J Creat Res Thoughts. 2018;6:899–902.
complexities that AI introduces, ensuring that its 3. Post B, Badea C, Faisal A, Brett SJ. Breaking bad news in the era of
development and deploy- ment remain transparent and artificial intelligence and algorithmic medicine: an exploration of
disclosure and its ethical justification using the hedonic calculus. AI
accountable. The stakes are high, but the potential Ethics. 2022;1–14. https://d oi.org/10.1007/s43681-022-00230-z.1-14.
rewards a future where healthcare is equitable, 4. Crowell R. Why AI’s diversity crisis matters, and how to tackle it.
accessible, and powered by intelligent tech- nology are Nature. 2023. https://doi.org/10.1038/d41586-023-01689-4.
5. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring
extraordinary. fair- ness in machine learning to advance health equity.
Abbreviations Ann Intern Med. 2018;169:866–72.
6. Chen IY, Szolovits P, Ghassemi M. Can AI help reduce disparities
AI Artificial intelligence
in general medical and mental health care? AMA J Ethics.
ML Machine learning
2019;21:E167–79.
PRISMA Preferred Reporting Items for Systematic Reviews
7. Zhang H, Lu AX, Abdalla M, McDermott M, Ghassemi M. Hurtful
and Meta-Analyses
words: quantifying biases in clinical contextual word embeddings.
SPIDER Sample, phenomenon of interest, design, evaluation,
In: CHIL’20: proceedings of the ACM conference on health,
research type RDs Rare diseases
inference, and learning. New York, NY, USA: ACM. 2020;110– 20.
NLP Natural language processing
8. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting
racial bias in an algorithm used to manage the health of
Supplementary Information populations. Science. 2019;366:447–53.
9. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics
The online version contains supplementary material available at
guidelines. Nat Mach Intell. 2019;1:389–99.
https://doi.or g/10.1186/s12911-025-02884-1.
10. Roche C, Wall PJ, Lewis D. Ethics and diversity in artificial intelligence
policies, strategies and initiatives. AI Ethics. 2022;1–21.
Supplementary Material 1 https://doi.org/10.1007/s43681
Supplementary Material 2 -022-00218-9.1-21.
11. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD,
et al. The PRISMA 2020 statement: an updated guideline for
Acknowledgements reporting systematic reviews. Syst Rev. 2021;10:89.
Not applicable. 12. Lumivero. NVivo (Version 13, 2020 R1). 2020. www.lumivero.com
13. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the
Author contributions framework method for the analysis of qualitative data in multi-
Author A: Assumed responsibility for the conceptualization and disciplinary health research. BMC Med Res Methodol.
design of the study, literature search, as well as the initial drafting of 2013;13:117.
the manuscript. Authors B and C: Provided overarching supervision 14. Bozada T Jr., Borden J, Workman J, Del Cid M, Malinowski J,
and guidelines. They undertook the analysis of the data and executed Luechtefeld T. Sysrev: a FAIR platform for data curation and
critical revisions of the manuscript, ensuring substantial intellectual systematic evidence review. Front Artif Intell. 2021;4:685298.
content. All individuals listed as authors have made significant 15. Jackson K, Bazeley P. Qualitative data analysis with NVivo. London:
contributions to the research and manuscript preparation and have SAGE Publications Ltd.; 2019.
reviewed and approved the final version of the manuscript. 16. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias:
implica- tions for health systems. J Glob Health. 2019;9:010318.
Funding 17. Grote T, Keeling G. On algorithmic fairness in medical practice.
This research did not receive specific grants from any funding agency Camb Q Healthc Ethics. 2022;31:83–94.
in the public, commercial, or not-for-profit sectors. 18. Xu J, Xiao Y, Wang WH, Ning Y, Shenkman EA, Bian J, et al. Algorithmic
fairness in computational medicine. EBioMedicine. 2022;84:104250.
Data availability 19. Bayne J, Garry J, Albert MA. Brief review: racial and ethnic
The datasets used and analysed during the current study are available disparities in cardiovascular care with a focus on congenital heart
in the Sysrev repository. disease and precision medicine. Curr Atheroscler Rep.
https://www.sysrev.com/register/8Qk17RzkH8NgYayFnoox 2023;25:189–95.
C1yBNPdK0JZR. 20. Adleberg J, Wardeh A, Doo FX, Marinelli B, Cook TS, Mendelson DS,
et al. Predicting patient demographics from chest radiographs with
deep learning. J Am Coll Radiol. 2022;19:1151–61.
21. Kaushal A, Altman R, Langlotz C. Health care AI systems are biased.
2020. http s://fully-human.org/wp-content/uploads/2021/01/Health-Care-
Marko et al. BMC Medical Informatics and Decision (2025) Page 14 of
AI-Systems-A
Making re-Biased.pdf 25:57 19
Marko et al. BMC Medical Informatics and Decision (2025) Page 15 of
Making 25:57 19

22. Szankin M, Kwasniewska A. Can AI see bias in X-ray images? Int J and technologies. SCITEPRESS - Science and Technology
Netw Dyn Intell. 2022;1:48–64. Publications. 2021;593-8.
23. Ward A, Sarraju A, Chung S, Li J, Harrington R, Heidenreich P, et al. 44. Fong N, Langnas E, Law T, Reddy M, Lipnick M, Pirracchio R.
Machine learning and atherosclerotic cardiovascular disease risk Availability of information needed to evaluate algorithmic fairness
prediction in a multi- ethnic population. NPJ Digit Med. 2020;3:125. — a systematic review of publicly accessible critical care
24. Aggarwal N, Ahmed M, Basu S, Curtin JJ, Evans BJ, Matheny ME, et al. databases. Anaesth Crit Care Pain Med. 2023;42:101248.
Advanc- ing artificial intelligence in health settings outside the 45. Owoyemi A, Owoyemi J, Osiyemi A, Boyd A. Artificial intelligence for
hospital and clinic. NAM Perspect. 2020. health- care in Africa. Front Digit Health. 2020;2:6.
https://doi.org/10.31478/202011f.
25. Nyariro M, Emami E, Abbasgholizadeh Rahimi S. Integrating equity,
diversity, and inclusion throughout the lifecycle of artificial
intelligence in health. In: 13th augmented human international
conference. New York, NY, USA: ACM. 2022;1–4.
26. Kormilitzin A, Tomasev N, McKee KR, Joyce DW. A participatory
initiative to include LGBT + voices in AI for mental health. Nat
Med. 2023;29:10–1.
27. Dankwa-Mullan I, Scheufele EL, Matheny ME, Quintana Y,
Chapman WW, Jackson G, et al. A proposed framework on
integrating health equity and racial justice into the artificial
intelligence development lifecycle. J Health Care Poor
Underserved. 2021;32:300–17.
28. Espinoza J, Sikder AT, Dickhoner J, Lee T. Assessing health data
security risks in global health partnerships: development of a
conceptual framework. JMIR Form Res. 2021;5:e25833.
29. Geneviève LD, Martani A, Wangmo T, Elger BS. Precision public
health and structural racism in the United States: promoting health
equity in the COVID- 19 pandemic response. JMIR Public Health
Surveill. 2022;8:e33277.
30. Tsai TC, Arik S, Jacobson BH, Yoon J, Yoder N, Sava D, et al. Algorithmic
fairness in pandemic forecasting: lessons from COVID-19. NPJ Digit
Med. 2022;5:59.
31. Wylezinski LS, Harris CR, Heiser CN, Gray JD, Spurlock CF. Influence of
social determinants of health and county vaccination rates on
machine learning models to predict COVID-19 case growth in
Tennessee. BMJ Health Care Inf. 2021;28:e100439.
32. McBride B, O’Neil J, Nguyen PC, Linh DT, Trinh HT, Vu NC, et al.
Adapting and scaling a digital health intervention to improve
maternal and child health among ethnic minority women in
Vietnam amid the COVID-19 context: protocol for the dMOM
project. JMIR Res Protoc. 2023;12:e44720.
33. Litchfield I, Shukla D, Greenfield S. Impact of COVID-19 on the digital
divide: a rapid review. BMJ Open. 2021;11:e053440.
34. Liu J, Cheng L, Sarker A, Yan L, Alo RA. DeepTrack: an ML-based
approach to health disparity identification and determinant
tracking for improving
pandemic health care. In: 2021 IEEE international conference on big data
(Big Data). Orlando, FL, USA: IEEE. 2021;1692-8.
35. Fujioka JK, Budhwani S, Thomas-Jacques T, De Vera K, Challa P, Fuller
K, et al. Challenges and strategies for promoting health equity in
virtual care: proto- col for a scoping review of reviews. JMIR Res
Protoc. 2020;9:e22847.
36. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in
big data and AI for health care: a call for open science. Patterns.
2021;2:100347.
37. Leslie D, Mazumder A, Peppin A, Wolters MK, Hagerty A. Does AI
stand for augmenting inequality in the era of covid-19 healthcare? BMJ.
2021;372:n304.
38. McCradden MD, Anderson JA, Stephenson A, Drysdale E, Erdman E, Goldenberg L. A research ethics framework for the clinical translation of healthcare machine learning. Am J Bioeth. 2022;22:8–22.
39. Dixon BE, Holmes JH. Special section on inclusive digital health:
notable papers on addressing bias, equity, and literacy to strengthen
health systems. Yearb Med Inf. 2022;31:100–4.
40. Gaskins N. Interrogating algorithmic bias: from speculative fiction to liberatory design. TechTrends. 2023;67:417–25.
41. Martinez-Martin N. What are important ethical implications of using
facial recognition technology in health care? AMA J Ethics.
2019;21:E180–7.
42. Corti C, Cobanaj M, Dee EC, Criscitiello C, Tolaney SM, Celi LA, et al. Artificial intelligence in cancer research and precision medicine: applications, limitations and priorities to drive transformation in the delivery of equitable and unbiased care. Cancer Treat Rev. 2023;112:102498.
43. Topaloglu M, Morrell E, Topaloglu U. Federated learning in healthcare is the future, but the problems are contemporary. In: Proceedings of the 17th international conference on web information systems
Marko et al. BMC Medical Informatics and Decision Making (2025) 25:57 Page 16 of 19
46. Verma S, Singh G, Mate A, Verma P, Gorantla S, Madhiwalla N, et al. Deployed SAHELI: field optimization of intelligent RMAB for maternal and child care. 2023. https://research.google/pubs/pub51839/
47. Afrose S, Song W, Nemeroff CB, Lu C, Yao DD. Subpopulation-specific machine learning prognosis for underrepresented patients with double prioritized bias correction. Commun Med. 2022;2:111.
48. Zou J, Schiebinger L. Ensuring that biomedical AI benefits diverse populations. EBioMedicine. 2021;67:103358.
49. Park JI, Bozkurt S, Park JW, Lee S. Evaluation of race/ethnicity-specific
survival machine learning models for hispanic and black patients
with breast cancer. BMJ Health Care Inf. 2023;30:e100666.
50. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical
machine learning in healthcare. Annu Rev Biomed Data Sci.
2021;4:123–44.
51. Buslón N, Racionero-Plaza S, Cortés A. Sex and gender inequality
in precision medicine: socioeconomic determinants of health. In:
Cirillo D, Solarz SC, Guney E, editors. Sex and gender bias in
technology and artificial intelligence: biomedicine and healthcare
applications. London, UK: Academic Press Inc.; 2022. pp. 35–54.
52. Vokinger KN, Feuerriegel S, Kesselheim AS. Mitigating bias in machine learning for medicine. Commun Med. 2021;1:25.
53. McComb M, Ramanathan M. Generalized pharmacometric
modeling, a novel paradigm for integrating machine learning
algorithms: a case study of metabolomic biomarkers. Clin
Pharmacol Ther. 2020;107:1343–51.
54. Okolo CT. Optimizing human-centered AI for healthcare in the
Global South. Patterns. 2022;3:100421.
55. Capelli G, Verdi D, Frigerio I, Rashidian N, Ficorilli A, Grasso V, et al.
White paper: ethics and trustworthiness of artificial intelligence in
clinical surgery. Artif Intell Surg. 2023;3:111–22.
56. Coppola F, Faggioni L, Gabelloni M, De Vietro F, Mendola V, Cattabriga
A, et al. Human, all too human? An all-around appraisal of the
artificial intelligence revolution in medical imaging. Front Psychol.
2021;12:710982.
57. Takshi S. Artificial intelligence in personalized medicine. J Law
Health. 2021;34:215.
58. World Health Organization. Ethics and governance of artificial
intelligence for health. Geneva, Switzerland: World Health
Organization; 2021.
59. Curchoe CL. Unlock the algorithms: regulation of adaptive
algorithms in reproduction. Fertil Steril. 2023;120:38–43.
60. Da Silva M, Flood CM, Goldenberg A, Singh D. Regulating the safety of health-related artificial intelligence. Healthc Policy. 2022;17:63–77.
61. Shen N. AI regulation in health care: how Washington State can conquer the new territory of AI regulation. Seattle J Technol Environ Innov Law. 2024;13:Article 5.
62. Nittari G, Khuman R, Baldoni S, Pallotta G, Battineni G, Sirignano
A, et al. Telemedicine practice: review of the current ethical and
legal challenges. Telemed J E Health. 2020;26:1427–37.
63. Schwalbe N, Wahl B. Artificial intelligence and the future of global
health. Lancet. 2020;395:1579–86.
64. Seastedt KP, Schwab P, O’Brien Z, Wakida E, Herrera K, Marcelo
PGF, et al. Global healthcare fairness: we should be sharing more,
not less, data. PLOS Digit Health. 2022;1:e0000102.
65. Mirbabaie M, Hofeditz L, Frick NRJ, Stieglitz S. Artificial intelligence in hospitals: providing a status quo of ethical considerations in academia to guide future research. AI Soc. 2022;37:1361–82.
66. Hasani N, Farhadi F, Morris MA, Nikpanah M, Rhamim A, Xu Y, et al. Artificial intelligence in medical imaging and its impact on the rare disease community: threats, challenges and opportunities. PET Clin. 2022;17:13–29.
67. Kong JD, Akpudo UE, Effoduh JO, Bragazzi NL. Leveraging
responsible, explainable, and local artificial intelligence solutions for
clinical public health in the global south. Healthcare. 2023;11:457.
68. Pun FW, Ozerov IV, Zhavoronkov A. AI-powered therapeutic target discovery. Trends Pharmacol Sci. 2023.
69. Laroia G, Horne BD, Esplin S, et al. A unified health algorithm that
teaches itself to improve health outcomes for every individual: how
far into the future is it? Digit Health. 2022;8.
https://doi.org/10.1177/20552076221074126.
70. Erdmann A, Rehmann-Sutter C, Bozzaro C. Clinicians’ and researchers’ views on precision medicine in chronic inflammation: practices, benefits and challenges. J Pers Med. 2022;12:574.
71. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in
healthcare: a systematic review. Soc Sci Med. 2022;296:114782.
72. Ng MY, Kapur S, Blizinsky KD, Hernandez-Boussard T. The AI
life cycle: a holistic approach to creating ethical AI for health
decisions. Nat Med. 2022;28:2247–9.
73. Bozkurt S, Cahan EM, Seneviratne MG, Sun R, Lossio-Ventura JA, Ioannidis JPA, et al. Reporting of demographic data and representativeness in machine learning models using electronic health records. J Am Med Inf Assoc. 2020;27:1878–84.
74. Puyol-Antón E, Ruijsink B, Mariscal Harana J, Piechnik SK, Neubauer S, Petersen SE, et al. Fairness in cardiac magnetic resonance imaging: assessing sex and racial bias in deep learning-based segmentation. Front Cardiovasc Med. 2022;9:859310.
75. Tang L, Li J, Fantus S. Medical artificial intelligence ethics: a systematic review of empirical studies. Digit Health. 2023;9:20552076231186064.
76. Straw I. The automation of bias in medical artificial intelligence (AI): decoding the past to create a better future. Artif Intell Med. 2020;110:101965.
77. Bragazzi NL, Khamisy-Farah R, Converti M. Ensuring equitable, inclusive and meaningful gender identity- and sexual orientation-related data collection in the healthcare sector: insights from a critical, pragmatic systematic review of the literature. Int Rev Psychiatry. 2022;34:282–91.
78. Correa R, Shaan M, Trivedi H, Patel B, Celi LAG, Gichoya JW, et al. A systematic review of ‘fair’ AI model development for image classification and prediction. J Med Biol Eng. 2022;42:816–27.
79. Martinez-Martin N, Luo Z, Kaushal A, Adeli E, Haque A, Kelly SS, et al. Ethical issues in using ambient intelligence in health-care settings. Lancet Digit Health. 2021;3:e115–23.
80. Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng. 2023;7:719–42.
81. Ellahham S, Ellahham N, Simsekler MCE. Application of artificial intelligence in the health care safety context: opportunities and challenges. Am J Med Qual. 2019;35:341–8.
82. Sisk BA, Antes AL, Burrous S, DuBois JM. Parental attitudes toward artificial intelligence-driven precision medicine technologies in pediatric healthcare. Children. 2020;7:145.
83. Cheng VWS, Piper SE, Ottavio A, Davenport TA, Hickie IB. Recommendations for designing health information technologies for mental health drawn from self-determination theory and co-design with culturally diverse populations: template analysis. J Med Internet Res. 2021;23:e23502.
84. Bauer GR, Lizotte DJ. Artificial intelligence, intersectionality, and the future of public health. Am J Public Health. 2021;111:98–100.
85. Trewin S, Basson S, Muller M, Branham S, Treviranus J, Gruen D, et al. Considerations for AI fairness for people with disabilities. AI Matters. 2019;5:40–63.
86. Solanki P, Grundy J, Hussain W. Operationalising ethics in artificial intelligence for healthcare: a framework for AI developers. AI Ethics. 2022;3:223–40.
87. Ostherr K. Artificial intelligence and medical humanities. J Med Humanit. 2022;43:211–32.
88. Kraft SA, Cho MK, Gillespie K, Halley M, Varsava N, Ormond KE, et al. Beyond consent: building trusting relationships with diverse populations in precision medicine research. Am J Bioeth. 2018;18:3–20.
89. Istasy P, Lee WS, Iansavichene A, et al. The impact of artificial intelligence on health equity in oncology: scoping review. J Med Internet Res. 2022;24(11):e39748.
90. London AJ. Artificial intelligence in medicine: overcoming or recapitulating structural challenges to improving patient care? Cell Rep Med. 2022. https://doi.org/10.1016/j.xcrm.2022.100622.
91. Farmer N, Osei Baah FO, Williams F, et al. Use of a Community Advisory Board to build equitable algorithms for participation in clinical trials: a protocol paper for HoPeNET. BMJ Health Care Inf. 2022;29(1). https://doi.org/10.1136/bmjhci-2021-100453.
92. Song Z, Johnston RM, Ng CP. Equitable healthcare access during the pandemic: the impact of digital divide and other sociodemographic and systemic factors. ARAIC. 2021;4(1):19–33.
93. Ibeneme S, Okeibunor J, Muneene D, et al. Data revolution, health status transformation and the role of artificial intelligence for health and pandemic preparedness in the African context. BMC Proc. 2021;15(suppl 15):22.
94. Koutsouleris N, Hauser TU, Skvortsova V, De Choudhury M. From promise to practice: towards the realisation of AI-informed mental health care. Lancet Digit Health. 2022;4:e829–40.
95. Holzmeyer C. Beyond ‘AI for Social Good’ (AI4SG): social transformations—not tech-fixes—for health equity. Interdiscip Sci Rev. 2021;46(1–2):94–125.
96. Clark CR, Wilkins CH, Rodriguez JA, et al. Health care equity in the use of advanced analytics and artificial intelligence technologies in primary care. J Gen Intern Med. 2021;36(10):3188–93.
97. Noseworthy PA, Attia ZI, Brewer LC, Hayes SN, Yao X, Kapa S, et al. Assessing and mitigating bias in medical artificial intelligence: the effects of race and ethnicity on a deep learning model for ECG analysis. Circ Arrhythmia Electrophysiol. 2020;13:e007988.
98. Seyyed-Kalantari L, Zhang H, McDermott MBA, Chen IY, Ghassemi M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med. 2021;27:2176–82.
99. Singh V, Sinha S, Norris K, Nicholas SB. Racial disparities in the effect of inflammation on the prediction of albuminuria in patients with the metabolic syndrome using machine learning. J Am Soc Nephrol. 2018;29:1059.
100. Sargent SL. AI bias in healthcare: using ImpactPro as a case study for healthcare practitioners’ duties to engage in anti-bias measures. Can J Bioeth. 2021;4:112–6.
101. Quinn TP, Jacobs S, Senadeera M, Le V, Coghlan S. The three ghosts of medical AI: can the black-box present deliver? Artif Intell Med. 2022;124:102158.
102. Stai B, Heller N, McSweeney S, Rickman J, Blake P, Edgerton Z, et al. PD23-03 Public perceptions of AI in medicine. J Urol. 2020;203. https://doi.org/10.1097/ju.0000000000000873.03.
103. Kawamleh S. Against explainability requirements for ethical artificial intelligence in health care. AI Ethics. 2022;3:901–16.
104. Jackson BR, Ye Y, Crawford JM, Becich MJ, Roy S, Botkin JR, et al. The ethics of artificial intelligence in pathology and laboratory medicine: principles and practice. Acad Pathol. 2021;8:2374289521990784.
105. Kumar P, Chauhan S, Awasthi LK. Artificial intelligence in healthcare: review, ethics, trust challenges & future research directions. Eng Appl Artif Intell. 2023;120:105894.
106. Garbin C, Marques O. Assessing methods and tools to improve reporting, increase transparency, and reduce failures in machine learning applications in health care. Radiol Artif Intell. 2022;4:e210127.
107. Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res. 2022;24:14.
108. Goankar B, Cook K, Macyszyn L. Ethical issues arising due to bias in training A.I. algorithms in healthcare and data sharing as a potential solution. AI Ethics J. 2020;1. https://doi.org/10.47289/aiej20200916.
109. Price WN II. Medical AI and contextual bias. 2019. https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/hjlt33&section=6
110. Ismail A, Kumar N. AI in global health: the view from the front lines. In: Proceedings of the 2021 CHI conference on human factors in computing systems (CHI ’21). New York, NY, USA: ACM. 2021;1–21.
111. Hague DC. Benefits, pitfalls, and potential bias in health care AI. N C Med J. 2019;80:219–23.
112. Lee MK, Rich K. Who is included in human perceptions of AI? Trust and perceived fairness around healthcare AI and cultural mistrust. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 138, 1–14. https://doi.org/10.1145/3411764.3445570.
113. Henderson B, Flood C, Scassa T. Artificial intelligence in Canadian healthcare: will the law protect us from algorithmic bias resulting in discrimination? Ottawa Faculty of Law Working Paper No. 2021-24. https://ssrn.com/abstract=3951945
114. Goisauf M, Abadía MC. Ethics of AI in radiology: a review of ethical and societal implications. Front Big Data. 2022;5:850383.
115. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28:31–8.
116. Schillinger D, Balyan R, Crossley S, McNamara D, Karter A. Validity of a computational linguistics-derived automated health literacy measure across race/ethnicity: findings from the ECLIPPSE project. J Health Care Poor Underserved. 2021;32:347–65.
117. Quinn RA. Artificial intelligence and the role of ethics. Stat J IAOS. 2021;37:75–7.
118. Graves M, Ratti E. Microethics for healthcare data science: attention to capabilities in sociotechnical systems. Future Sci Ethics. 2021;6:64–73.
119. Albahri AS, Duhaim AM, Fadhel MA, Alnoor A, Baqer NS, Alzubaidi L, et al. A systematic review of trustworthy and explainable artificial intelligence in healthcare: assessment of quality, bias risk, and data fusion. Inf Fusion. 2023;96:156–91.
120. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689.
121. Aminabee S. The future of healthcare and patient-centric care: digital innovations, trends, and predictions. In: Emerging Technologies for Health Literacy and Medical Practice. IGI Global; 2024. pp. 240–

122. Alowais SA, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. 2023;23(1):689.
123. Mannuru NR, et al. Artificial intelligence in developing countries: the impact of generative artificial intelligence (AI) technologies for development. 2023;02666669231200628.
124. Uche-Anya E, Anyane-Yeboa A, Berzin TM, Ghassemi M, May FP. Artificial intelligence in gastroenterology and hepatology: how to advance clinical practice while ensuring health equity. Gut. 2022;71:1909–15.
125. Sawhney R, Malik A, Sharma S, Narayan VJDAJ. A comparative assessment of artificial intelligence models used for early prediction and evaluation of chronic kidney disease. 2023;6:100169.
126. Milam M, Koo CJCR. The current status and future of FDA-approved artificial intelligence tools. Chest Radiol United States vol. 2023;78(2):115–22.
127. Mannuru NR, Shahriar S, Teel ZA, Wang T, Lund BD, Tijani S, et al. Artificial intelligence in developing countries: the impact of generative artificial intelligence (AI) technologies for development. Inf Dev. 2023. https://doi.org/10.1177/02666669231200628.
128. Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics. 2024;25:74.
129. Sides T, Kbaier D, Farrell T, Third A. Exploring the potential of artificial intelligence in primary care: insights from stakeholders’ perspectives. 2023. https://doi.org/10.20944/preprints202311.0995.v1.
130. Robles P, Mallinson DJ. Artificial intelligence technology, public trust, and effective governance. Rev Policy Res. 2023;1–18. https://doi.org/10.1111/ropr.12555.
131. Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients. Digit Health. 2022;8:20552076221116772.
132. Dlugatch R, Georgieva A, Kerasidou A. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. BMC Med Ethics. 2023;24:42.
133. Moon MJ. Searching for inclusive artificial intelligence for social good: participatory governance and policy recommendations for making AI more inclusive and benign for society. Public Adm Rev. 2023;83:1496–505.
134. Khan A, Rao S, Parvez A. Need for cultural sensitivity in the design and development of technology to aid in dementia care: a review of literature. In: Arai K, editor. Intelligent computing. Cham: Springer Nature; 2024. pp. 625–36.
135. Kim MT, Heitkemper EM, Hébert ET, Hecht J, Crawford A, Nnaka T, et al. Redesigning culturally tailored intervention in the precision health era: self-management science context. Nurs Outlook. 2022;70:710–24.
136. Ben-Gal HC. Artificial intelligence (AI) acceptance in primary care during the coronavirus pandemic: what is the role of patients’ gender, age and health awareness? A two-phase pilot study. Front Public Health. 2023;10:931225.
137. Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the ethical enigma: artificial intelligence in healthcare. Cureus. 2023;15:e43262.
138. Bao L, Krause NM, Calice MN, Scheufele DA, Wirz CD, Brossard D, et al. Whose AI? How different publics think about AI and its social impacts. Comput Hum Behav. 2022;130:107182.
139. Celiktutan B, Cadario R, Morewedge CK. People see more of their biases in algorithms. Proc Natl Acad Sci USA. 2024;121:e2317602121.
140. Robinson SC. Trust, transparency, and openness: how inclusion of cultural values shapes nordic national public policy strategies for artificial intelligence (AI). Technol Soc. 2020;63:101421.
141. Cachat-Rosset G, Klarsfeld A. Diversity, equity, and inclusion in artificial intelligence: an evaluation of guidelines. Appl Artif Intell. 2023;37:2176618.
142. Johnson SLJ. AI, machine learning, and ethics in health care. J Leg Med. 2019;39(4):427–41.
143. Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: a scoping review. Int J Med Inf. 2022;161:104738.
144. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. 2021;22:14.
145. Huang M, Ki E-J. Examining the effect of anthropomorphic design cues on healthcare chatbots acceptance and organization-public relationships: trust in a warm human vs. a competent machine. Int J Hum Comput Interact. 2023. https://doi.org/10.1080/10447318.2023.2290378.
146. Palaniappan K, Lin EYT, Vogel S. Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. Healthc (Basel). 2024;12(5):562. https://doi.org/10.3390/healthcare12050562.

Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
