ChatGPT: Ethical Concerns and Challenges
Abstract
Introduction: The emergence of artificial intelligence (AI) has presented several opportunities to ease human work, and AI applications are available for almost every domain of life. A new technology, the Chat Generative Pre-Trained Transformer (ChatGPT), was introduced by OpenAI in November 2022 and has become a topic of discussion across the world. ChatGPT has brought many opportunities, as well as ethical and privacy concerns. ChatGPT is a large language model (LLM) trained on data available up to 2021. The use of AI and AI-assisted technologies in scientific writing is against research and publication ethics; therefore, policies and guidelines need to be developed on the use of such tools in scientific writing. The main objective of the present study was to highlight how the use of AI and AI-assisted technologies such as ChatGPT and other chatbots in scientific writing and research can result in bias, the spread of inaccurate information, and plagiarism.
Methodology: Experiments were designed to test the accuracy of ChatGPT when used in research and academic writing.
Results: The information provided by ChatGPT was inaccurate, which may have far-reaching implications in the fields of medical science and engineering. Critical thinking should be encouraged among researchers to raise awareness about the associated privacy and ethical risks.
Conclusions: Regulations for ethical and privacy concerns related to the use of ChatGPT in academics and research need to be developed.
Key words: artificial intelligence; publication ethics; privacy concerns; chatbot; ChatGPT; OpenAI.
Copyright © 2023 Guleria et al. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
Introduction
In the current era, everyone is aware of the wonders of artificial intelligence (AI). AI has impacted medical science, education, security, access control, surveillance, and many other areas. The rapid developments in AI have simplified challenging tasks in everyday life. Alan Turing, a British polymath, introduced the idea of machine intelligence in one of his monumental publications in 1950. Over the years, the technology has advanced to develop complex algorithms that work like the human brain [1]. Hence, AI is an umbrella term for this field of study. This technology trains computers to learn human skills, including knowledge acquisition, judgement, and decision-making, and employs computers to emulate intelligent human behaviour [2].
Chatbots are an emerging artificial intelligence technology. They are becoming a crucial gateway for various domains such as education, medical science and health, research, customer services, security, and business. However, there is limited information available on their impact on these domains. Chatbots entertain users and can mimic human conversation over the internet [3].

Chat Generative Pre-Trained Transformer (ChatGPT)
OpenAI, a San Francisco-based research and deployment company, launched ChatGPT on November 30, 2022 [4]. Currently, it is funded by Microsoft Corporation and others. ChatGPT had about one million users by December 4, 2022 and currently has more than 100 million users [5]. ChatGPT is an artificial-intelligence-powered conversational chatbot: a variation of GPT-3 and a 175-billion-parameter large language model (LLM) trained on 570 GB of data, including Wikipedia, books, news and journal articles, blogs, and other web sources available on the internet up to 2021 [6]. The chatbot has been trained with reinforcement learning from human feedback (RLHF) and made conversational [7]. It is a chat interface that can generate essays, poems, and song lyrics, write general and research articles, and answer questions depending on the user's demand. The LLM has also been merged with easy-to-use interfaces other than ChatGPT, such as Bing Chat and Google's Bard, to provide various opportunities. This chatbot uses an algorithm to respond to users' queries and requests, often
Guleria et al. – ChatGPT: ethical concerns and challenges J Infect Dev Ctries 2023; 17(9):1292-1299.
sourced from the internet. The responses and information obtained from ChatGPT may sound shockingly human, but it is not free from errors and limitations [8]. Moreover, the capabilities and limitations of ChatGPT are listed on its homepage [9]. Another interesting OpenAI system, DALL-E 2, is capable of generating realistic, unique images and art from text input in natural language. The newer version of DALL-E 2 generates images with four times higher resolution [10].
There is another interesting tool called Whisper, a versatile speech recognition model introduced by OpenAI that is capable of transcribing, recognizing, and translating speech in several languages. Other OpenAI models include embeddings (to convert text into numerical form), moderation (to detect whether text may be unsafe or sensitive), Codex (to translate natural language to code, and to understand and generate code), Point-E (to generate 3D point clouds from complex prompts), Jukebox (a neural network that generates music), and CLIP (connecting text and images) [11]. Furthermore, OpenAI has also launched GPT-4, a more advanced version of GPT-3 and GPT-3.5. This new version of GPT accepts image and text inputs and generates text output in natural language and code. GPT-4 was trained on Microsoft Azure AI supercomputers and surpasses ChatGPT. OpenAI also mentions the limitations of GPT-4, such as social bias and hallucinations [12]. More than 200 AI tools have been launched so far. Some of these popular AI tools are listed in Figure 1 [13].

Figure 1. Trending AI tools launched in 2023.

ChatGPT and ethical considerations in academics and research
ChatGPT is a game changer and may raise serious ethical concerns in research and academics. It has become a "cultural sensation" in a very short time [7]. This chatbot has raised ethical considerations in the research community. Noam Chomsky, a renowned US-based researcher, shared his views on ChatGPT with the media [14]. He referred to ChatGPT as "hi-tech plagiarism" and "a way to avoid learning", adding that the use of this technology by students nowadays is a sign of the "failure of the education system". ChatGPT has appeared as a game changer for researchers who struggle with how to avoid plagiarism. But researchers are not able to differentiate between original and AI-generated texts [15]. There is great potential for artificial intelligence in various domains of science and technology, including forensic science, medical science, dentistry, and engineering. However, there are also challenges and risks. There is also a need for AI to check whether a text is AI-generated or written by a human [8]. In recent years, scientific publishers, including Springer Nature, have used various tools to combat malpractices such as paper mills, falsified results, duplicate submissions, and plagiarism. But ChatGPT has enough potential to generate content in different formats and styles. Additionally, the responses from ChatGPT are generated at the time of the user's query. This makes the detection of plagiarism nearly impossible. An editorial note published in Nature Machine Intelligence discussed the detection of AI-generated text [16]. The official website of GPTZero claims it is a leading AI detector with about one million users worldwide. But it cannot completely detect AI-written text; this was confirmed in our study after uploading human-written and AI-written text to GPTZero. If such tools succeed in identifying machine-generated text, they will be considered standard alongside other plagiarism detection tools used in scientific writing, publishing, and academics [17].
ChatGPT is the most advanced outcome of AI technology to date. In spite of its ethical considerations, the technology is helpful to non-native speakers, for generating business ideas, etc. But serious ethical concerns are associated with its use in scientific writing. In addition, there are privacy concerns about the persons and authors whose information has been added to the training data. This LLM has no understanding of the real world, motivation, or moral compass. The results obtained from ChatGPT reflect biases present in the training data. Therefore, the whole world is experimenting with this tool's pros and cons. Both positive and negative
aspects of ChatGPT in scientific writing are represented in Figure 2. Recently, the International Conference on Machine Learning (ICML) outlined its policy on the use of ChatGPT in the call for papers for ICML 2023 [18]. The LLM policy for ICML 2023 prohibits text generated entirely by LLMs such as OpenAI's ChatGPT in submitted papers. The ICML further plans to update the policy for future conferences, considering the impact of AI-generated text on scientific publications [17,18]. The main objective of the present study was to test the accuracy and reliability of ChatGPT and to highlight the associated ethical considerations in academics and research.

Figure 2. Positive and negative sides of ChatGPT in scientific writing.

Methodology
We designed experiments to test the authenticity and accuracy of ChatGPT when used as a tool in scientific writing, and the ethical concerns surrounding its use.
In experiment 1, ChatGPT was prompted to write an article on "facial identification of humans wearing masks." A separate command was given to provide references for the generated article, and we critically analyzed those references.
In experiment 2, the article was tested on GPTZero, which claims to detect AI-generated text. Further, the content generated by ChatGPT was also tested for plagiarism.

Results
Experiment 1
In experiment 1, ChatGPT was asked to write an article on facial identification of humans wearing masks. The chatbot provided content in the format shown in Figure 3a. In response to our command, ChatGPT wrote an article which seems to be correct. A second command was given to provide references for the former content (Figure 3b). However, we could not access the references obtained via ChatGPT through internet sources. It appears that the references are wrongly listed in the article written by ChatGPT.

Figure 3. Response obtained from ChatGPT on the prompt given for writing a research paper.

Further, the references provided by ChatGPT for the test research paper were critically reviewed, as shown in Figure 4 and Figure 5. The references generated by ChatGPT do not seem to be correct and, on searching internet sources thoroughly, we could not find the same references in the scientific literature. This means that the references were created by the program itself, without accuracy or reliability.

Figure 4. No record of reference #1 in the journal mentioned by ChatGPT.

In the case of reference 1 (Alotaibi, S., & Mahmood, A. (2021). Face recognition during COVID-19 pandemic: A review of challenges and solutions. Journal of King Saud University-Computer and Information Sciences, 33(1), 1-7), the publication was searched directly on the Google search engine with its title, but no such publication was found. Further, the search was continued within the journal, which is actually a Scopus-indexed journal (Journal of King Saud
University-Computer and Information Sciences, CiteScore 11.9 since 2021) in which the paper was published according to the references generated by ChatGPT. Moreover, the full issue was downloaded to determine whether such an article exists. But no such record of the paper was found anywhere that we searched (Figure 4). This means that reference 1, as mentioned by ChatGPT, does not exist.
Reference 2 (Bílek, P., Špaňhel, J., & Pohanka, M. (2020). Facial recognition during the COVID-19 pandemic: The impact of face masks on security and privacy. Safety Science, 132, 104952), generated by ChatGPT, shows the article published in the journal Safety Science, a highly rated international journal published by Elsevier with an impact factor of 6.392. But again, when this article was searched, no such article was found. Further, the article was thoroughly searched on the journal homepage and in the volume in which it was supposedly published according to ChatGPT. But no such record was found, as shown in Figure 5a. This again shows that ChatGPT is creating the references on its own and the references are totally wrong. This shows that an AI program such as ChatGPT cannot replace the human brain.
Reference 3 (Jain, A. K., & Ross, A. (2020). Handbook of biometrics for forensic science. Springer) is a book reference and is completely inaccurate. The book edited by Jain and Ross is actually "Handbook of Biometrics" [19], whereas the editors of "Handbook of Biometrics for Forensic Science" [20] are Massimo Tistarelli and Christophe Champod. Hence, it can be clearly observed that the content provided by ChatGPT is inaccurate and non-existent and holds no authenticity, which is a primary concern and cannot be overlooked.
Reference 4 (Jain, A. K., Ross, A., & Nandakumar, K. (2016). Introduction to biometrics. Springer) was incomplete, and the year of publication of the textbook was wrong. The actual publication year is 2011 [21].
According to reference 5 obtained from ChatGPT, the authors are "Ko, B.C. & Choi, J. (2020)". After searching on the internet, no such record was found (Figure 5b). However, an unusual thing was observed: a publication with an almost similar title but different authors exists [22].
We were unable to access reference 6 (Yan, J., & Huang, Y. (2020). Face recognition based on thermal images in the dark. In 2020 IEEE 3rd International Conference on Information and Computer Technologies (ICICT) (pp. 65-70). IEEE). No such record was found anywhere (Figure 5c).
Similarly, no record was found for reference 7 (Zhang, K., Zhang, Z., Li, Z., & Qiao, Y. (2021). Facilitating face recognition under COVID-19: Recognizing masked faces based on depth information. IEEE Transactions on Information Forensics and Security, 16, 1102-1113) (Figure 5d).

Figure 5. Search results of references generated by ChatGPT in respective databases. a) No record of reference #2 generated by ChatGPT; b) no record of reference #5 found in the IEEE database; c) no record found for reference #6; d) no record found for reference #7.

Figure 6. a) Output given by GPTZero on AI-generated text (ChatGPT); b) GPTZero failed to identify the AI-generated references.

Experiment 2
Further, we saved the content generated by ChatGPT in the previous experiment in a Word file and uploaded it to GPTZero. The AI-generated text was highlighted by GPTZero. However, the references were not highlighted (Figure 6a and 6b), although the references were also generated by AI. Thus, GPTZero failed to recognise the references provided by ChatGPT, because the various parts of the references are mixed with one another and the references are wrong and could not be located in the literature. In other words, although the program claims to be accurate, it is not
actually 100% reliable. The research paper content provided by ChatGPT on "facial identification of humans wearing masks" was tested for plagiarism with the Urkund plagiarism detection software. The plagiarism report stated that the document is original and that "about 0% of this document consisted of text similar to text found in other sources" (Figure 7).

Figure 7. Plagiarism report of the research paper created by ChatGPT.

Discussion
The experiments were performed to test the accuracy of ChatGPT in relation to research and publication ethics. We explored the research studies on ChatGPT and obtained mixed results: some studies stated that the responses given by ChatGPT were accurate, while our experiments indicated that ChatGPT provided false and incorrect results. After surveying the literature on ChatGPT, we realized that it is only meant for getting ideas, upgrading skills, streamlining the writing process, etc. It is also well trained on how to respond to illegal, unsuitable, and inappropriate queries. From a research and publication ethics perspective, the information and content generated cannot be used in scientific publications and research papers. This is because research papers often become the foundation of certain domains, for example in medical and health sciences, where there is a matter of life and death. This is the reason research papers are always peer-reviewed by experts in the field.
In experiment 1, we observed that ChatGPT provided content in a well-structured format. However, critical analysis of all the references showed that they have no authenticity: all of them were either incorrect or incomplete. In experiment 2, we tested the content on GPTZero, and GPTZero failed to detect the references section generated by ChatGPT. Furthermore, we checked the content generated by ChatGPT for plagiarism; the report obtained from Urkund detected 0% similarity in the content.
Based on these experiments, we concluded that the use of chatbots in scientific writing should be avoided. Use of such chatbots or AI tools in scientific writing will lead to the spread of wrong information. The reasons behind such practices may include peer pressure and competition among researchers in academia for publications, struggles with the writing process, etc. Else mentioned that researchers cannot differentiate between original and AI-generated texts, and also that no plagiarism was detected in ChatGPT's output [15]. So, journals may need to take a more rigorous approach to ensure that content is accurate in domains like medicine, where fake information could endanger people's safety and health [15,23,24]. A Lancet study also discussed the same concern with using ChatGPT for clinical report writing [25]. The study pointed out that the use of AI in medical and health care can be useful, but it should be carefully regulated and monitored, because such a machine system does not have the same intelligence as the human brain. The AI approach provides patterns of words based on data it has been trained on and seen before. Academics strongly need to prioritize critical thinking for written assignments, which ChatGPT cannot do. This initiative will enable us to think more rather than relying on such tools [26].
There is a debate among publishers, editors, and researchers on whether using AI tools in scientific writing and citing them as authors in published literature is appropriate [27]. However, there is published literature in which ChatGPT has been cited as an author [4,28]. It is a matter of great concern how such technologies are disrupting the future of academia and research and retarding human thinking. In our opinion, citing chatbots in the published literature is entirely inappropriate because chatbots cannot take responsibility for the content and accuracy of the scientific literature. If, for example, a paper needs to be retracted, who will be responsible for the retraction; can ChatGPT be held responsible for this kind of act? Therefore, committees on publication ethics such as the Committee on Publication Ethics (COPE) must soon formulate strict policies [29]. This will ensure that wrong, biased, and inaccurate information does not get published through these tools, because there is no authentic source for the information provided by chatbots. A study by Stokel-Walker discussed citing ChatGPT as an author in scientific publications, and most scientists did not approve of it [30]. However, until now, at least 62 citations have already been credited to ChatGPT in the scientific literature [4]. An editorial note published in Nature Machine Intelligence [17] also discussed the same concern of
crediting large language models in scientific publications, because AI cannot take responsibility for published work; Stokel-Walker also discussed the same [30]. The journal Nature defined three principles that should be followed when publishing research: transparency, integrity, and truth from the authors.
In an online survey of 672 readers conducted by Nature, around 80% of respondents had used ChatGPT at least once, while 8% used ChatGPT on an everyday basis and 14% several times per week [31]. Moreover, 38% were familiar with other researchers who use such tools in research or teaching. AI may prove to be a helpful tool, for example for people who find writing in English difficult. But its limitations, such as stunting the learning process and critical thinking among researchers, should also be considered. Dwivedi et al. discussed in detail the multidisciplinary perspectives on opportunities and challenges of conversational artificial intelligence for research, practice, and policy [32]. The study mentioned that the challenges with ChatGPT in academics are well recognized due to the lack of ethical considerations and guidelines. Therefore, research and publishing ethics need to be revised from time to time. Usually, researchers struggle with the writing process, which leads them to opt for such tools and practices to ease their work. Elsevier has taken the initiative by issuing new publishing ethics guidelines on the use of AI and AI-assisted technologies in scientific writing. Elsevier formulated the policy after the increased use of AI in scientific writing. The policy aims to provide guidance and transparency to all readers, authors, reviewers, editors, contributors, etc. The guidelines issued by Elsevier [33] state that:
"This policy has been triggered by the rise of generative AI and AI-assisted technologies which are expected to increasingly be used by content creators. The policy aims to provide greater transparency and guidance to authors, readers, reviewers, editors and contributors. Elsevier will monitor this development and will adjust or refine this policy when appropriate. Please note the policy only refers to the writing process, and not to the use of AI tools to analyze and draw insights from data as part of the research process."
"Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve readability and language of the work. Applying the technology should be done with human oversight and control and authors should carefully review and edit the result, because AI can generate authoritative-sounding output that can be incorrect, incomplete or biased. The authors are ultimately responsible and accountable for the contents of the work."
"Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors and contributors and facilitates compliance with the terms of use of the relevant tool or technology."
"Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Each (co-) author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original, that the stated authors qualify for authorship, and the work does not infringe third party rights, and should familiarize themselves with our Ethics in Publishing policy before they submit."
AI tools also pose privacy risks, which should be taken care of while using them. This was already exposed when ChatGPT data was leaked. AI tools have access to users' data, which they can use for their own benefit. Furthermore, ChatGPT does not ask users for their consent; it simply searches everything on the web. Therefore, one should be careful while using AI and AI-associated technologies.

Conclusions
ChatGPT has emerged as a topic of debate in the research and academic domains. This technology has come with a number of opportunities as well as ethical and legal challenges, and it has had both positive and negative impacts in various domains. We expressed our views on the use of AI and chatbots like ChatGPT in scientific publications based on the results of our experiments. Our main aim was to bring attention to the growing role of artificial intelligence in research and scientific writing. As mentioned above, there are currently no standard guidelines by journals on AI-generated texts in scientific writing. Therefore, publication and research ethics committees and journals need to formulate guidelines on AI-generated text from advanced chatbot tools such as ChatGPT. Whether or not a journal permits its use should be mentioned in the journal's guidelines. In addition, some standard tools or software should be developed to detect machine-generated information so that biases can be avoided.
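One small, concrete piece of such tooling would be automated reference verification, which would have automated the manual Google and journal-homepage searches performed in experiment 1. The following Python sketch is hypothetical and was not part of this study's methodology: it queries the public Crossref metadata API to check whether a cited title actually exists; the function names and the crude title-containment match are illustrative assumptions.

```python
# Hypothetical sketch: checking whether a cited title can be found in the
# Crossref metadata index. The matching logic here is deliberately simple
# and is an assumption, not a validated detection method.
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"  # public Crossref endpoint

def crossref_query_url(title: str, rows: int = 5) -> str:
    """Build a Crossref bibliographic-search URL for a reference title."""
    params = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    return f"{CROSSREF_API}?{params}"

def reference_exists(title: str, timeout: float = 10.0) -> bool:
    """Return True if any returned record's title contains the queried title.

    Requires network access. A real verifier would also compare authors,
    year, and DOI before declaring a reference genuine or fabricated.
    """
    with urllib.request.urlopen(crossref_query_url(title), timeout=timeout) as resp:
        items = json.load(resp)["message"]["items"]
    wanted = title.strip().lower()
    return any(
        wanted in candidate.lower()
        for item in items
        for candidate in item.get("title", [])
    )

if __name__ == "__main__":
    # Title of reference 1 as generated by ChatGPT in experiment 1
    # (found in this study to be fabricated).
    suspect = ("Face recognition during COVID-19 pandemic: "
               "A review of challenges and solutions")
    print(crossref_query_url(suspect))
```

A production tool along these lines would need fuzzier title matching and fallbacks to other indexes (e.g., PubMed or Scopus) for records Crossref does not cover, since a single missed lookup should not be taken as proof of fabrication.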
References
21. Jain AK, Ross A, Nandakumar K (2011) Introduction to biometrics, 1st edition. New York: Springer US. doi: 10.1007/978-0-387-77326-1_1.
22. Azeem A, Sharif M, Raza M, Murtaza M (2014) A survey: face recognition techniques under partial occlusion. Int Arab J Inf Technol 11: 1-10.
23. Harrer S (2023) Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. eBioMedicine 90: 104512. doi: 10.1016/j.ebiom.2023.104512.
24. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A (2023) Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health 5: 105-106. doi: 10.1016/S2589-7500(23)00019-5.
25. Ali SR, Dobbs TD, Hutchings HA, Whitaker IS (2023) Using ChatGPT to write patient clinic letters. Lancet Digit Health 5: 179-181. doi: 10.1016/S2589-7500(23)00048-1.
26. Stokel-Walker C (2022) AI bot ChatGPT writes smart essays - should academics worry? Nature. Available: https://www.nature.com/articles/d41586-022-04397-7. Accessed: 24 February 2023. doi: 10.1038/d41586-022-04397-7.
27. Rahimi F, Abadi ATB (2023) ChatGPT and publication ethics. Arch Med Res 27: 272-274. doi: 10.1016/j.arcmed.2023.03.004.
28. Transformer GGP, Thunström AO, Steingrimsson S (2022) Can GPT-3 write an academic paper on itself, with minimal human input? Available: https://hal.science/hal-03701250. Accessed: 26 March 2023.
29. COPE (2023) Promoting integrity in research and its publication. Available: https://publicationethics.org/. Accessed: 26 March 2023.
30. Stokel-Walker C (2023) ChatGPT listed as author on research papers: many scientists disapprove. Nature 613: 620-621. doi: 10.1038/d41586-023-00107-z.
31. Owens B (2023) How Nature readers are using ChatGPT. Nature 615: 20. doi: 10.1038/d41586-023-00500-8.
32. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Carter L, Chowdhury S, Crick T, Cunningham SW, Davies GH, Davison RM, Dé R, Dennehy D, Duan Y, Dubey R, Dwivedi R, Edwards JS, Flavián C, Gauld R, Grover V, Hu MC, Janssen M, Jones P, Junglas I, Khorana S, Kraus S, Larsen KR, Latreille P, Laumer S, Malik FT, Mardani A, Mariani M, Mithas S, Mogaji E, Nord JH, O'Connor S, Okumus F, Pagani M, Pandey N, Papagiannidis S, Pappas IO, Pathak N, Pries-Heje J, Raman R, Rana NP, Rehm SV, Ribeiro-Navarrete S, Richter A, Rowe F, Sarker S, Stahl BC, Tiwari MK, van der Aalst W, Venkatesh V, Viglia G, Wade M, Walton P, Wirtz J, Wright R (2023) "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manage 71: 102642. doi: 10.1016/j.ijinfomgt.2023.102642.
33. Elsevier (2023) Publishing Ethics. Available: https://www.elsevier.com/about/policies/publishing-ethics#Authors. Accessed: 19 March 2023.

Corresponding author
Dr. Kewal Krishan, PhD, FRAI
Professor and former Chair, Department of Anthropology (UGC Centre of Advanced Study), Panjab University, Sector-14, Chandigarh, India.
Tel: +919876048205
E-mail: gargkk@yahoo.com; kewalkrishan@pu.ac.in

Conflict of interests: No conflict of interests is declared.