
AI and Ethics

AI in Academia: Striking the Ethical Balance and Collaborative Potential

--Manuscript Draft--

Manuscript Number:
Full Title: AI in Academia: Striking the Ethical Balance and Collaborative Potential
Article Type: Opinion Paper (Invited)
Funding Information:
Corresponding Author: Deepa Shukla, Ph.D.
    IIT Jodhpur: Indian Institute of Technology Jodhpur
    Jodhpur, Rajasthan, INDIA
Corresponding Author Secondary Information:
Corresponding Author's Institution: IIT Jodhpur: Indian Institute of Technology Jodhpur
Corresponding Author's Secondary Institution:
First Author: Deepa Shukla, Ph.D.
First Author Secondary Information:
Order of Authors: Deepa Shukla, Ph.D.
Order of Authors Secondary Information:
Author Comments:

AI in Academia: Striking the Ethical Balance and Collaborative Potential

Deepa Shukla
Indian Institute of Technology, Jodhpur
India
p22hs201@iitj.ac.in
Abstract:

AI tools' rapid development and deployment affect our social institutions and daily lives. In this paper, I aim to evaluate the use of Generative Artificial Intelligence (GenAI, hereafter) tools in academia. Owing to their easy access and convenience, these tools have gained rapid popularity among students, as they can generate context-specific content from given prompts. Students' reliance on these tools is growing because the models are very good at assisting with academic tasks. Our questions of concern are: 'Who is wrong, the students who use these generative tools or those who evaluate them?' 'Why is using such tools a matter of ethics?' 'Why is using GenAI tools in research projects considered ethically wrong?' 'Why do we have tools like AI plagiarism checkers, Turnitin, etc.?' 'Are the users guilty?' 'Are they hurting anybody?' 'What can we do about it?' 'Do we have an alternative?' 'Can we use these GenAI tools without feeling guilty?' 'Can we bring about a productive collaboration between academia and the AI industry?'

Keywords: GenAI, ChatGPT, Ethics, Responsibility
Introduction:

The deployment of AI has become genuinely important: its potential to improve efficiency and accuracy and to reduce costs has transformed almost every field of human society, including finance, healthcare, education, gaming, and more. It is almost everywhere, and ignoring its spread is futile. It has changed how we interact with our society. AI-powered technologies like computer vision, image and audio recognition, and natural language processing have entirely transformed how we engage with and consume media.

In this paper, I will specifically explore the impacts of Generative Artificial Intelligence in academics and its deployment in our society. When we talk about Generative AI, one of the significant contributions was made by OpenAI with GPT-3.5 (the model then underlying ChatGPT). There are other remarkable inventions: Perplexity AI (a language-model-based answer engine); for image generation, DALL-E and Midjourney; for sound, AIVA and Murf.ai; for coding, CodeT5, Tabnine, etc. Since we are concerned with academic writing, I will explore the impacts of language models. Consider, for instance, GPT-3 and ChatGPT: both use natural language processing to generate substantial content from users' requests and prompts, and they have garnered significant attention by transforming a wide variety of language-related tasks, be it poems, summaries, open-ended prompts, or citation creation. Both were developed by OpenAI, a laboratory founded in 2015 with the aim of the common good.

For the same reason, the laboratory has received significant support from Microsoft Corporation and Elon Musk. The laboratory made another vital contribution with DALL-E, a machine-learning application that creates pictures from inputs received from users. It understands and generates novel images using artificial neural networks with multimodal neurons. ChatGPT and DALL-E received tremendous public popularity within seven days of their release. But, given the consequences, the question is: was it really for the common good?
GenAI: Costs and Benefits

Generative technology presents opportunities as well as ethical and legal challenges; it has the potential for both positive and negative impacts on individuals, societies, and organizations. On the one hand, language models offer real benefits. They can save researchers time and expense by composing descriptions of their discoveries and formatting a paper to a user's requirements or a journal's guidelines; they can revise and clarify an author's content; they can serve as a conversational resource, an idea generator, or a brainstorming system, where one does not rely on them for content but treats them as an expert with whom to discuss an idea and make it clearer; and they can act as a recommender system, identifying relevant research studies. On the other hand, there are major costs to using these generative tools in research. Various concerns have been raised regarding academic integrity, plagiarism, ethics, and the limitations of GenAI. AI-generated answers to academic writing assignments reveal that while the content was largely unique and pertinent to the subjects, it lacked human viewpoints and excessively cited inappropriate sources. It is also difficult for second-language learners to construct appropriate prompts, since doing so demands a certain level of language proficiency. Additionally, relying too much on GenAI tools can undermine students' sincere attempts to become proficient writers. Furthermore, if the dataset used to train a model contains biased, erroneous, or dangerous material, the generated output will certainly be affected: the output reflects the inputs (i.e., the fed data). Thus, the widespread application of GenAI may endanger academic integrity.

Costs | Benefits
Might consume personal or private information (text or images) | Brainstorming/exploring the instructor's prompt
Citations and quotes may be invented | Generating outlines
Responses may contain biases | Models of genres or types of writing
Might generate plagiarized content | Summarizing longer texts
Information may be false | Editing/refining
While paraphrasing, ideas may be changed unacceptably | Translation

Table 1: Costs and benefits of GenAI
However, these costs and benefits cannot hide the ongoing adoption of language models and other AI tools in society. People are using these models for almost everything: to generate summaries, to write online blogs and academic write-ups, regardless of internal biases, inaccurate information, and so on. My agenda here is to point out that the growing use of these technologies will continue, and the problems we are currently concerned with will diminish as datasets improve. Therefore, beyond the fundamental technical issues, we need to bring ethics and morals into our use of these technologies, because blind and excessive use can lead to other problems, such as physical harm to a larger community, the development of bad intentions, and GenAI-polarized content. We therefore need principles that can help us form a collaborative approach, one that allows students to use GenAI in their academic projects, essays, research, etc., without any sense of guilt.
But before we discuss the parameters and principles of the ethical use of these GenAI tools, we need to answer the "why" question. Why do we need ethics in the first place? Of what use is this responsibility? Why do we need boundaries in our use of GenAI? Can't we be limitless? Are ethics and responsibility essential? If yes, why? To answer this, we must answer the primary questions: "Why are morality and ethics necessary?" and "Why should one be moral, ethical, and responsible?" because, I assume, the answer to those questions will automatically answer "Why do we need to use GenAI ethically and responsibly?"
Significance of Ethics and Responsibility

The primary reason for boundaries, that is, for the ethical and moral use of GenAI tools in academia, is to escape feelings of guilt, remorse, and injustice. Now, one might think it clear that using GenAI tools in academia causes no major physical or psychological harm to others, but is that really the case?

There is no direct harm, but indirectly, mindless use of GenAI tools in academia can cause significant damage. The misinformation compiled by these GenAI tools is particularly harmful to academic publication since, as we all know, advancement depends on the dissemination of accurate knowledge. There is a significant danger of injury when inaccurate facts are presented in a scientific environment. For example, suppose a research project aims to elaborate on how individuals' and communities' health problems are managed and cured, and the researcher mindlessly uses language models and statistical data generated by GenAI tools. Since this research is later applied to larger communities, it can affect their health and lives. Consequently, if we aim to avoid such consequences, we have to mindfully accept responsibility not only in our own use but towards the community as well. Similarly, any bias in GenAI-generated content, whether data-based or tool-based, can affect communities and even individuals, potentially harming large populations through certain assumptions or prejudices. Even a minor glitch or moment of ignorance can lead to huge problems.
Utilitarianism vs. Deontological Ethics: A Perspective

Given the negative impact, consequentialists will never trade communities' welfare for individual benefits. On the other hand, GenAI carries great benefits and the potential to improve students' creative and innovative skills. Utilitarianism, as a consequentialist theory, cares only about the outcome of an action, regardless of the procedure. While it is nice to have a fair procedure, improvements in the pertinent outcomes are what ultimately count as ethically good. Good measures are those (like using AI) that enhance the related products (like art aesthetics).
Recently, Arvin V. Duhaylungsod and Jason V. Chavez conducted a survey to evaluate the utilitarian benefit of employing ChatGPT, a generative tool, in research projects. According to the participants' comments, there are several advantages to using ChatGPT and other AIs in technology-based learning for creative and imaginative tasks. One participant mentioned that ChatGPT saves time by allowing him to produce ideas using reliable sources instead of reading articles on Google Scholar. According to the data acquired, students have an edge when using ChatGPT and AIs to generate content for class projects and to turn in unique and original work. To illustrate the significance of GenAI in academia, I have quoted students' responses below:

- Student 1: "For inventiveness in terms of essentially applying, in case you're looking for prompt answers to your inquiries. You have to read every piece or the entire article to find what you're looking for in some reliable sources, like Google Scholar, so you can't just find the answer for specific literature."

- Student 4: "I believe that being creative can aid in developing your ideas. I am, therefore, working on my thesis. Because using GenAI can help you develop and extract additional ideas from what you want to perform in your research, with that I am beginning to construct my thesis for my fourth year of study."

- Student 5: "In my opinion, applying AI models to research or thesis projects could foster innovative thinking. In my opinion, it can bring your thoughts to their most excellent forms."
Twenty respondents concurred that using GenAI to complete tasks improves their capacity for creativity and innovation. Five participants reported that ChatGPT improved their ability to generate ideas, construct words, and broaden their knowledge of a specific subject. No doubt, it has the potential to generate ideas and develop things that are otherwise not possible; it is like having a conversation with an expert. Consequently, one can divide the utility of using GenAI into two parts: first, benefits, defined as the functional advantage or usefulness derived from using AI chatbots to serve individual purposes; and second, individual impact, such as improved task performance or productivity enhancement, which pertains to task-oriented objectives.

It is impossible to overlook, nevertheless, the fact that GenAI can also restrict one's capacity for innovation and creativity. One explanation is that users become so dependent on these tools that they lose track of their ability to come up with original ideas. Furthermore, ten respondents emphasized that it is not a good idea to rely too much on AIs, because they cannot supply precise information for references in academic texts.
However, if we evaluate this problem from the perspective of Kantian ethics, it seems to reject such use. Deontological ethics is based on a core philosophical principle that prioritizes an action's process over its outcome. Kant prefers means over ends: Kantian ethics is not concerned with the consequences; instead, it emphasizes the means used to achieve the goals. For Kant, an individual's moral actions must have their basis in a priori principles, the maxims of action. In principle, Kant was more concerned with the quality of the intention behind an individual's actions, with an action's intrinsic moral worth regardless of its consequences. This doctrine emphasizes individual duties, rules, and the inherent moral nature of actions. The obligatoriness of an action depends on these principles, and an action, regardless of its consequences, will never be permissible if it goes against them. For instance, breaking promises is wrong irrespective of any costs or benefits that result.
From this perspective, core deontological ethics will never allow the use of GenAI tools in academic research, even if they generate better results and help students produce better research outcomes. The process is not authentic; the content is copied. It is as if one is stealing not only content but sometimes ideas, too. Such a procedure also seems to violate the formulation of Kant's Categorical Imperative that commands, "Do not use others merely as means." Here, however, we are not using any human being as such; it is just an AI tool, primarily an instrument, invented to be used as an instrument only. Therefore, I believe this particular argument does not have much force. But then why doesn't it feel right? Why does it look bad and ethically wrong to rely on such tools? For Kantian ethics, it is the process that matters and not the outcome. Can we really award a prize to someone who skipped the essential process behind such outcomes? The ethical integrity of art connects to the artist and the process. Just think of two artists who participated in a painting competition: one used his imagination, skills, and experience to produce a certain piece, whereas the other just typed some prompts into DALL-E and had it generated. Who is more eligible for the prize?
Should we really judge the use of AI on deontological or teleological grounds?

The objection proposed by some researchers is built on the principle of "non-auditability." Operating on a neural network, OpenAI's ChatGPT presents an advanced capacity but also a hurdle known as the "black box" problem: while we may comprehend the broad principles of the model, the rationale behind individual decisions remains perplexing. The human brain served as the model for the neural network, which offers a dynamic, adaptable framework that records patterns but lacks an auditable, comprehensible formula.
We assume, for the purposes of our argument, that using AI has some quantifiable advantages. For instance, speeding up writing can increase researcher productivity in science and research. However, there are areas of society where decisions are made on deontological principles, and others where they are made, clearly and reasonably, on teleological grounds. Consequently, to make decisions about AI, we should follow domain-specific rather than general guidelines. For instance, it does not seem reasonable to reject the employment of AI outright in every situation based on the aforementioned "black box" (non-auditability) issue alone. Instead, the inquiry is: is auditability a requirement inherent in the relevant domain? Let us demonstrate this with two thought experiments.
Sports is one sector of society where the majority of people concur that human performance is what matters most. The main focus of a football game is not scoring as many goals as possible by any means; instead, it is how well teams and individual players perform in a "fair" match.

Thought Experiment 1: Your rival at the Olympics shows up for the 100-meter sprint finals wearing a prototype AI-controlled robotic leg that surrounds his real legs. Your opponent claims that since this gives him the ability to sprint the 100 meters twice as quickly as you do, finishing in 5 seconds as opposed to 10, it is an ethically permissible upgrade. Do you think your opponent's employment of this AI-robotic improvement during the Olympics is morally righteous and equitable?

Now, if your belief aligns with the generally accepted principle that a sports game is about playing fairly and better than your opponent, then the above scenario indeed violates the fair-play criteria. You will probably conclude that using AI in this manner is unethical, since it is unjust and "against the Olympic spirit."
On the other hand, most people agree that in the medical field, results come first. The medical-ethical principle of "beneficence" holds that physicians have a moral obligation to advocate for the course of treatment that best serves the patient's interests in terms of health, survival rates, and so on.

Thought Experiment 2: Your child has been diagnosed with a cardiac issue that requires immediate, high-risk surgery. Two alternatives are available at the university hospital: having your baby operated on by the most skilled human surgeons available, with a 40% success rate, or using an AI surgeon with an 80% survival rate. Do you think it is morally right for the medical center to provide the AI alternative, and for you to decide which option is better for your baby's chances of survival?

Again, if your sentiments align with general medical principles, you will choose the course of action that most increases the likelihood of your baby surviving successful heart surgery. This is true even if you, as the parent, are aware of the black box problem and other arguably "unethical" aspects, such as the loss of human expertise, job losses among medical professionals, or the ethics involved in developing such AI surgeons. It is not immoral of you to put your child's survival above these worries. Similarly, you would undoubtedly think it immoral for medical professionals to deny the AI option for your infant just because they were afraid of losing their jobs.
The experiments above push us toward two opposing conclusions: in some scenarios (previously handled by humans) it is ethical to use AI tools, but not in others. This is not contradictory; rather, it is because we judge various social domains using different criteria, depending on what is considered suitable in each. Consequently, we should evaluate the deployment of AI tools using different standards. On the one hand, human accomplishment in athletics is recognized for the commitment, resiliency, and talent it represents; artificial intelligence would undermine these principles by prioritizing technological gain over human ability. On the other hand, in medical care, patient outcomes are crucial: if AI provides real advantages in drug research, diagnosis, or therapy, then it should be applied.
Now the question is: how should we deal with our problem? How should we evaluate the use of GenAI tools in academics? Is it more akin to the sports scenario or the medical scenario?

It may seem apparent to those outside academia that science and research, endeavors aimed at creating new knowledge, should be outcome-driven. Our goal is for science, frequently supported by government funds, to create knowledge. As a result, it is better, as in the medical scenario, to prioritize the "best outcomes" over the "fair process." In other words, science's contribution to society is the effective spread of knowledge. Even with the metrics and rankings presently employed, science is not about academics, institutions, or publications competing in a manner akin to sports. Therefore, it is not "unfair" if author A uses generative AI to write 20% faster than author B. Rather, it appears that by not utilizing the best available tools and technology, B is losing 20% of their potential productivity, time that could be better spent.
Given its social function, we believe that in science and our field of study a teleological evaluation of ethical usage should take precedence, and that artificial intelligence ought to be applied wherever it can be a valuable instrument for knowledge advancement. However, one should not use these tools mindlessly. Therefore, instead of preferring one theory over the other, I suggest a collaborative balance between the two: a necessary combination of consequentialist and deontological ethical frameworks. Deontological ethics, based on principles and obligations, emphasizes the morality of deeds regardless of their consequences. In the framework of GenAI, this means that during the study process one must adhere to established standards and ethical criteria with steadfast devotion: maintaining impartial representation, protecting privacy, and openly documenting the methods. A consequentialist viewpoint, on the other hand, demands that the possible effects of using GenAI in research be carefully considered. Researchers have to balance the advantages, such as enhanced productivity and novel perspectives, against the disadvantages, including biases and unforeseen repercussions. When using GenAI, it is important to strike a harmonious equilibrium between deontological doctrine and consequentialist thinking, considering both the wider social ramifications and ethical norms. This strategy ensures that the creation and application of GenAI in research publications advances knowledge while respecting the core values of moral behavior and accountability. Therefore, we need collaboration: some mutually agreed principles that promote an ethical, responsible, and sensible approach to GenAI use in academia and research papers.
The Principles: An Ethical and Responsible Approach

In recognition of this, I propose the following principles for the collaborative engagement of humans and AI in academic research writing, because such engagement is not inherently unethical.

1. Sense of Responsibility: Sometimes the datasets used to train these generative models are not anonymized and may therefore include sensitive data that should not be used in research. Consequently, one should respect privacy and make sure not to use AI-generated data directly. Additionally, owing to the particular datasets involved, these GenAI tools have the potential to generate wrong and biased content. Cross-referencing the generated information is therefore another responsibility an individual must fulfill when using these tools: compare the generated data with multiple reputable sources to ensure reliability and accuracy. In short, one has a responsibility to ensure that the content used in research, whether self-generated or generated by AI tools, is not wrong, sensitive, or biased.
2. Ethical Use: Under ethical use, one should be mindful of the societal implications of particular research and make sure that the research does not have long-term negative impacts on society. Secondly, one should also be cautious about assigning co-authorship to these tools. This cautious approach is recommended to escape feelings of guilt, perceptions of cheating, and concerns about fairness and justice. In academic research, co-authorship is a significant acknowledgment and recognition of an individual's or a tool's contribution to the work. When researchers directly use GenAI-generated content, they need to consider the level of contribution the AI tool made to the research. If the tool played a substantial role, then giving co-authorship is crucial, as it will help them avoid perceptions of cheating. It is important to maintain the integrity of the research process and ensure that authorship reflects genuine intellectual contributions (ChatGPT, 2023). {AI-generated paragraph: I have copy-pasted the paragraph generated by ChatGPT based on a half-baked prompt I pasted in the chat box.}
3. Universalization of the Use of Generative Tools in Academia: AI should be employed, not disregarded entirely, wherever it is helpful for legitimate and acceptable research, for which the authors, not the AI, must bear full responsibility. It is inappropriate to police writers or dictate how they approach their craft. However, the precise use of generative AI must be disclosed in the same manner as any other tool or technique, in accordance with the scientific ideal of transparency.
Conclusion:

The influence of generative tools on higher education, academic research, and publication is enormous, but it is crucial for researchers and academics to weigh carefully the ethical ramifications of using these new technologies. Although these tools represent significant advances in natural language processing and artificial intelligence, care must be taken to utilize them ethically and responsibly. Development will happen, AI will improve, and people will use it. Restrictions alone cannot stop us; what we require is ethics and responsibility in our use.

Policing and punishment can only deter criminals; they will never stop crimes. A guilty person will always find better ways to steal; the better way is to change their mentality by providing a valid rationale for why stealing is wrong or, in general terms, why they should follow the path of morality. Similarly, inventing plagiarism-checker tools and other technologies will never cure the problem. It is better, therefore, to adopt GenAI and provide a rationale for why we should use these tools ethically and responsibly. Once we successfully manage these issues, GenAI can play a significant role in the upcoming research era.
Funding Statement: This article did not receive any funding from external resources.

Conflict of Interest: The author has no conflict of interest related to this article.
References:

1. Alser, M., & Waisberg, E. (2023). Concerns with the usage of ChatGPT in academia and medicine: A viewpoint. American Journal of Medicine Open, 9, 100036. https://doi.org/10.1016/j.ajmo.2023.100036
2. Bang, J., & Park, G. (2023). Uprising of ChatGPT and ethical problems. Robotics & AI Ethics, 8, 1–11. https://doi.org/10.22471/ai.2023.8.01
3. Chan, C. K. Y., & Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00411-8
4. Duhaylungsod, A. V., & Chavez, J. V. (2023). ChatGPT and other AI users: Innovative and creative utilitarian value and mindset shift. Journal of Namibian Studies: History, Politics, Culture, 33. https://doi.org/10.59670/jns.v33i.2791
5. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., . . . Wright, R. (2023). Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
6. Jo, H. (2023). Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telematics and Informatics, 85, 102067. https://doi.org/10.1016/j.tele.2023.102067
7. Liu, Z., Roberts, R., Lal-Nag, M., Chen, X., Huang, R., & Tong, W. (2021). AI-based language models powering drug discovery and development. Drug Discovery Today, 26(11), 2593–2607. https://doi.org/10.1016/j.drudis.2021.06.009
8. Lund, B., Wang, T., Mannuru, N. R., Nie, B., Shimray, S. R., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750
9. Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. Social Science Research Network. https://doi.org/10.2139/ssrn.4354422
10. Schlagwein, D., & Willcocks, L. P. (2023). 'ChatGPT et al.': The ethics of using (generative) artificial intelligence in research and science. Journal of Information Technology, 38(3), 232–238. https://doi.org/10.1177/02683962231200411
11. Wen, J., & Wang, W. (2023). The future of ChatGPT in academic research and publishing: A commentary for Clinical and Translational Medicine. Clinical and Translational Medicine, 13(3). https://doi.org/10.1002/ctm2.1207
Bibliography:

1. Trusty, J. (2023, April 24). The dark side of ChatGPT has real world consequences. Medium. https://poolmarketing.medium.com/the-dark-side-of-chatgpt-has-real-world-consequences-90bff03a00bf
2. Stanford University. (2023, February 14). How will ChatGPT change the way we think and work? Stanford News. https://news.stanford.edu/2023/02/13/will-chatgpt-change-way-think-work/
3. Bohannon, M. (2023, June 8). Lawyer used ChatGPT in court—and cited fake cases. A judge is considering sanctions. Forbes. https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=6fe999bc7c7f
4. The impact of AI: How artificial intelligence is transforming society. (n.d.). https://www.3dbear.io/blog/the-impact-of-ai-how-artificial-intelligence-is-transforming-society
5. UNC-Chapel Hill Writing Center. (2023, September 8). Generative AI in academic writing. The Writing Center, University of North Carolina at Chapel Hill. https://writingcenter.unc.edu/tips-and-tools/generative-ai-in-academic-writing/
6. Palomo, A. J. (2023, October 4). Are there benefits to using generative AI in the classroom? Medium. https://medium.com/@adrianapalomo732/are-there-benefits-to-using-generative-ai-in-the-classroom-ede5ece5ae40
7. Dr.Q writes AI-infused insights. (2023, May 30). Ethical and responsible use of generative AI in scholarly research. Medium. https://drqwrites.medium.com/ethical-and-responsible-use-of-generative-ai-in-scholarly-research-a96b7e3cf4f
8. Bilal, M. (2023, August 6). Ethics and artificial intelligence: A utilitarian perspective. Medium. https://medium.com/@bilal_81623/ethics-and-artificial-intelligence-a-utilitarian-perspective-18d12409932e
End….
Blinded Manuscript

AI in Academia: Striking the Ethical Balance and Collaborative Potential
Abstract:

AI tools' rapid development and deployment affect our social institutions and daily lives. In my project, I aim to evaluate the use of Generative Artificial Intelligence (GenAI, hereafter) tools in academia. Owing to their easy access and convenience, these tools have gained rapid popularity among students, as they can generate context-specific content based on given prompts. Students' reliance on these tools is burgeoning because the models are very good at assisting students with academic tasks. The questions of concern are: Who is wrong, the students using these generative tools or the ones evaluating them? Why is using such tools a matter of ethics? Why is using GenAI tools in research projects considered ethically wrong? Why do we have tools like AI plagiarism checkers and Turnitin? Are the users guilty? Are they hurting anybody? What can we do about it? Do we have an alternate way? Can we use these GenAI tools without feeling guilty? Can we bring about a productive collaboration between academia and the AI industry?

Keywords: GenAI, ChatGPT, Ethics, Responsibility
Introduction:

The deployment of AI has become genuinely important: its potential to improve efficiency and accuracy and to reduce costs has revolutionized almost every field of human society, including finance, healthcare, education, gaming, and more. It is almost everywhere, and ignoring its spread is futile. It has changed how we interact with our society. AI-powered technologies like computer vision, image and audio recognition, and natural language processing have entirely transformed how we engage with and consume media.

In this paper, I will specifically explore the impacts of Generative Artificial Intelligence in academics and its deployment in our society. When we talk about generative AI, one of the significant contributions was made by OpenAI with GPT-3.5 (the model behind ChatGPT). There are other remarkable inventions: Perplexity AI (a language model); for image generation, DALL-E and Midjourney; for sound, AIVA and Murf.ai; for coding, CodeT5, Tabnine, and others. Since we are concerned with academic writing, I will explore the impacts of language models. Consider, for instance, GPT-3 and ChatGPT: both models use NLP and generate a significant amount of text based on users' requests and prompts, and they have garnered significant attention and revolutionized a wide variety of language-related tasks, be it poems, summaries, open-ended prompts, or citation creation. Both were developed by OpenAI, a laboratory founded in 2015 with the aim of the common good.

For the same reason, the laboratory has received significant support from Microsoft Corporation and Elon Musk. The laboratory made another vital contribution when it introduced DALL-E, an application of machine learning that creates pictures from inputs received from users. It understands and generates novel images using artificial neural networks with multimodal neurons. ChatGPT and DALL-E gained tremendous public popularity within seven days of their release. But, given the consequences, the question is: was it really for the common good?
GenAI: Costs and Benefits

Generative technology presents opportunities as well as ethical and legal challenges, and it has the potential for both positive and negative impacts on individuals, societies, and organizations. On the one hand, language models offer decent benefits: they can save researchers time and expense by composing descriptions of their discoveries and by formatting a paper to the user's requirements or a journal's guidelines; they can revise and clarify an author's content; they can serve as a conversational resource, an idea generator, or a brainstorming system, where one does not rely on them for content but uses them as an expert with whom to discuss and sharpen an idea; and they can act as a recommender system, identifying relevant research studies. On the other hand, there are major costs to using these generative tools in research. Various concerns have been raised regarding academic integrity, plagiarism, ethics, and the limitations of GenAI. AI-generated answers to academic writing assignments reveal that while the content is largely unique and pertinent to the subject, it lacks human viewpoints and excessively cites inappropriate sources. It is also difficult for second-language learners to construct appropriate prompts, since doing so demands a certain level of language proficiency. Additionally, relying too much on GenAI tools can undermine students' sincere attempts to become proficient writers. Furthermore, if the dataset used to train a model contains aspects that are biased, erroneous, or dangerous, the generated material will be affected: the output carries some effect of the inputs (i.e., the fed data). Thus, the widespread application of GenAI may endanger academic integrity.

Costs | Benefits
Might consume personal or private information (text or images) | Brainstorming/exploring the instructor's prompt
Citations and quotes may be invented | Generating outlines
Responses may contain biases | Models of genres or types of writing
Might generate plagiarized content | Summarizing longer texts
Information may be false | Editing/refining
While paraphrasing, ideas may be changed unacceptably | Translation

Table 1: Costs and benefits of GenAI

However, these costs and benefits cannot hide the ongoing adoption of language models and other AI tools in society. People are using these models for almost all of their concerns: to generate summaries, to write online blogs, and academic write-ups regardless of inner biases,
inaccurate information, and so on. Now, my agenda here is to point out that the growing use of these technologies will continue, and the problems we are concerned with will diminish as datasets improve in the future. Therefore, apart from the fundamental issues, we need to bring ethics and morals into our use of these technologies, because blind and excessive use can lead to other problems, such as physical harm to a larger community, the development of bad intentions, GenAI-polarized content, and more. We therefore need some principles that can help us form a collaborative approach, one that allows students to use GenAI in their academic projects, essays, research, and so on without any sense of guilt.

But before we discuss the parameters and principles of the ethical use of these GenAI tools, we need to answer the why question. Why do we need ethics in the first place? Of what use is this responsibility? Why do we need boundaries in our use of GenAI? Can't we be limitless? Are ethics and responsibility essential? If yes, why? To answer this, we must answer the primary question: "Why are morality and ethics necessary?" "Why should one be moral, ethical, and responsible?" I assume the answer to that question will automatically answer "Why do we need to use GenAI ethically and responsibly?"

Significance of Ethics and Responsibility

The primary reason that speaks for the significance of boundaries, i.e., the importance of the ethical and moral use of GenAI tools in academia, is to escape feelings of guilt, remorse, and injustice. Now, one thing seems clear: using GenAI tools in academia does not cause any major harm to others, physically or psychologically. But is that really the case?

There is no direct harm, but indirectly, the mindless use of GenAI tools in academia can cause significant damage. The misinformation compiled by these GenAI tools is particularly harmful to academic publication since, as we all know, advancement depends on the dissemination of accurate knowledge. There is a significant danger of harm when inaccurate facts are presented in a scientific environment. For example, suppose a research project aims to elaborate on how individuals' and communities' health problems are managed and cured, and the researcher mindlessly uses language models and statistical data generated by GenAI tools. Since such research is later applied to larger communities, it can affect their health and lives.
Consequently, if we aim to avoid such outcomes, we have to mindfully accept responsibility, not only in our own use but towards the community as well. Similarly, any bias in the content generated by GenAI, whether data-based or tool-based, can also affect communities and even individuals, potentially harming large groups on the basis of certain assumptions or prejudices. Even a minor glitch or a moment of ignorance can lead to huge problems.

Utilitarianism vs. Deontological Ethics: A Perspective

Given the negative impact, consequentialists will never trade communities for individual benefits. On the other hand, GenAI carries greater benefits and the potential to improve students' creative and innovative skills. Utilitarianism, as a consequentialist theory, cares only about the outcome of an action, regardless of the procedure. While it is nice to have a fair procedure, improvements in the pertinent outcomes are what ultimately count as ethically good. Good measures are those (like using AI) that enhance the related products (like the aesthetics of art).

Recently, a survey was conducted by Arvin V. Duhaylungsod and Jason V. Chavez to evaluate the utilitarian benefit of employing ChatGPT, a generative tool, in research projects. According to the participants' comments, there are several advantages to using ChatGPT and AIs in technology-based learning for creative and imaginative tasks. One participant mentioned that ChatGPT saves him time by allowing him to produce ideas while using reliable sources instead of reading articles on Google Scholar. According to the data acquired, students have an edge when using ChatGPT and AIs to generate content for class projects and to turn in unique and original work. To understand the significance of GenAI in academia, I have quoted students' responses below:

• Student 1: "For inventiveness in terms of essentially applying, in case you're looking for prompt answers to your inquiries. You have to read every piece or the entire article to find what you're looking for in some reliable sources, like Google Scholar, so you can't just find the answer for specific literature."
• Student 4: "I believe that being creative can aid in developing your ideas. I am, therefore, working on my thesis. Because using GenAI can help you develop and extract additional
ideas from what you want to perform in your research, with that I am beginning to construct my thesis for my fourth year of study."
• Student 5: "In my opinion, applying AI models to research or thesis projects could foster innovative thinking. In my opinion, it can bring your thoughts to their most excellent forms."

Twenty respondents concurred that using GenAI to complete tasks allows them to improve their capacity for creativity and innovation. Five participants reported that ChatGPT improved their ability to generate ideas, construct words, and broaden their knowledge of a specific subject. No doubt, it has the potential to generate ideas and develop things that are otherwise not possible; it is like having a conversation with an expert. Consequently, one can divide the utility of using GenAI into two parts: first, benefits, defined as the functional advantage or usefulness derived from using AI chatbots to serve individual purposes; second, the individual impact, such as improved task performance or productivity enhancement, which pertains to task-oriented objectives.

It is impossible to overlook, nevertheless, the fact that GenAI also restricts one's capacity for innovation and creativity. One explanation is that users become so dependent on these tools that they lose track of their ability to come up with original ideas. Furthermore, ten respondents emphasized that it is not a good idea to rely too much on AIs because they cannot supply precise information for references in academic texts.

However, if we evaluate this problem from the perspective of Kantian ethics, it seems to reject such use. Deontological ethics is based on a core philosophical principle that prioritizes an action's process over its outcome. Kant prefers the means over the ends, which is to say that Kantian ethics is not concerned with the consequences; instead, it emphasizes the means used to achieve the goals. For Kant, the moral actions of an individual must have their basis in a priori principles, or the maxims of actions. In principle, Kant was concerned with the quality of the intention behind an individual's actions and with an action's intrinsic moral worth, regardless of its consequences. This doctrine emphasizes individual duties, rules, and the inherent moral nature of actions. Now, the obligatoriness of an action depends on these principles, and an action, regardless of its consequences, will never be
permissible if it goes against these principles. For instance, breaking promises is wrong irrespective of any costs or benefits it results in.

Considering this perspective, core deontological ethics will never allow the use of GenAI tools in academic research, even if they generate better results and help students produce better research outcomes. The process is not authentic; the content is copied. It is as if one were stealing not only content but sometimes ideas, too. Such a procedure also seems to violate the humanity formulation of Kant's Categorical Imperative: do not use others merely as means. Here, however, we are not using any human being as such; it is just an AI tool, primarily an instrument, and it was invented to be used as an instrument only. Therefore, I believe this argument does not have much force. But then why doesn't it feel right? Why does it look bad and ethically wrong to rely on such tools? For Kantian ethics, it is the process that matters and not the outcome. Can we really award a prize to someone who skipped the essential process behind such outcomes? The ethical integrity of art connects to the artist and the process. Just think of two artists who participated in a painting competition: one used his imagination, skills, and experience to produce a certain piece, whereas the other just typed some prompts into DALL-E and got it generated. Who is more eligible for the prize?

Should we really judge the use of AI on deontological or teleological grounds?

The objection proposed by some researchers is built on the principle of "non-auditability." Operating on a neural network, OpenAI's ChatGPT presents advanced capability but also a hurdle known as the "black box" problem: while we may comprehend the broad principles of the model, the rationale behind individual decisions remains perplexing. The human brain served as the model for the neural network, which offers a dynamic, adaptable framework that records patterns but lacks an auditable, comprehensible formula.

We assume, for the purposes of our argument, that using AI has some quantifiable advantages. For instance, speeding up writing can increase researcher productivity in science and research. However, there are areas of society where decisions are made on deontological principles, and others where decisions are made on teleological grounds in a clear and reasonable manner. Consequently, to make decisions about AI, we should follow specific rather than general guidelines. For instance, it does not seem reasonable to reject the employment of AI
outright in any situation based solely on the aforementioned "black box" (non-auditability) issue. Instead, the inquiry is: is auditability a requirement inherent in the relevant domain? Let us demonstrate this with two thought experiments.

Sports is one sector of society where the majority of people concur that human performance is what matters most. The main focus of a football game is not scoring as many goals as possible; rather, it is how well teams and individual players perform in a "fair" match.

Thought Experiment 1: Your rival at the Olympics shows up for the 100-meter sprint finals wearing a prototype robotic leg, controlled by artificial intelligence, that surrounds his real legs. Your opponent claims that since this gives him the ability to sprint the 100 meters twice as quickly as you do, finishing in 5 seconds as opposed to 10, it is an ethically permissible upgrade. Do you think your opponent's employment of this AI-robotic improvement during the Olympics is morally righteous and equitable?

Now, if your belief aligns with the generally accepted principle that a sports game is all about playing fairly and better than your opponent, then the above thought experiment indeed violates the fair-play criteria. You will probably conclude that using AI in this manner is unethical, since it is unjust and "against the Olympic spirit."

On the other hand, most people agree that in the medical field, results come first. The medical-ethical principle of "beneficence" holds that physicians have a moral obligation to advocate for the course of treatment that best serves the patient's interests in terms of health, survival rates, and so on.

Thought Experiment 2: It has been determined that your child has a cardiac issue that requires immediate, high-risk surgery. There are two alternatives available at the university hospital: your baby can be operated on by the most skilled human surgeons available, with a 40% success rate, or an AI surgeon with an 80% survival rate may be used instead. Do you think it is morally right for the medical center to provide the AI alternative, and for you to decide which option is better for your baby's chances of survival?

Now again, if your sentiments align with general medical principles, you will choose the course of action that will increase the likelihood of your baby's survival with successful heart surgery. This is true even if you, as the parent, are aware of the black box problem and other possible
"unethical" viewpoints on the matter, such as the loss of human knowledge, the job losses of medical professionals, or the ethics involved in developing such AI surgeons. It is not immoral of you to put your child's survival above these worries. Similarly, you would undoubtedly think it immoral for medical professionals to deny the AI option for your infant just because they were afraid of losing their jobs.

The experiments above push us toward two opposing conclusions. In some scenarios (which were earlier handled by humans), it is ethical to use AI tools, but not in others. This is not contradictory; rather, it is because we judge various social domains using different criteria, depending on what is considered suitable in each case. Consequently, we should evaluate the deployment of AI tools using different standards. On the one hand, human accomplishment in athletics is recognized for the commitment, resilience, talent, and effort it represents; artificial intelligence would undermine these principles by prioritizing technological gain over human ability. On the other hand, patient outcomes are crucial in medical care: if artificial intelligence provides real advantages in drug research, diagnosis, or therapy, then it should be applied.

Now the question is: how should we deal with our problem? How can we evaluate the use of GenAI tools in academics? Is it more akin to the sports scenario or the medical scenario?

It may seem apparent to those outside of academia that science and research, which are endeavors aimed at creating new knowledge, should be outcome-driven. Our goal is for science, frequently supported by government money, to create knowledge. As in the medical scenario, it is therefore better to prioritize the "best outcomes" rather than the "fair process." In other words, science's contribution to society is the effective spread of information. Even with the metrics and rankings presently in common use, science is not about academics, institutions, or publications competing in a way akin to sports. Therefore, it is not "unfair" if author A uses generative AI to write 20% faster than author B. Rather, it appears that by not utilizing the finest available tools and technology, B is losing out on 20% of their potential productivity, time that could be better spent.

Given its social function in our field of study and science, we believe that a teleological evaluation of ethical usage needs to be given precedence and that artificial intelligence ought to
be applied where it may be a valuable instrument for the advancement of knowledge. However, one should not use these tools mindlessly. Therefore, instead of preferring one theory over the other, I would suggest a collaborative balance between the two, i.e., a necessary combination of consequentialist and deontological ethical frameworks. Deontological ethics emphasizes the morality of deeds regardless of their consequences, since it is based on principles and obligations. In the framework of GenAI, this means that during the study process one must adhere to established standards and ethical criteria with steadfast devotion: maintaining impartial representation, protecting privacy, and openly recording the methods. On the other hand, a consequentialist viewpoint demands that the possible effects of using GenAI in research be carefully considered. Researchers have to balance the advantages, such as enhanced productivity and novel perspectives, against the disadvantages, including biases and unforeseen repercussions. When using GenAI, it is important to strike a harmonious equilibrium between deontological doctrines and consequentialist thinking by considering both the wider social ramifications and ethical norms. This strategy ensures that the creation and application of GenAI in research publications advances knowledge while respecting the core values of moral behavior and accountability. Therefore, we need collaboration: mutually agreed principles that promote an ethical, responsible, and sensible approach allowing GenAI use in academia and research papers.

The Principles: An Ethical and Responsible Approach

In recognition of this, I propose the following principles for the collaborative engagement of humans and AI in academic research writing, because such engagement is not inherently unethical.

1. Sense of Responsibility: Sometimes the datasets used to train these generative models are not anonymized and might therefore include sensitive data that should not be used in research. Consequently, one should respect privacy and make sure not to use AI-generated data directly. Additionally, owing to their particular datasets, these GenAI tools have the potential to generate wrong or biased content. Cross-referencing the generated information is therefore another responsibility an individual is expected to fulfil while using these GenAI tools: comparing the generated data with multiple reputable sources to ensure reliability and accuracy. Therefore, one has a
responsibility to ensure that the content used in research, whether self-generated or generated by AI tools, is not wrong, sensitive, or biased.
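The cross-referencing step described above can be sketched as a small script. This is only an illustrative sketch: the function name, the sample records, and the in-memory "trusted index" are hypothetical stand-ins for a real bibliographic database such as Crossref.

```python
# Illustrative sketch of cross-referencing AI-supplied citations against a
# trusted source. All names and sample records here are hypothetical.

def check_citations(generated, trusted_index):
    """Flag AI-generated citations whose DOI is unknown to the trusted
    index, or whose title does not match the trusted record for that DOI."""
    flagged = []
    for cite in generated:
        doi = cite.get("doi")
        known_title = trusted_index.get(doi)
        if known_title is None:
            flagged.append((doi, "DOI not found in trusted source"))
        elif known_title.lower() != cite.get("title", "").lower():
            flagged.append((doi, "title does not match trusted record"))
    return flagged

# Hypothetical example data: one genuine-looking citation, one invented one.
trusted = {"10.1000/real.2023": "A Real Study of GenAI in Education"}
ai_citations = [
    {"doi": "10.1000/real.2023", "title": "A Real Study of GenAI in Education"},
    {"doi": "10.9999/fake.2023", "title": "An Invented Paper ChatGPT Made Up"},
]
print(check_citations(ai_citations, trusted))
# → [('10.9999/fake.2023', 'DOI not found in trusted source')]
```

In practice one would query an actual bibliographic service rather than a local dictionary; the point is simply that every AI-supplied citation is checked against an independent, reputable source before it enters a manuscript.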
2. Ethical Use: Under ethical use, one should be mindful of the societal implications of particular research and make sure that the research does not have long-term negative impacts on society. Secondly, one should also be cautious about assigning co-authorship to these tools. This cautious approach is recommended to escape feelings of guilt, perceptions of cheating, and concerns about fairness and justice. In academic research, co-authorship is a significant acknowledgment and recognition of an individual's or a tool's contribution to the work. When researchers directly use GenAI-generated content, they need to consider the level of contribution the AI tool made to the research. If the tool played a substantial role, then giving co-authorship is crucial, as it will help them avoid perceptions of cheating. It is important to maintain the integrity of the research process and ensure that authorship reflects genuine intellectual contributions (ChatGPT, 2023). {AI-generated paragraph: I have copy-pasted the paragraph generated by ChatGPT from a half-baked prompt I entered in the chat box.}

3. Universalization of the Use of Generative Tools in Academia: AI should be employed, not disregarded entirely, if it is helpful for legitimate and acceptable research, for which the authors, not the AI, must bear full responsibility. It is inappropriate to police writers or dictate how they approach their craft. However, the precise use of generative AI must be disclosed in the same manner as any other tool or technique, in accordance with the scientific ideal of transparency.

Conclusion:

The influence of generative tools on higher education, academic research, and publication is enormous, but it is crucial for researchers and academics to weigh carefully the ethical ramifications of these new technologies. Although these tools represent significant advances in natural language processing and artificial intelligence, care must be taken to utilize them ethically and responsibly. Development will happen, AI will improve, and people will use it. Restrictions, therefore, cannot stop us; what we require is ethics and responsibility in our use.
1 13
2
3
The police and punishment can only deter criminals; they will never stop crimes. A guilty person will always find better ways to steal; the best approach, therefore, is to change their mentality by providing a valid rationale for "Why stealing is wrong" or, in general terms, "Why they should follow the path of morality." Similarly, inventing plagiarism-checker tools and other technologies will never cure the problem. It is better, therefore, to adopt GenAI and to provide a rationale for why we should use these tools ethically and responsibly. Once we successfully manage these issues, GenAI can play a significant role in the upcoming research era.

Funding Statement: This article did not receive any funding from external sources.

Conflict of Interest: The author has no conflict of interest related to this article.
References:

1. Alser, M., & Waisberg, E. (2023). Concerns with the usage of ChatGPT in academia and medicine: A viewpoint. American Journal of Medicine Open, 9, 100036. https://doi.org/10.1016/j.ajmo.2023.100036
2. Bang, J., & Park, G. (2023). Uprising of ChatGPT and ethical problems. Robotics & AI Ethics, 8, 1–11. https://doi.org/10.22471/ai.2023.8.01
3. Chan, C. K. Y., & Hu, W. (2023). Students' voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00411-8
4. Duhaylungsod, A. V., & Chavez, J. V. (2023). ChatGPT and other AI users: Innovative and creative utilitarian value and mindset shift. Journal of Namibian Studies: History, Politics, Culture, 33. https://doi.org/10.59670/jns.v33i.2791
5. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., . . . Wright, R. (2023). Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
6. Jo, H. (2023). Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telematics and Informatics, 85, 102067. https://doi.org/10.1016/j.tele.2023.102067
7. Liu, Z., Roberts, R., Lal-Nag, M., Chen, X., Huang, R., & Tong, W. (2021). AI-based language models powering drug discovery and development. Drug Discovery Today, 26(11), 2593–2607. https://doi.org/10.1016/j.drudis.2021.06.009
8. Lund, B., Wang, T., Mannuru, N. R., Nie, B., Shimray, S. R., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750
9. Mhlanga, D. (2023). Open AI in education: The responsible and ethical use of ChatGPT towards lifelong learning. Social Science Research Network. https://doi.org/10.2139/ssrn.4354422
10. Schlagwein, D., & Willcocks, L. P. (2023). 'ChatGPT et al.': The ethics of using (generative) artificial intelligence in research and science. Journal of Information Technology, 38(3), 232–238. https://doi.org/10.1177/02683962231200411
11. Wen, J., & Wang, W. (2023). The future of ChatGPT in academic research and publishing: A commentary for clinical and translational medicine. Clinical and Translational Medicine, 13(3). https://doi.org/10.1002/ctm2.1207

Bibliography:

1. Trusty, J. (2023, April 24). The dark side of ChatGPT has real world consequences. Medium. https://poolmarketing.medium.com/the-dark-side-of-chatgpt-has-real-world-consequences-90bff03a00bf
2. Stanford University. (2023, February 14). How will ChatGPT change the way we think and work? Stanford News. https://news.stanford.edu/2023/02/13/will-chatgpt-change-way-think-work/
3. Bohannon, M. (2023, June 8). Lawyer used ChatGPT in court—and cited fake cases. A judge is considering sanctions. Forbes. https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=6fe999bc7c7f
4. The impact of AI: How artificial intelligence is transforming society. (n.d.). https://www.3dbear.io/blog/the-impact-of-ai-how-artificial-intelligence-is-transforming-society
5. UNC-Chapel Hill Writing Center. (2023, September 8). Generative AI in academic writing. The Writing Center, University of North Carolina at Chapel Hill. https://writingcenter.unc.edu/tips-and-tools/generative-ai-in-academic-writing/
6. Palomo, A. J. (2023, October 4). Are there benefits to using generative AI in the classroom? Medium. https://medium.com/@adrianapalomo732/are-there-benefits-to-using-generative-ai-in-the-classroom-ede5ece5ae40
7. Dr. Q writes AI-infused insights. (2023, May 30). Ethical and responsible use of generative AI in scholarly research. Medium. https://drqwrites.medium.com/ethical-and-responsible-use-of-generative-ai-in-scholarly-research-a96b7e3cf4f
8. Bilal, M. (2023, August 6). Ethics and artificial intelligence: A utilitarian perspective. Medium. https://medium.com/@bilal_81623/ethics-and-artificial-intelligence-a-utilitarian-perspective-18d12409932e