Ethical Report
Shiyu LEI
Part A:
Of all the problems posed by the growth of ChatGPT and other generative AI
tools, unethical use is among the most serious. It takes many forms, including
"deep research fakes", privacy breaches, and gender bias in ChatGPT's outputs.
Those affected range from individuals to society as a whole. For instance, the
entire academic field could be severely impacted by deep research fakes,
including the knock-on effects on future research (Grimes et al., 2023). By the
nature of generative AI, ChatGPT uses deep learning and machine learning
algorithms to generate conversational text responses (Wach et al., 2023).
Although it is accessed only through the web and electronic devices, its
dialogue-based design and its ability to absorb almost any data mean that users
can input requests into ChatGPT and receive an immediate response. This
introduces the thesis of this report: the unethical use of data, and the dangers
of such behaviour.
Part B:
Political:
1. ChatGPT may be used to generate political propaganda or to spread misinformation (Motoki et al., 2024).
2. ChatGPT is inherently politically biased: it favours certain political parties, and people who use it to decide their votes during an election may be influenced.
3. ChatGPT may be used to produce deepfake images serving the interests of one political party, for example in the geopolitical conflict in Gaza.

Economic:
1. In financial markets, many people use ChatGPT to make investment decisions, which may cause financial decisions to become identical and create conflict in the market (Khan & Umer, 2024).
2. Many firms may use generative AI to plan their finances or investment strategies, yet the results the model generates may be wrong or biased and could lead to a financial crisis.

Social:
1. Worsened gender bias: ChatGPT's training data is outdated and may further entrench outdated perceptions of gender and reinforce gender bias (Gross, 2023).
2. Generative AI may replace content creators and other workers performing repetitive labour, which may lead to social conflict as the unemployment rate rises.
3. Due to weak privacy protection, personal information may be leaked online, causing a crisis of social trust.

Technological:
1. The development of generative AI enables the creation of "deep research fakes", which may hinder the healthy development of scientific research as the volume of fake documents and data increases and fair competition is jeopardised (Grimes et al., 2023).
2. Due to weak intellectual property protection, many scientists may lose their incentive to innovate.

Environmental:
Energy consumption and greenhouse gas emissions: the energy required to train and run the model is enormous and can produce tonnes of greenhouse gases.

Legal:
Intellectual property rights are violated, as ChatGPT uses documents without seeking permission or providing references.
From the political perspective, the main problem is that ChatGPT is politically
biased and may produce unfavourable political consequences. It may also be used
as a propaganda tool: because ChatGPT's process for generating results is
non-transparent, it can easily be exploited in geopolitical conflicts, since it
is hard to verify the credibility of the generated images (Motoki et al., 2024).
From the economic and social perspectives, according to Khan and Umer (2024),
the application of ChatGPT in finance poses a threat to asset markets, as false
information and flawed models feed into decision making. With no decision
transparency or system of accountability, a crisis could occur as more and more
analysts use AI to generate investment models. Socially, the flourishing of AI
may also displace the human workforce: as companies no longer need as many
employees, the unemployment rate could climb, leading to social crisis.
From the technological perspective, the development of ChatGPT makes it easier
to fake research results, as the lack of transparency makes it difficult for
academics to check the reliability and validity of generated results. As Grimes
et al. (2023) state, many researchers may fake their research for scholarships,
and ChatGPT has made it easy for them to fabricate their results. If this
phenomenon becomes widespread over time, it will affect the healthy development
of academia and hinder scientific and technological progress.
Part C:
Three main ethical problems were identified from the PESTEL analysis: "deep
research fakes", "fake reviews", and "privacy breaches". To better analyse the
consequences for all stakeholders and the possible actions for OpenAI, this part
applies step 4 of the 7-step ethical framework, combined with the UN SDGs, to
illustrate why it is crucial for OpenAI to act quickly and appropriately.
1) Deep research fakes
The non-transparent nature of large language models creates the possibility of
academic fakery, and this has caused widespread concern in the academic
community about the authenticity of research content and results, thereby
hindering the normal development of research. A deep research fake is the use of
generative AI to fabricate and manipulate data in order to obtain the study
results one desires (Grimes et al., 2023). The motives for faking research
results are often monetary incentives or fame. This violates the 4th UN SDG,
quality education: universities have a responsibility to ensure that every
student receives a fair, quality education, but academic fraud violates
integrity, undermines equal competition, harms the academic environment,
increases the number of retractions of journal submissions, and damages the
reputation of journals and universities alike.
Regarding these problems, the first step is to identify the key stakeholders:
the researchers, the publishers, OpenAI, and possibly school faculty as well.
2) Fake reviews
Another problem associated with generative AI is fake reviews. Reviews are an
important influence on customers' purchases, so fake reviews not only undermine
customers' trust in a company but can also cause customers to lose faith in
online shopping platforms as a whole, with a major impact on a company's
credibility and revenue. The development of generative AI such as ChatGPT has
made fake reviews appear more authentic, worsening the problem and affecting
countless businesses and consumers (Shukla & Goh, 2024). Fake reviews disrupt
the normal market order and undermine customer trust, hampering the market's
sustainable development. This violates the 8th SDG, decent work and economic
growth: fake reviews may jeopardise the online shopping market, chasing a small
profit while losing the larger prize of a sustainably growing economy. While
platforms such as Amazon have taken steps against fake reviews, these efforts
have so far been unsuccessful.
3) Privacy breaches
One of the main ethical issues related to ChatGPT is the violation of privacy.
OpenAI uses user-related data to create synthetic data that can be used to
identify individuals or groups of people, and this collected data may then be
sold to merchandising companies, enabling them to better target customers and
create advertisements (Schäfer et al., 2023). This is a direct breach of the
16th SDG, peace, justice and strong institutions, as consumers effectively
surrender their individual rights to privacy and justice.
4) Ethical framework
A common factor in all three issues is the use of customer data collected by
ChatGPT, which by its very nature is a violation and misuse of customer
information. The solution, first and foremost, is to prioritise user privacy and
security. This includes implementing strong data protection measures, ensuring
transparency in the use of personal data, and regularly evaluating and updating
security protocols, as well as setting up regulatory teams, with representatives
of different stakeholders on each team, to ensure that the rights and interests
of each stakeholder are met. The consequences of these actions would be as
follows. First, satisfaction with the product would increase. Second,
businesses' reputations would improve and transaction volumes would grow. For
universities and journals, academic reputation and credibility would improve,
and journals would no longer need to review each article so strictly, as the
number of fake research papers would drop significantly; this would also reduce
the number of retractions and help maintain fair competition in the academic
world.
For OpenAI, securing customer data is also crucial to the company's own
development. First, consider the potential fines for companies that misuse data:
under the EU's General Data Protection Regulation (GDPR), companies that violate
their customers' privacy may be liable for an administrative fine of up to 4% of
annual global revenue or €20 million, whichever is higher (Schäfer et al.,
2023). Second, from a revenue perspective, if a company must constantly worry
about possible privacy lawsuits from customers, this will hamper its subsequent
innovation and hurt its revenue.
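To make the scale of this exposure concrete, the fine cap can be written as a simple formula (a sketch assuming the GDPR-style "whichever is higher" rule described above; the revenue figures below are hypothetical):

\[
\text{Fine}_{\max} = \max\bigl(\text{€}20\text{ million},\; 0.04 \times \text{annual global revenue}\bigr)
\]

For a firm with €2 billion in annual revenue this gives \(0.04 \times \text{€}2\text{ billion} = \text{€}80\text{ million}\), while for a firm with €100 million in revenue the €20 million floor applies; the exposure therefore grows in direct proportion to revenue once a company is large enough.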
Thus, when respecting customers' privacy becomes the company's global execution
goal, the company can implement a globally uniform privacy policy. Freed from
worrying about whether its operations violate the laws and regulations of each
country, or whether a lawsuit is looming, the company can devote more energy to
its core business, such as technological innovation.
Part D:
As mentioned in Part C, the main solution for OpenAI is to protect customer
privacy, which can be achieved by holding regular stakeholder meetings and, most
importantly, by implementing a much stricter data protection strategy (Schäfer
et al., 2023). However, risks remain for the company. Because machine learning
and AI are inherent to its product, it may still face heavy regulation around
the world, and countries with a more conservative stance on AI, such as Italy
and China, may still ban it. When implementing the new rules, there may also be
pushback from the board, as some members may place revenue above all else; it is
then important to hold board meetings to discuss how sustainable growth can
benefit the company in the long term. Furthermore, when implementing a global
strategy for securing customer data, data security laws vary from country to
country; regulation is certainly stricter in the EU than in some developing
countries. Applying a single data protection strategy may therefore be
difficult, while drawing up an individual plan for each country may be costly,
and a given proposal may not be approved. These are all potential risks. Yet
proposing the protection strategy is already a good start towards the goal of
sustainable business, and it would make sense to begin in more developed
countries: if the strategy passes regulatory scrutiny there, it will be much
easier to implement in countries with lighter legislation.
Part E:
The first thing I learned from writing this report is how to plan and structure
an article: first identify the problem and construct a clear framework for the
audience, then build further analysis on that foundation. Working on this report
has also changed my usual writing process. To identify the problem and its
solutions, I first needed to find all the references and then construct the
article around them. Previously, I would write the article first and look for
matching references afterwards, but I have realised that this takes a great deal
of time just to find related references and can leave the article poorly
organised. I also learned that a good article should be interlinked: each
paragraph should provide the necessary information and lead into the thesis of
the next.
Reference list
Grimes, M, von Krogh, G, Feuerriegel, S, Rink, F & Gruber, M 2023, 'From
scarcity to abundance: scholars and scholarship in an age of generative
artificial intelligence', Academy of Management Journal, vol. 66, no. 6, pp.
1617–1624, viewed 29 March 2024.
Gross, N 2023, 'What ChatGPT tells us about gender: a cautionary tale about
performativity and gender biases in AI', Social Sciences (Basel), vol. 12, no.
8, p. 435, viewed 29 March 2024.
Khan, MS & Umer, H 2024, 'ChatGPT in finance: applications, challenges, and
solutions', Heliyon, vol. 10, no. 2, e24890, viewed 29 March 2024,
<https://doi.org/10.1016/j.heliyon.2024.e24890>.
Motoki, F, Pinho Neto, V & Rodrigues, V 2024, 'More human than human: measuring
ChatGPT political bias', Public Choice, vol. 198, pp. 3–23, viewed 29 March
2024, <https://doi.org/10.1007/s11127-023-01097-2>.
Office of the High Commissioner for Human Rights (OHCHR) 2015, Sustainable
Development Goals Human Rights Table, online document, OHCHR, viewed 29 March
2024,
<https://www.ohchr.org/sites/default/files/Documents/Issues/MDGs/Post2015/SDG_HR_Table.pdf>.
Schäfer, F, Gebauer, H, Gröger, C, Gassmann, O & Wortmann, F 2023, 'Data-driven
business and data privacy: challenges and measures for product-based companies',
Business Horizons, vol. 66, no. 4, pp. 493–504, viewed 28 March 2024,
<https://search-ebscohost-com.wwwproxy1.library.unsw.edu.au/login.aspx?direct=true&db=buh&AN=164249129&site=ehost-live&scope=site>.
Shukla, AD & Goh, JM 2024, 'Fighting fake reviews: authenticated anonymous
reviews using identity verification', Business Horizons, vol. 67, no. 1, pp.
71–81, viewed 27 March 2024,
<https://search-ebscohost-com.wwwproxy1.library.unsw.edu.au/login.aspx?direct=true&db=buh&AN=174528821&site=ehost-live&scope=site>.
Wach, K, Doanh Duong, C, Ejdys, J, Kazlauskaitė, R, Korzynski, P, Mazurek, G,
Paliszkiewicz, J & Ziemba, E 2023, 'The dark side of generative artificial
intelligence: a critical analysis of controversies and risks of ChatGPT',
Entrepreneurial Business and Economics Review, vol. 11, no. 2, pp. 7–30, viewed
29 March 2024.