
ChatGPT responds in the same language you’re using. It continues building on the conversation as your interactions with it proceed. This threaded interaction appears as a real-time dialog and creates the semblance of a conversation or a highly intelligent response to your request.

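If you’re curious what that threading looks like in practice, here’s a minimal Python sketch of the idea: each turn is appended to a running history, so every later reply can draw on everything said so far. The reply_to function is a hypothetical placeholder for whatever generates the model’s answer; it isn’t part of any real ChatGPT interface.

conversation = []  # running history of {"role": ..., "content": ...} turns

def reply_to(history):
    # Hypothetical stand-in: a real client would send the whole history
    # to a language model and return its generated answer.
    return f"(model reply informed by {len(history)} earlier messages)"

def send(user_text):
    conversation.append({"role": "user", "content": user_text})
    answer = reply_to(conversation)  # the model "sees" the entire thread
    conversation.append({"role": "assistant", "content": answer})
    return answer

print(send("Explain threading in one sentence."))
print(send("Now give an example."))  # this turn builds on the first exchange
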
However, the number of ChatGPT responses you can get in a single conversation may need to be limited to keep this AI model from providing weird responses, making errors, or becoming offensive. To curb that behavior, Microsoft limited ChatGPT in Bing to five responses per user conversation. You’re free to start another conversation, but the current exchange can’t continue past the capped limit.

ChatGPT generates rather than regurgitates content, which means it can make erroneous assumptions, lie, and hallucinate. ChatGPT or any other generative AI model is not an infallible source of truth, a trustworthy narrator, or an authority on any topic, even when you prompt it to behave like one. In some circumstances, accepting it as an oracle or a single source of truth is a grave error.

Understanding What ChatGPT Is and Isn’t


The capability to produce a close semblance to human communication is primarily responsible for that skin-prickling feeling commonly referred to as the heebie-jeebies. ChatGPT sounds and acts almost too human.

The interaction between users and ChatGPT feels different from what people have experienced with earlier software. For one, software using earlier iterations of natural-language processing is generally limited to short exchanges and predetermined responses. ChatGPT can generate its own content and continue a dialog for much longer.

ChatGPT, like all machine-learning (ML) and deep-learning (DL) models, “learns” by exposure to patterns in massive training datasets that it then uses to recognize these and similar patterns in other datasets. ChatGPT does not think or learn like humans do. Rather, it understands and acts based on its pattern-recognition capabilities.
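As a very rough analogy for that kind of pattern learning, the short Python sketch below “learns” which word tends to follow which by counting pairs in a tiny sample text. GPT models learn patterns with neural networks rather than literal count tables, so treat this only as an illustration of the idea.

from collections import defaultdict

# Toy illustration only: count which word follows which in a tiny sample.
sample_text = "the cat sat on the mat and the cat slept"
words = sample_text.split()

pair_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(words, words[1:]):
    pair_counts[current_word][next_word] += 1

print(dict(pair_counts["the"]))  # {'cat': 2, 'mat': 1}: the patterns seen in the data
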

ChatGPT supports 95 languages as of this writing. It also knows several programming languages, such as Python and JavaScript.



Generative AI also differs from programmed software because it can consider context as well as content in natural-language-based prompts.

Chat in ChatGPT’s name is a reference to its use of natural-language processing and natural-language generation. GPT stands for generative pretrained transformer, which is a deep-learning neural network model developed by OpenAI, an American AI research and development company. You can think of GPT as the secret sauce that makes ChatGPT work like it does.

ChatGPT does not think like humans do. It predicts, based on patterns it has learned, and responds with informed guesses about the preferred or acceptable word order. This is why the content it generates can be amazingly brilliant or woefully wrong. The magic, when ChatGPT is correct, comes from the accuracy of its predictions. Sometimes ChatGPT’s digital crystal ball is right, and sometimes it isn’t. Sometimes it delivers truth, and sometimes it spews something more vile.
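To see why pattern-based guessing can be brilliant one moment and wrong the next, here’s a self-contained extension of the counting sketch from earlier in this chapter: it “predicts” the next word simply by picking the follower it has seen most often. This is a deliberate oversimplification for illustration, not how GPT actually computes its answers.

from collections import defaultdict

# Rebuild the toy word-pair counts, then make naive next-word predictions.
words = "the cat sat on the mat and the cat slept".split()
pair_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(words, words[1:]):
    pair_counts[current_word][next_word] += 1

def predict_next(word):
    followers = pair_counts.get(word)
    if not followers:
        return None  # no learned pattern to draw on
    return max(followers, key=followers.get)  # the most frequently seen follower

print(predict_next("the"))  # 'cat': a confident guess backed by a common pattern
print(predict_next("mat"))  # 'and': "correct" only because a single example was seen

With so little data, the guess for a rare word rests on a single example. That kind of thin or skewed evidence is, loosely speaking, how a real model ends up confidently wrong.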

Unwrapping ChatGPT fears


Perhaps no other technology is as intriguing and disturbing as generative artificial intelligence. Emotions were raised to a fever pitch when 100 million monthly active users snatched up the free research preview version of ChatGPT within two months after its launch. You can thank science fiction writers and your own imagination for both the tantalizing and terrifying triggers that ChatGPT is now activating in your head.

But that’s not to say that there are no legitimate reasons for caution and concern. Lawsuits have been launched against generative AI programs for copyright and other intellectual property infringements. OpenAI and other AI companies and partners stand accused of illegally using copyrighted photos, text, and other intellectual property without permission or payment to train their AI models. These charges generally spring from copyrighted content getting caught up in the scraping of the internet to create massive training datasets.

In general, legal defense teams are arguing about the inevitability and unsustainability of such charges in the age of AI and requesting that the charges be dropped. The lawsuits regarding who owns the content generated by ChatGPT and its ilk lurk somewhere in the future. However, the US Copyright Office has already ruled that AI-generated content, be it writing, images, or music, is not protected by copyright law. In the US, at least for now, the government will not protect anything generated by AI in terms of rights, licensing, or payment.

Meanwhile, realistic concerns exist over other types of potential liabilities. ChatGPT and its kind are known to sometimes deliver incorrect information to users and other machines. Who is liable when things go wrong, particularly in a life-threatening scenario? Even if a business’s bottom line is at stake and not someone’s life, risks can run high and the outcome can be disastrous. Inevitably, someone will suffer, and likely some person or organization will eventually be held accountable for it.

Then there are the magnifications of earlier concerns such as data privacy, biases, unfair treatment of individuals and groups through AI actions, identity theft, deep fakes, security issues, and reality apathy, which is when the public can no longer tell what is true and what isn’t and thinks the effort to sort it all out is too difficult to pursue.

In short, ChatGPT accelerates and intensifies the need for the rules and standards currently being studied, pursued, and developed by organizations and governments seeking to establish guardrails aimed at ensuring responsible AI. The big question is whether they’ll succeed in time, given ChatGPT’s incredibly fast adoption rate worldwide.

Examples of groups working on guidelines, ethics, standards, and responsible AI frameworks include the following:

»» ACM US Technology Committee’s Subcommittee on AI & Algorithms
»» World Economic Forum
»» UK’s Centre for Data Ethics
»» Government agencies and efforts such as the US AI Bill of Rights and the European Council of the European Union’s Artificial Intelligence Act
»» IEEE and its 7000 series of standards
»» Universities such as New York University’s Stern School of Business
»» The private sector, wherein companies make their own responsible AI policies and foundations
