Background (from Wiki)
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,
which proposed what is now called the Turing test as a criterion of intelligence. This criterion
depends on the ability of a computer program to impersonate a human in a real-time written
conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—
on the basis of the conversational content alone—between the program and a real human. The
notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program
ELIZA, published in 1966, which seemed to be able to fool users into believing that they were
conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was
genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:
[In] artificial intelligence ... machines are made to behave in wondrous ways, often sufficient to
dazzle even the most experienced observer. But once a particular program is unmasked, once its
inner workings are explained ... its magic crumbles away; it stands revealed as a mere collection
of procedures ... The observer says to himself "I could have written that". With that thought he
moves the program in question from the shelf marked "intelligent", to that reserved for curios ...
The object of this paper is to cause just such a re-evaluation of the program about to be "explained".
Few programs ever needed it more.
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition
of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-
programmed responses that can move the conversation forward in an apparently meaningful way
(e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE
ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the
processing involved has been merely superficial. ELIZA showed that such an illusion is
surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt
when conversational responses are capable of being interpreted as "intelligent".
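The cue-word mechanism described above can be sketched in a few lines. This is a minimal illustration of the general pattern-matching technique, not Weizenbaum's original script: the rules and responses here (the MOTHER cue from the text, plus an invented "I am ..." reflection rule) are assumptions for demonstration.

```python
import re

# Ordered list of (cue pattern, canned response) rules; the first match wins.
# The MOTHER rule is the example from the text; the "I am ..." rule is a
# hypothetical reflection rule of the kind ELIZA-style scripts typically use.
RULES = [
    (re.compile(r"\bmother\b", re.IGNORECASE), "TELL ME MORE ABOUT YOUR FAMILY"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "HOW LONG HAVE YOU BEEN {0}?"),
]

# Content-free fallback that still appears to move the conversation forward.
DEFAULT = "PLEASE GO ON"

def respond(user_input: str) -> str:
    """Return the canned response for the first matching cue, else a default."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo captured fragments back into the template, creating the
            # illusion that the input was understood.
            return template.format(*match.groups())
    return DEFAULT
```

Note that no representation of meaning is built anywhere: the "understanding" is entirely an artifact of surface pattern matching and templated echoes, which is precisely the illusion the passage describes.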
Interface designers have come to appreciate that humans' readiness to interpret computer output as
genuinely conversational—even when it is actually based on rather simple pattern-matching—can
be exploited for useful purposes. Most people prefer to engage with programs that are human-like,
and this gives chatbot-style techniques a potentially useful role in interactive systems that need to
elicit information from users, as long as that information is relatively straightforward and falls into
predictable categories. Thus, for example, online help systems can usefully employ chatbot
techniques to identify the area of help that users require, potentially providing a "friendlier"
interface than a more formal search or menu system. This sort of usage holds the prospect of
moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked
"genuinely useful computational methods".From blog & online