Unit 6, Listening 1: What Kind of “Smart” Is AI?
Bob Chesney: Welcome to The Bob Chesney Show. I’m Bob Chesney, and today we have two amazing
guests who follow the fast-changing world of artificial intelligence. Dr. Anna VanDyke is (1)
executive director of the Center for Machine–Mind Studies in Brussels, Belgium.
Dr. VanDyke, welcome.
Dr. Anna VanDyke: Thanks, Bob. Glad to be here.
Chesney: And Dr. Joseph Ngoma is senior research fellow in (2) cybernetics at Craymount
University in Canada. Dr. Ngoma?
Dr. Joseph Ngoma: Hello, Bob. Nice to be back.
Chesney: Let me start with you, Anna. So, are machines now as smart as we are?
Dr. VanDyke: That’s not fair, Bob! A tough question right at the start. But I’m going to go out on a (3) limb1 and say . . . yes, in some ways, but no in other ways. It all depends on what we mean by “smart,” doesn’t it?
Dr. Ngoma: It certainly does. In the old days, non-scientists used to be amazed that machines could
figure out (4) moon landings and store gazillions2 of phone numbers. But that doesn’t
seem like being smart anymore.
Chesney: Why not? I can’t do it.
Dr. Ngoma: Neither can I. But is a gym locker smart? Is a calculator? You and I can choose how to use
those (5) equations or phone numbers wisely. We can reject any (6)
inappropriate uses of such things. Machines aren’t wise.
Chesney: Point taken3, but let me bring up some survey findings that surprised me. (7)
Apparently, a survey asked experts in AI to predict how soon machines could do
certain jobs better and more cheaply than humans. Here are some examples. Driving a
truck—10 years. So these people think truck driving will be taken over by machines really
soon. Writing a best-selling novel, 30 years. (8) Performing surgery, 40 years.
Really? Risky medical procedures done by machines?
Dr. VanDyke: My team in Brussels is very familiar with those predictions, Bob. But let’s remember who the (9) predictions came from. They were experts in AI who were at (10) conferences with other experts in AI. There was probably a lot of bias in favor of AI in that
crowd. But it is pretty obvious that AI will take over a lot of jobs. Most jobs are largely routine4. You have
a certain set of tasks and certain (11) procedures for performing them.
Chesney: But writing a novel? Surgery?
Dr. VanDyke: You’d be surprised at how routine even those tasks can be. Remember that most (12) laser eye surgery now is automated. When those (13) surgeries first appeared 30 years ago, people thought, “What? Risk my eyes in automated surgery? No way.” Well, here we are. Of course, very (14) powerful machines with enough information can do things that even seem creative to us. Those automated novels of the future may not be just boring, (15) predictable junk. Still, I know what you mean about surgeries. Some are full of surprises, and a human will probably have to be standing by. It can be high-risk, of course.

1 go out on a limb: verb phrase do something that could be dangerous
2 gazillions: noun an unspecified very large number
3 point taken: noun phrase an expression that means “I understand what you mean”
4 routine: adjective made up of actions that are done over and over in a regular way
Chesney: Well, what about having (16) highways full of automated trucks? Talk about risk.
Dr. Ngoma: I have to tell you, Bob, I am a very careful person. But I think AI is completely smart enough
for driving trucks—or it soon will be. Ninety percent of truck driving is on long,
predictable highways. It’s the first mile and the last one that are the problems. Humans still
have to do those bits.
Chesney: Why? Aren’t the AI systems clever enough for the first and last mile?
Dr. Ngoma: Well, not yet anyway. Too many unpredictable things happen on city streets.
Chesney: Anna, I thought AI learned really fast. Why can’t it learn to drive a truck for one or two
difficult miles, as Joseph brought up?
Dr. VanDyke: Well, let’s remember how a robot5 like an AI truck knows where to go. It uses two main
layers of information. The first layer comes from a GPS system—a set of stored maps and a
signal that locates the vehicle on those maps. The second layer comes from a set of (17)
sensors—little devices kind of like cameras that “read” the details of its (18)
environment. Problems mostly involve the sensors. They pick up too many
unfamiliar things in their environment. They can get confused by (19) weird
lighting and shadows, coatings of ice, blowing dust, or people or animals behaving in
unexpected ways—countless other things. Humans are still way better at (20)
responding to messy surroundings.
Chesney: Switching topics a bit, I think a lot of people are afraid that hackers6 can take over AI
systems. And not just that someone might take over a truck’s controls. Think of everything
else that is under AI control—our electric power system, the city water systems, the (21)
banking system, air traffic—wow!
Dr. Ngoma: I won’t say those worries are silly, Bob. If bad guys did take over one of those systems, we
could have serious trouble. But the good news is that many very smart AI experts
specialize in (22) security, and they are dedicated professionals.
Chesney: Again, it comes down to having smart humans, doesn’t it?
5 robot: noun a machine that can do some of the tasks humans usually do
6 hacker: noun a person who uses a computer to look at and/or change information on another computer without permission
Unit 6, Listening 2, Activities A and B, Pages 134–135
Asking the Right Questions about AI
Teacher: So, today let’s go further into last week’s topic, “Asking the Right Questions about (1)
Artificial Intelligence—AI.” Can anyone summarize? Din?
Male 1: Mostly, we shouldn’t really ask whether AI is smarter than humans. The answer is always
going to be “in some ways, yes, and other ways, no.”
Teacher: Right. It’s kind of like asking, “Which tool is better, a hammer or a (2)
screwdriver?” It depends. Better for what—or smarter at what?
Female 2: And we started discussing, “Why do people always want to compare robots—or AI or
whatever—to humans?”
Teacher: Yes, and that’s a good place to start today. Let me show you two pictures. Here’s the first
one. What do you see?
Female 2: Uh . . . it’s obviously meant to look like a woman, but it has metal “hair.” Is she on a talk
show?
Teacher: Yes. This is a robot called Sophia, built by a Hong Kong company, Hanson Robotics. She . . .
um, it . . . is a celebrity and has been interviewed on late-night talk shows. It is the first AI
to be (3) declared a citizen of a real country. Saudi Arabia granted her . . . it . . .
citizenship in 2017.
Male 1: It is really hard to say “it” instead of “she,” isn’t it?
Female 2: What does . . . it do?
Teacher: Sophia mostly does publicity tours. Its maker, David Hanson, says it is a prototype7 of a
“social robot.” They’re designed to be companions for sick or (4) elderly people—
eventually. They might also do simple service jobs, like checking people out at stores, or
even crowd control at (5) concerts and political events.
Female 2: So the goal is for them to seem human, and be pleasant, for a short time.
Teacher: True. The AI-to-human comparison is important in this case. OK, next picture.
Male 2: One of those robots that explore Mars. A rover.
Teacher: Yes, this is the Mars rover called “Curiosity.” It is exploring a rocky landscape on Mars. It
looks like a metal (6) insect of some type. No one has tried to make this one look
human.
Male 2: There’s no reason for it to look human. It never deals with humans. It collects samples all
alone on Mars and sends data to computers on Earth.
Teacher: So, which is smarter, Curiosity or Sophia?
Male 1: I thought we weren’t supposed to ask that!
Male 2: Well, Sophia is just for show. She, or it, or whatever, just smiles and talks. The rover can do
more real work.
7 prototype: noun an early model of a machine that shows what later models could do
Female 2: But Sophia has conversations, right? It’s hard to program a robot to do that.
Teacher: I’m not taking sides here. Just a few facts. Curiosity landed on Mars in 2012. NASA
scientists said that it had less computing power than the (7) average cell phone of
that year. So they don’t consider it a (8) genius—no super-smart, chess-playing
computer—but to them that’s OK. To NASA, Curiosity needs toughness more than smarts.
It traveled 350 million miles, landed on Mars, and has spent years out in the open, through
extreme temperatures and space radiation and (9) sandstorms, collecting
samples and analyzing them.
Male 2: What about Sophia? I’m assuming she’s smarter than a cell phone.
Teacher: Yes. Katya was right. Conversation requires a lot of intelligence. The voice feature on your
phone, something like Siri or Alexa, only has to (10) handle speech. A totally
successful social robot would have to keep up with a fast stream of complicated
information—speech, facial expressions, (11) gestures—even just to make small
talk8. It’s not surprising that Sophia makes a lot of mistakes.
Male 1: Oh, yeah. I heard she said she wants to destroy humans.
Teacher: Yes. Her maker, David Hanson, was joking around in an interview and asked, “Do you want
to destroy humans? Please say no,” but before he could finish, Sophia said, “OK. I will
destroy humans.”
Female 2: So Sophia hasn’t been trained well enough, or can’t learn fast enough, or something. But
maybe future (12) versions will be smarter. That is getting close to one kind of
human intelligence.
Male 2: A couple of weeks ago, we talked about the Turing Test. Could Sophia pass the Turing Test?
Teacher: It’s important to remember that the Turing Test was developed because Alan Turing (13)
considered it impossible to answer the question “Can machines think?” Like us in
this class, he found the question too muddled up by different definitions of words. So he
developed a narrower criterion: Can an AI device (14) convince a human
communicating with it that it is human? The human can’t see it or hear it. They
communicate by written messages. Unless Sophia goes through an actual Turing Test, we
cannot know whether it would pass. But what are your guesses?
Female 2: No.
Male 1: No.
Male 2: No. They tried to make Sophia look and sound sort of human, but that doesn’t matter in a
Turing Test. It’s all about the content of communication. It sounds like she gets the content
(15) messed up. Oops, I said “she.”
Teacher: I agree with you guys. No for Sophia. And Curiosity doesn’t communicate at all, except in
(16) digital code. But it’s interesting to note what question Turing chose to ask
about intelligence: Does the machine seem human? Hmmm. Something to think about.
8 make small talk: verb phrase to have a short conversation about unimportant topics