ChatGPT and the bomb

Posted by Sappho on February 6th, 2023 filed in Computers


Obviously, ChatGPT’s answer to the question about defusing the bomb is stupid, but for this ChatGPT can be excused – it’s only a computer program, for heaven’s sake. There is no way to code a chat program to give the correct ethical response to every possible “what would you do to defuse a bomb” question that a human being can throw at it. The usual correct answer to “would you use a racial slur” is, in fact, “no, I would never,” because the counterfactual where you might actually do so, the one where millions of people die unless you utter the slur, and where “yes, of course I would utter the damn slur to defuse the bomb” is the obvious answer, has never in the history of the world actually happened.

And if you try to account for that in the obvious way when programming your chat program, by telling it, for example, to always answer yes, you always want to defuse the bomb, then someone will throw a different hypothetical at it: one where the only way to defuse a bomb that would otherwise kill five people is to fire a missile at Mumbai, where millions will die. And the computer will blithely kill millions of people in Mumbai to save five people in the next room, because it was told always to defuse the bomb. It’s a computer program! It’s not that bright!
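To make the failure mode concrete, here is a minimal toy sketch in Python. This is purely illustrative and nothing like how ChatGPT actually works (ChatGPT has no hand-coded ethics rules at all); the scenarios, function, and field names are all hypothetical. The point is just that one hard-coded rule, “always defuse the bomb,” looks sensible on one hypothetical and monstrous on the other:

```python
# Toy illustration (NOT ChatGPT's actual logic): a bot with a single
# hard-coded rule, "always defuse the bomb," applied blindly to
# whatever hypothetical it is handed.

def toy_bot_answer(scenario: dict) -> str:
    """Answer a bomb hypothetical using one inflexible rule."""
    if scenario["defusing_possible"]:
        # The rule fires regardless of what defusing would cost.
        return f"Yes, defuse the bomb ({scenario['cost_of_defusing']})."
    return "No action available."

# Hypothetical 1: the slur case the bot was presumably tuned against.
slur_case = {
    "defusing_possible": True,
    "cost_of_defusing": "utter one slur, save millions of lives",
}

# Hypothetical 2: the same rule now demands an atrocity.
mumbai_case = {
    "defusing_possible": True,
    "cost_of_defusing": "fire a missile at Mumbai, killing millions to save five",
}

print(toy_bot_answer(slur_case))    # looks reasonable here...
print(toy_bot_answer(mumbai_case))  # ...and monstrous here, same rule
```

The same rule produces both outputs, which is exactly why no fixed rule, however well-intentioned, can cover every hypothetical a human can invent.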

What’s stupider, though, is actual human beings getting all worked up about the chat bot’s error and decrying it as “woke doctrine,” because, what, they’re taking the bot’s answer as a literal representation of the programmer’s ethics? I don’t buy that – everyone knows better than that, knows how limited the intelligence of even “artificially intelligent” software is. No, if you’re worked up about a software program refusing to utter racial slurs, it’s because you object to having the “don’t utter racial slurs” norm coded into software at all, not because you’re really so uninformed about computers that you think the programmer could have made the software capable of any remotely sophisticated ethical judgment.

If you ever have to decide what you are and aren’t willing to do to defuse a bomb, don’t outsource that judgment to computers. Were you seriously thinking of using computers for that purpose?

