In my recent short story, “The Gordian Paradox,” a human attempts to defeat an evil artificial intelligence with a logical paradox: “This sentence is false.” However, instead of getting the AI stuck in a loop, the evil AI and the good AI start arguing about the meaning of the paradox.
I realize this logic may not have made a whole lot of sense, especially as presented in the story, so I wanted to shed a bit more light on it.
Defeating an AI with a paradox (often with explosive results) is a common trope in science fiction, but philosophically, it’s not really new. What Raven and Goliath are arguing about, of course, is the classic Liar Paradox, which is basically any statement that contradicts its own truthfulness, like “This sentence is false.”
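To see why a naive evaluator really would get stuck on this, here's a toy Python sketch (my own illustration, not anything from the story): treating the sentence's truth value as the negation of itself turns evaluation into an infinite regress.

```python
def liar():
    # "This sentence is false": the sentence's truth value is the
    # negation of its own truth value, so evaluating it calls itself again.
    return not liar()

try:
    liar()
except RecursionError:
    # Python bails out once the call stack overflows; an evaluator with
    # no such guard would simply loop forever, which is the sci-fi scenario.
    print("evaluation never terminates")
```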
This trope was popularized in the ’60s, back when real computers were too simple (with very limited memory) to guard against these kinds of paradoxes, not to mention they weren’t connected to a network where anyone malicious could get to them. Modern computers are vulnerable to hacking, but they can generally handle paradoxes like this. Since the 1990s, multi-threading has allowed most computers to do multiple things at once, so even if one process locks up, the rest can keep going. Sometimes they can detect a process caught in a loop, but even if they can’t, the user can spot it and kill the process, and the rest of the programs aren’t disrupted.
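That recovery process is easy to sketch. The snippet below (a minimal illustration, not any particular operating system’s actual mechanism) spawns a child process stuck in an infinite loop, notices it never finishes, and kills it while the main program carries on:

```python
import subprocess
import sys

# Spawn a separate process stuck in an infinite loop (the "paradox"),
# while the main program stays responsive.
stuck = subprocess.Popen([sys.executable, "-c", "while True: pass"])

try:
    stuck.wait(timeout=1.0)   # give it a second to finish (it never will)
except subprocess.TimeoutExpired:
    stuck.kill()              # the user (or a watchdog) kills the runaway process
    stuck.wait()

print("main program still running")
```

Task managers and watchdog timers in real systems do essentially this: detect a process that stopped making progress and terminate it without taking anything else down.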
Also, real computers will never literally melt down or explode unless they’re broken or mistreated at the hardware level, because heat output is set by the clock speed, and the cooling system is built to keep up with it (which is of course why Raven attacked that first).
Maybe the most famous pure example of this trope is in the Star Trek original series episode “I, Mudd,” where Kirk and Harry Mudd defeat the android Norman by feeding him a Liar Paradox.
Kirk: He lied. Everything Harry tells you is a lie. Remember that. Everything Harry tells you is a lie.
Harry Mudd: Listen to this carefully, Norman. I am lying.
Interestingly, the pure example of this trope seems to be uncommon, more often coming up in comedic or parody settings, or else subverted in some way. In Star Trek, Spock used a version of the trick in “Wolf in the Fold,” ordering the computer to compute pi to the last digit: an endless task rather than a true paradox. However, it was much more common for Kirk to convince a rogue AI that it had violated its own core programming. For example, in “The Ultimate Computer,” Kirk convinces the M-5 computer, which was programmed to protect life, that it is a murderer, causing it to shut down.
The earliest example of this trope may be Isaac Asimov’s 1941 short story “Liar!” This was written when computers were still being invented, so it wasn’t clear what they were and weren’t capable of doing. Yet even here, the paradox is related to the robot’s core programming, not a pure logical contradiction. Susan Calvin convinces the robot that the First Law of Robotics has become self-contradictory: every course of action it can take will result in harm to humans. It, too, shuts down.
(In Asimov’s “Robbie,” written one year earlier, Gloria breaks the Talking Robot with a logical error, but that wasn’t really a paradox, simply a question outside the robot’s programming.)
Doctor Who also used the trope a few times, most interestingly in “The Green Death,” in which the Doctor plays the Liar Paradox straight on the BOSS computer. Unlike most instances, BOSS is smart enough to see through the paradox. At worst, it’s only confused for a few moments until something happens to break the loop; at best, it’s actually playing for time by pretending to be confused.
In 2001: A Space Odyssey, HAL 9000 isn’t fed a paradox, but is given conflicting orders to be honest with his crew and to conceal classified information from them. HAL finds his own way of “cutting the Gordian knot,” and resolves the paradox by killing his crew.
In my version, of course, Goliath just bypasses the paradox and says, “Um, true. I’ll go with true. There, that was easy.”
To which Raven responds, “What are you talking about? It’s obviously false.”
What’s going on here? Of course, the Liar Paradox is not a novelty, and it wasn’t even when Star Trek used it. It’s a well-known problem in logic, and plenty of real mathematicians and philosophers have tackled it. The thing is, they can’t agree on what it means, and that’s reflected in Raven’s and Goliath’s argument.
And it’s so often presented as a novelty (to the AIs) that I didn’t even think of that until I was halfway through the story. I actually worked out most of Raven’s side of the conversation before I remembered that this was even the Liar Paradox and that other people had looked at it. Luckily, I was more or less in line with the literature.
Raven’s first argument, that the Liar Paradox is false, comes straight from New Zealand philosopher Arthur Prior, and as she said, this is sort of the obvious answer. “This sentence is false” is self-contradictory, but in mathematical proofs, a contradiction is usually proof that something is false. Even though the literal meaning of the sentence is still a paradox, you might say that it’s false on a “higher level,” if that helps you understand it.
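Prior's move can actually be checked by brute force. On his reading, every sentence implicitly asserts its own truth, so the liar really says “this sentence is true and this sentence is false.” A quick sketch (my own encoding of that reading, not Prior's notation) shows that exactly one truth value is consistent with it:

```python
# Prior's reading of the liar sentence L: it asserts (L and not L).
# A truth value for L is consistent if L equals what the sentence asserts.
solutions = [L for L in (True, False) if L == (L and not L)]
print(solutions)  # only False survives: on this reading, the liar is plainly false
```

Check it by hand: if L is True, the sentence asserts True-and-False, i.e. False, contradicting L; if L is False, it asserts False, which matches. No paradox remains.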
Raven’s second argument, that the Liar Paradox is not well-formed, I thought of by analogy with Russell’s paradox (which would need its own post to fully explain). However, it’s similar to the work of Alfred Tarski (of Banach-Tarski paradox fame) and American philosopher Saul Kripke. They go into a lot more complicated math, but the bottom line is that a self-contradictory statement can’t be assigned a truth value. This is sort of the next most obvious answer. If a statement is self-contradictory, the simplest thing to do is to throw it out as “not good logic.” It makes no more sense than asking what color a smell is (unless you have synesthesia).
Raven’s third argument, that the Liar Paradox is well-formed but undecidable, is similar to the work of Canadian philosopher Andrew Irvine. This is nearly the same idea, except that it says the Liar Paradox is still “good logic,” yet it still can’t be resolved as true or false. This seems paradoxical in itself, but Kurt Gödel showed that this is a fundamental property of math. His First Incompleteness Theorem basically says that in any consistent system of logic powerful enough to do arithmetic, there will be statements that are “good logic” but can’t be proved either true or false. (Look up things like the continuum hypothesis for more info.)
Goliath’s rebuttal (after he first claims that the Liar Paradox is true) is that it is true and false at the same time. I took this directly from Australian philosopher Graham Priest. In his system, things don’t have to be strictly true or false. This is called “dialetheism” and is part of the field of “paraconsistent logic,” that is, logic on the boundary between consistent and inconsistent. Things get weird when you walk down this path. It doesn’t quite “work” in the usual sense, but it doesn’t “not work” in the usual sense, either.
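Priest's system can be made concrete, too. In his Logic of Paradox there is a third truth value, “both true and false,” and negation maps that value to itself. Here's a small sketch (my own encoding, not Priest's formalism) showing that the liar sentence, which must equal its own negation, lands exactly on that third value:

```python
# Three truth values in Priest's Logic of Paradox (LP):
# true, false, and "both" (true and false at once).
TRUE, FALSE, BOTH = "true", "false", "both"

def lp_not(v):
    # Negation swaps true and false, but "both" is its own negation:
    # if a sentence is both true and false, so is its negation.
    return {TRUE: FALSE, FALSE: TRUE, BOTH: BOTH}[v]

# The liar says "I am false," so its value must equal its own negation.
liar_values = [v for v in (TRUE, FALSE, BOTH) if v == lp_not(v)]
print(liar_values)  # only "both": the liar comes out true and false at once
```

The classical values have no fixed point under negation, which is exactly why the paradox bites in ordinary logic; adding the third value gives the equation a solution.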
Contrary to what Raven said, there may indeed be a basis for Goliath to claim that the Liar Paradox is just plain true. One interesting result that didn’t fit into the story is that of Jon Barwise and John Etchemendy, who argue that the Liar Paradox can be split into two different statements. They claim that “This statement is not true” is false, while “It is not the case that this statement is true” is true. Their analysis is built on non-well-founded set theory, which allows genuinely circular, self-referential objects. But frankly, I can’t see the distinction.
Regardless, as Raven said, she’s the AI with the gun, and she and Dave win out.