Gelernter appears to swallow whole what I'll call the Original Statement of John Searle's "Chinese room" thought experiment. The Original Statement should be distinguished from succeeding restatements because it, unlike them, is transparently fallacious. (I think that the restatements also fail to make the point Searle and others hope they will, but I agree with Rorty that their proponents and opponents beg all questions against each other. Or almost all.)
In the Original Statement, Searle asks us to imagine that someone has devised a computer program that can pass the "Turing test" in Chinese. That is, a native Chinese speaker typing questions and remarks into a computer and receiving replies generated by the program would be unable to tell whether or not she was actually instant-messaging another person. Now suppose that, instead of executing the program on a computer, Searle executes it by hand. He's locked in a room -- the Chinese room -- and sets of Chinese symbols are slid to him under the door. According to instructions in a thick manual, he correlates the symbols he receives with another set of Chinese symbols, which he slides back under the door -- the program's output.
Searle doesn't understand a word of Chinese; he's just lining up symbols with symbols (a process that may require a few pencil-and-paper calculations). And from this he concludes that the room doesn't understand Chinese either.
Now, I would have thought that the fallacy of that conclusion was obvious, but history has shown that it isn't. Who cares whether Searle can understand Chinese? He's just a small and not very important part of the system -- what Dan Dennett has called a "meat servo" -- and it's the system that understands Chinese.
Searle's role is analogous to that of the read/write head in the magnetic-tape memory of an old computer -- or perhaps the laser diode in the CD tray of a modern-day Dell. His job is just to fetch data and shuttle it where he's told to. Saying that the Chinese room can't understand Chinese because Searle can't is like saying that my computer can't play chess because the diode in the CD tray can't.
In the paper in which he proposed the Chinese-room thought experiment, Searle actually anticipated this objection, which he sensibly called the "systems reply" (a fact that might make you wonder why he bothered with the Original Statement at all). I don't find his rejoinder to the systems reply convincing, but for present purposes, that's irrelevant. Because Gelernter doesn't even get that far.
After declaring, "I believe that Searle's argument is absolutely right", Gelernter goes on to propose a thought experiment of his own, one that runs, in part, as follows:
Of course, we can't know literally what it's like to be a computer executing a long sequence of instructions. But we know what it's like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it's like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That's what it's like.

Well, no, Dave, that's not what it's like. Again, that's what it's like to be the CPU. But the CPU, like Searle in the Chinese room, is just a small part of the system.
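A side note for the computer-minded: Gelernter's card sorting is a fair picture of the instruction-level view, and it's worth seeing how little that view contains. What follows is a minimal sketch of my own, in Python, not anything drawn from his essay:

    # A toy insertion sort: the "card sorting" view of computation.
    # Every step is either a comparison or a data move; nothing at this
    # level says what the larger program is doing with the data.
    def insertion_sort(cards):
        for i in range(1, len(cards)):
            j = i
            while j > 0 and cards[j] < cards[j - 1]:  # which card comes first?
                # slip one card in front of another
                cards[j], cards[j - 1] = cards[j - 1], cards[j]
                j -= 1
        return cards

Every line is a comparison or a data move; none of it mentions chess, Chinese, or conversation. It describes the part, not what the whole system is doing.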
Gelernter's argument is analogous to saying, "The corpus callosum shuttles electrical signals between hemispheres of the brain. You want to know what it's like to be a corpus callosum? Well, imagine standing next to a computer with a USB thumb drive plugged into it. When the computer sounds an alert, you take the USB drive out and stick it in another computer. When that computer sounds an alert, you stick the USB drive back in the first computer. That's what it's like to be a corpus callosum. Therefore humans can never be conscious."
Notice that I am not here making the standard argument that neurons and neuronal processes, taken in isolation, are every bit as mechanistic as logic gates and binary operations. (I'll take that one up tomorrow.) Instead, I'm reproducing what we might call the synecdochal fallacy, common to both Searle and Gelernter, of substituting the part for the whole.
I'm sure that at this point I've taxed the patience of anyone who's not as much of a phil o' mind nerd as I am, so I'll stop for now. But tomorrow I'll address a couple of Gelernter's fallacious arguments that are all his own.
AMENDMENT (6/28/07, 5:20 p.m. ET):
A correspondent (who shall remain nameless) objects to the following line:
"He's just a small and not very important part of the system -- what Dan Dennett has called a 'meat servo' -- and it's the system that understands Chinese."The objection is this:
"It's no good saying, 'The system understands,' because that's what's at issue."It's a good point and may suggest that philosophy, which demands an incredibly high level of linguistic precision, should not be undertaken in blogs. But I plan on ignoring that suggestion, in the hope that my readers will read me with charity.
What I should have said, instead of "it's the system that understands Chinese", is
"It's the system's ability to understand Chinese that's in question."The point was just that the Chinese-room thought experiment falls prey to the synecdochal fallacy. I didn't mean to imply that the refutation of the Chinese-room argument proves the possibility of conscious machines.
2 comments:
Great post, Larry. I think I'm in agreement with you 100%. It all went wrong for Dr. Gelernter when he accepted the Chinese Room argument hook, line, and sinker. Everything follows from that.
Ever since I first encountered Searle's Chinese Room thought experiment as a response to the Turing Test, I have felt that it was indeed the "system" as a whole that understands Chinese. I appreciated your explanation of the "systems reply," and your corpus callosum analogy in particular.
Further, I think the arguments of Searle, Gelernter and the like are linked to a stubborn refusal to let go of Cartesian dualism that is so deeply ingrained in our cultural mentality. Much like the intelligent design vs evolution debate, the "believers" will use selective evidence and questionable logic to support their position.
Why do you think most people are willing to ascribe conscious thought to fellow human beings (and presumably any intelligent biological alien life we may one day encounter) based upon behavior alone, yet hold artificial intelligence to an entirely different standard? Is it because we suppose we completely understand the mechanism behind the AI's behavior, while our own consciousness is still somewhat shrouded in mystery? Will this change as our understanding of the human mind grows and demonstrates unequivocally that conscious thought emerges from physical structures and processes alone (which I believe you and I would be in agreement on)?
I also posted a lengthy response to the original essay in my own blog:
http://artificialminds.blogspot.com/2007/06/were-not-lost-we-just-need-map.html
Brilliant comment, Larry.