Thursday, June 28, 2007

Gelernter Wrapup

A few more remarks about David Gelernter's essay in Technology Review, which I hope won't run as long as the ones I made yesterday but probably will.

First, a little terminology, for anyone who hasn't read or has only skimmed the essay. Gelernter uses the term "consciousness" to denote the possession of what philosophers call qualia. He's not talking about the differences between the brain states of waking and sleeping animals, and he's not talking about self-consciousness -- an animal's ability to recognize itself in a mirror, or to use the states of its own body (including its brain) as subjects for further cognition.

Qualia are (purportedly -- I'd like to think this post casts doubt on the very intelligibility of the idea) the felt character of experience. When my thermostat registers a certain drop in temperature, it throws on the heat. Similarly, when I register a certain drop in temperature, I throw on a sweater. But unlike the thermostat (the story goes), I feel cold. This feeling is not reducible to either the average kinetic energy of the air molecules around me or the physical act of putting on clothing: it's its own thing. On this picture, every human perception or sensation has an associated quale (the singular of qualia): the painfulness of pain, the redness of red things, the coldness of cold. To be conscious, in Gelernter's sense, is to have qualia.
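To put the thermostat's side of the story as baldly as possible, here is a sketch (the setpoint and the names are invented for illustration): its entire functional description is a comparison and an action, and nothing in that description so much as mentions a feeling of coldness.

```python
# A thermostat's whole "response to cold," functionally described:
# read a number, compare it to a threshold, act.

SETPOINT_C = 20.0  # an arbitrary threshold, for illustration only

def thermostat_step(current_temp_c: float, heater_on: bool) -> bool:
    """Return the new heater state given the current temperature."""
    if current_temp_c < SETPOINT_C:
        return True     # registers the drop, throws on the heat
    return heater_on    # otherwise leave things as they are

print(thermostat_step(17.5, False))  # True: the "cold" branch, minus the cold
```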

Gelernter divides artificial-intelligence theorists into two camps: cognitivists and anticognitivists. Cognitivists believe that, if human beings have qualia (an important if!), then a robot that behaves exactly like a human being (even if its body is vinyl and its "brain" is a huge Rube Goldberg machine made of tinker toys) does, too. Anticognitivists -- Gelernter among them -- deny this: on their view, no behavioral simulacrum of a human being, however perfect, is thereby conscious.

Okay, so armed with these distinctions, let's take a look at a couple of Gelernter's initial claims:
(1) "This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious."
(2) "we know our fellow humans are conscious"
Amazingly, these claims occur in immediate succession. How are we to reconcile them? Are human beings not "entities"? Let's assume they are. It follows that Gelernter is defending some form of "knowledge" that stands in no need of -- indeed, does not even admit of the possibility of -- objective justification.

What are we to do with claims to such knowledge? Are we under any obligation to take them seriously? Do they even require rebuttal? If they aren't anchored in any objective criteria at all, how could they be rebutted? Indeed, they can't. They can simply be denied.

And this is the position in which cognitivists and anticognitivists find themselves: simply denying each other's unfounded knowledge claims. The anticognitivist says, "We know our fellow humans are conscious." And the cognitivist says, "No we don't -- at least, not in any way that we don't also know that a perfect behavioral simulacrum of a human is conscious."

Gelernter refuses to acknowledge, however, that he and his disputants have reached such an impasse. He insists that the consciousness of his fellows is something he deduces. "We know our fellow humans are conscious," Gelernter says,
but how?...You know the person next to you is conscious because he is human. You're human, and you're conscious--which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.
If there is an argument here, however, it is entirely circular: the sole criterion for ascribing consciousness to our fellow humans is -- they're human!

Gelernter then moves on to the Chinese room, which I discussed yesterday. After rehearsing Searle's argument, however, he adds that
we don't need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?

Well, what does a computer do? It executes "machine instructions"--low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), "branches" (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
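To make that list concrete -- and because the concreteness matters for the rejoinder below -- here is a toy machine of my own devising whose entire repertoire is those four kinds of instruction. It is a cartoon, not a real instruction set, but it is not different in kind from one.

```python
# A toy machine whose whole repertoire is Gelernter's four primitives:
# arithmetic (ADD), comparison (CMP), branching (BRZ), data movement (MOV).

def run(program, regs):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":                      # arithmetic: add two numbers
            a, b, dst = args
            regs[dst] = regs[a] + regs[b]
        elif op == "CMP":                    # comparison: which is larger?
            a, b, dst = args
            regs[dst] = 1 if regs[a] > regs[b] else 0
        elif op == "BRZ":                    # branch if a register is zero
            src, target = args
            if regs[src] == 0:
                pc = target
                continue
        elif op == "MOV":                    # data movement: copy a value
            src, dst = args
            regs[dst] = regs[src]
        pc += 1
    return regs

# Compute max(x, y) out of nothing but these primitives.
regs = run(
    [("CMP", "x", "y", "t"),   # t = 1 if x > y else 0
     ("BRZ", "t", 4),          # if x <= y, jump to the "best = y" line
     ("MOV", "x", "best"),     # best = x
     ("BRZ", "zero", 5),       # unconditional jump past the next instruction
     ("MOV", "y", "best")],    # best = y
    {"x": 7, "y": 4, "t": 0, "best": 0, "zero": 0},
)
print(regs["best"])  # 7
```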
The obvious cognitivist rejoinder, as I mentioned yesterday, is that neurons just relay electrical signals, faster or slower, and emit higher concentrations of this or that neurotransmitter. Everything brains accomplish is built out of these primitive operations. If consciousness can emerge from the accumulation of mechanistic neural processes, why can't it similarly emerge from the accumulation of mechanistic computational processes? Again, Gelernter responds by simply identifying consciousness and humanness, without any argumentative support:
The fact is that the conscious mind emerges when we've collected many neurons together, not many doughnuts or low-level computer instructions.
I.e., the sole criterion for ascribing consciousness to collections of neurons, rather than collections of logic gates, is -- they're neurons! QED.
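And the parallel can be drawn just as concretely. A threshold-unit cartoon of a neuron, in the spirit of McCulloch and Pitts (emphatically a cartoon, not a claim about real neuroscience), is every bit as primitive an operation as the toy machine's ADD or CMP -- and wiring a few of them together already yields behavior that none of them exhibits alone.

```python
# A threshold-unit cartoon of a neuron: weigh the incoming signals,
# compare the sum to a threshold, fire or don't.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three such units wired together compute XOR -- an accumulation of
# mechanistic pieces doing something none of the pieces does on its own.
def xor(a, b):
    h1 = neuron([a, b], [1, 1], 1)       # fires if a OR b
    h2 = neuron([a, b], [1, 1], 2)       # fires if a AND b
    return neuron([h1, h2], [1, -1], 1)  # fires if OR but not AND

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```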

If Gelernter were to read these posts and conclude that his essay does, in fact, consist entirely of non sequiturs and circular arguments -- and I think neither the reading nor the concluding is likely -- I would nonetheless expect him to maintain his anticognitivist stance. While cognitivist arguments can, I believe, show that anticognitivist arguments prove nothing, neither do they prove anything themselves. But as a Wittgensteinian pragmatist, I take this to show that the distinction between cognitivism and anticognitivism is meaningless. I agree with Gelernter's assertion that "there is no objective way to tell whether some entity is conscious", whether, ultimately, he himself does or not. And I think that the upshot is that the very idea of consciousness -- in his sense, consciousness as the possession of qualia -- is one on which we can get no intellectual purchase.

Wednesday, June 27, 2007

Uplift the bytecode!

MIT's Technology Review magazine has published a long essay by Yale computer scientist David Gelernter that addresses some of the best-trodden arguments in the philosophy of mind with somewhat less aplomb than you might expect from a bright 11-year-old. This is mildly distressing to me, both because the central topic of the essay -- the possibility of conscious machines -- is one to which I've devoted a lot of time and energy and because in my day job, I'm a copy editor at Technology Review. So what follows may be treasonous. On the other hand, I've read the essay carefully, several times, so I'm intimately acquainted with all its flaws. (I should add that Gelernter appears to have been delightful to work with, and that for all I know, he's a brilliant computer scientist. But if he is, then his susceptibility to circular argument and non sequitur suggests that there may be more to the notion of philosophical training than we Wittgensteinians/Rortians tend to think there is.)

Gelernter appears to swallow whole what I'll call the Original Statement of John Searle's "Chinese room" thought experiment. The Original Statement should be distinguished from succeeding restatements because it, unlike them, is transparently fallacious. (I think that the restatements also fail to make the point Searle and others hope they will, but I agree with Rorty that their proponents and opponents beg all questions against each other. Or almost all.)

In the Original Statement, Searle asks us to imagine that someone has devised a computer program that can pass the "Turing test" in Chinese. That is, a native Chinese speaker typing questions and remarks into a computer and receiving replies generated by the program would be unable to tell whether or not she was actually instant-messaging another person. Now suppose that, instead of executing the program on a computer, Searle executes it by hand. He's locked in a room -- the Chinese room -- and sets of Chinese symbols are slid to him under the door. According to instructions in a thick manual, he correlates the symbols he receives with another set of Chinese symbols, which he slides back under the door -- the program's output.

Searle doesn't understand a word of Chinese; he's just lining up symbols with symbols (a process that may require a few pencil-and-paper calculations). And from this he concludes that the room doesn't understand Chinese either.
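A caricature makes the division of labor vivid. Suppose, purely for illustration, that the manual really were nothing but a giant lookup table (genuine conversational competence would need vastly more than this, and the names below are mine, not Searle's):

```python
# A caricature of the Chinese room, with the "manual" shrunk to a
# lookup table that correlates symbols with symbols.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "Lovely."
}

def man_in_the_room(slip_under_door: str) -> str:
    """The occupant's entire job: match the incoming symbols against the
    manual and slide the prescribed symbols back out. Nothing in this step
    requires knowing what any of the symbols means."""
    return RULE_BOOK.get(slip_under_door, "对不起，我不明白。")  # canned fallback

print(man_in_the_room("你好吗？"))

# Whatever understanding there is or isn't here is a question about the
# whole arrangement -- rule book, room, and all -- not about the matching
# step the occupant performs.
```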

Now, I would have thought that the fallacy of that conclusion was obvious, but history has shown that it isn't. Who cares whether Searle can understand Chinese? He's just a small and not very important part of the system -- what Dan Dennett has called a "meat servo" -- and it's the system that understands Chinese.

Searle's role is analogous to that of the read/write head in the magnetic-tape memory of an old computer -- or perhaps the laser diode in the CD tray of a modern-day Dell. His job is just to fetch data and shuttle it where he's told to. Saying that the Chinese room can't understand Chinese because Searle can't is like saying that my computer can't play chess because the diode in the CD tray can't.

In the paper in which he proposed the Chinese-room thought experiment, Searle actually anticipated this objection (which might make you wonder why he bothered with the Original Statement at all) and sensibly dubbed it the "systems reply". I don't find his rejoinder to the systems reply convincing, but for present purposes, that's irrelevant. Because Gelernter doesn't even get that far.

After declaring, "I believe that Searle's argument is absolutely right", Gelernter goes on to propose a thought experiment of his own, one that runs, in part, as follows:
Of course, we can't know literally what it's like to be a computer executing a long sequence of instructions. But we know what it's like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it's like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That's what it's like.
Well, no, Dave, that's not what it's like. Again, that's what it's like to be the CPU. But the CPU, like Searle in the Chinese room, is just a small part of the system.
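And the card ritual really is a faithful description of life at that level. Transcribed into code -- an insertion sort, though any sort would do -- it is nothing but Gelernter's comparisons and data movements, which is exactly why it only tells you what the CPU's afternoon is like:

```python
import random

def sort_the_deck(deck):
    """Insertion sort: the card ritual verbatim -- comparisons ("which card
    comes first?") and data movement ("slip one card in front of another")."""
    for i in range(1, len(deck)):
        card, j = deck[i], i - 1
        while j >= 0 and deck[j] > card:   # comparison
            deck[j + 1] = deck[j]          # data movement
            j -= 1
        deck[j + 1] = card                 # slip the card into place
    return deck

deck = list(range(52))
for _ in range(3):          # "Repeat the procedure, ad infinitum" (abridged)
    random.shuffle(deck)
    sort_the_deck(deck)
print(deck[:5])  # [0, 1, 2, 3, 4]
```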

Gelernter's argument is analogous to saying, "The corpus callosum shuttles electrical signals between hemispheres of the brain. You want to know what it's like to be a corpus callosum? Well, imagine standing next to a computer with a USB thumb drive plugged into it. When the computer sounds an alert, you take the USB drive out and stick it in another computer. When that computer sounds an alert, you stick the USB drive back in the first computer. That's what it's like to be a corpus callosum. Therefore humans can never be conscious."

Notice that I am not here making the standard argument that neurons and neuronal processes, taken in isolation, are every bit as mechanistic as logic gates and binary operations. (I'll take that one up tomorrow.) Instead, I'm reproducing what we might call the synecdochal fallacy, common to both Searle and Gelernter, of substituting the part for the whole.

I'm sure that at this point I've taxed the patience of anyone who's not as much of a phil o' mind nerd as I am, so I'll stop for now. But tomorrow I'll address a couple of Gelernter's fallacious arguments that are all his own.

AMENDMENT (6/28/07, 5:20 p.m. ET):

A correspondent (who shall remain nameless) objects to the following line:
"He's just a small and not very important part of the system -- what Dan Dennett has called a 'meat servo' -- and it's the system that understands Chinese."
The objection is this:
"It's no good saying, 'The system understands,' because that's what's at issue."
It's a good point and may suggest that philosophy, which demands an incredibly high level of linguistic precision, should not be undertaken in blogs. But I plan on ignoring that suggestion, in the hope that my readers will read me with charity.

What I should have said, instead of "it's the system that understands Chinese", is
"It's the system's ability to understand Chinese that's in question."
The point was just that the Chinese-room thought experiment falls prey to the synecdochal fallacy. I didn't mean to imply that the refutation of the Chinese-room argument proves the possibility of conscious machines.

Tuesday, June 26, 2007

Title: Title

The name of this blog comes from a poem by Philip Larkin, the conclusion of which Virginia Heffernan reproduces here. As Ginny points out (does anyone call her Ginny? I don't know; I don't know her. But "Virginia" sounds too formal for a blogospheric cross reference, to say nothing of "Heffernan".), Richard Rorty made much of the phrase "blind impress" in his book Contingency, Irony, and Solidarity, which is why it lodged in my mind (although I had my own Larkin fixation before I started reading Rorty, thank you very much). Before my band, the Hopeful Monsters, fell apart recently, we had planned to cut a new album, and I'd been secretly scheming to call it Blind Impress. Maybe I'll still use that title if I ever make another CD, but in the meantime, thanks to my friend Tim, I guess I've found another way to use it as a personal slogan.

Rorty used "blind impress" to describe collections of his titular contingencies -- the biases, beliefs, obsessions, and convictions that a person acquires over a lifetime. His point in using the word "contingency" is that the forces that shape us are arbitrary and historically conditioned; he hoped the idea of a "blind impress" would replace that of an "intrinsic nature", just as the notion of a "historically conditioned bias" would replace that of "apprehension of ahistorical truth/virtue through the uniquely human faculty of reason".

So there are several reasons that I think Blind Impress makes a good title for my blog. The first, obvious one is that I'm going to be writing about, among other things, philosophy and literature, and I'm sympathetic to both Rorty's philosophical stance and Larkin's aesthetics. Another is that Rorty's notion of contingency spares me the trouble of trying to find something common to music, literature, film, and philosophy that lets me rope them off from the rest of culture -- from, say, painting and economics. There is no such common feature: these just happen to be the things I'm interested in. (Actually, I'm interested in painting and economics, too. I just don't feel I have the authority to address them. In the four areas I'm restricting myself to, I think I know what I'm talking about.)

Another reason is that I want to emphasize that, in making the aesthetic and philosophical judgments that I am surely going to make, I am aware that I'm simply indicating my own historically conditioned biases. (If I hadn't wanted to be thought smart, I probably wouldn't have fought my way through Ulysses for the first time; if I'd been better at sports in junior high, I probably wouldn't have placed so much value on being thought smart; etc.) That of course raises the question of why I would consider it worthwhile to attempt to broadcast those judgments in the first place. All I can say is, I've profited from engaging with other people's attempts to justify their own arbitrary biases, and I hope other people will profit from engaging with mine.

Finally, there's the reason that I wanted to use "Blind Impress" as an album title in the first place. This may not end up having a lot to do with this blog, but the songs that I've been writing for the last couple years adopt, I think, a slightly removed perspective on their subjects. They pull back a little from the immediate passions or contending systems of values that they describe and attempt to locate them in a larger ecosystem of contingencies. Some people may consider that a defect, but whatever. It's the stance that I've been historically conditioned to adopt.

Monday, June 25, 2007

Tim told me to create a blog

I'm just doing what I'm told.