First, a little terminology, for anyone who hasn't read or has only skimmed the essay. Gelernter uses the term "consciousness" to denote the possession of what philosophers call qualia. He's not talking about the differences between the brain states of waking and sleeping animals, and he's not talking about self-consciousness -- an animal's ability to recognize itself in a mirror, or to use the states of its own body (including its brain) as subjects for further cognition.
Qualia are (purportedly -- I'd like to think this post casts doubt on the very intelligibility of the idea) the felt character of experience. When my thermostat registers a certain drop in temperature, it throws on the heat. Similarly, when I register a certain drop in temperature, I throw on a sweater. But unlike the thermostat (the story goes), I feel cold. This feeling is not reducible to either the average kinetic energy of the air molecules around me or the physical act of putting on clothing: it's its own thing. On this picture, every human perception or sensation has an associated quale (the singular of qualia): the painfulness of pain, the redness of red things, the coldness of cold. To be conscious, in Gelernter's sense, is to have qualia.
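To see how little the functional story contains, here is a minimal sketch -- the setpoint and the names are my own illustrative inventions, nothing from Gelernter's essay -- of everything that story attributes to thermostat and human alike: register an input, compare it to a threshold, produce an output. Notice that the feeling of cold appears nowhere in it; that absence is exactly what the qualia-theorist is pointing at.

    # A toy model of the functional story: register a temperature,
    # compare it to a threshold, respond. The setpoint is an
    # arbitrary illustration, not anything from the essay.
    SETPOINT_C = 18.0

    def thermostat(temp_c):
        # The thermostat's entire "inner life": one comparison.
        return "heat on" if temp_c < SETPOINT_C else "heat off"

    def person(temp_c):
        # Functionally described, the person does the same thing.
        # The felt coldness -- the quale -- appears nowhere in this
        # description; that is precisely the point at issue.
        return "put on a sweater" if temp_c < SETPOINT_C else "carry on"

    print(thermostat(12.0))  # -> heat on
    print(person(12.0))      # -> put on a sweater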
Gelernter divides artificial-intelligence theorists into two camps: cognitivists and anticognitivists. Cognitivists believe that, if human beings have qualia (an important if!), then a robot that behaves exactly like a human being (even if its body is vinyl and its "brain" is a huge Rube Goldberg machine made of Tinker Toys) does, too. Anticognitivists deny this: on their view, no amount of behavioral equivalence can establish that the robot is conscious.
Okay, so armed with these distinctions, let's take a look at a couple of Gelernter's initial claims:
(1) "This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious."
(2) "we know our fellow humans are conscious"Amazingly, these claims occur in immediate succession. How are we to reconcile them? Are human beings not "entities"? Let's assume they are. It follows that Gelernter is defending some form of "knowledge" that stands in no need of -- indeed, does not even admit of the possibility of -- objective justification.
What are we to do with claims to such knowledge? Are we under any obligation to take them seriously? Do they even require rebuttal? If they aren't anchored in any objective criteria at all, how could they be rebutted? Indeed, they can't. They can simply be denied.
And this is the position in which cognitivists and anticognitivists find themselves: simply denying each other's unfounded knowledge claims. The anticognitivist says, "We know our fellow humans are conscious." And the cognitivist says, "No we don't -- at least, not in any way that we don't also know that a perfect behavioral simulacrum of a human is conscious."
Gelernter refuses to acknowledge, however, that he and his disputants have reached such an impasse. He insists that the consciousness of his fellows is something he deduces. "We know our fellow humans are conscious," Gelernter says,
but how?...You know the person next to you is conscious because he is human. You're human, and you're conscious--which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.

If there is an argument here, however, it is entirely circular: the sole criterion for ascribing consciousness to our fellow humans is -- they're human!
Gelernter then moves on to the Chinese room, which I discussed yesterday. After rehearsing Searle's argument, however, he adds that
we don't need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?

The obvious cognitivist rejoinder, as I mentioned yesterday, is that neurons just relay electrical signals, faster or slower, and emit higher concentrations of this or that neurotransmitter. Everything brains accomplish is built out of these primitive operations. If consciousness can emerge from the accumulation of mechanistic neural processes, why can't it similarly emerge from the accumulation of mechanistic computational processes? Again, Gelernter responds by simply identifying consciousness and humanness, without any argumentative support:
Well, what does a computer do? It executes "machine instructions"--low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), "branches" (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
The fact is that the conscious mind emerges when we've collected many neurons together, not many doughnuts or low-level computer instructions.

I.e., the sole criterion for ascribing consciousness to collections of neurons, rather than collections of logic gates, is -- they're neurons! QED.
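To put the cognitivist's parallel in concrete terms: a neuron's basic operation -- sum weighted inputs, compare the total to a firing threshold, fire or don't -- can be written using nothing but the primitive instructions Gelernter himself lists. The toy model below is my own illustration (the weights and threshold are arbitrary), not anything either camp has endorsed:

    # A toy "neuron" built from exactly the primitives Gelernter names:
    # arithmetic, comparison, a branch, and data movement. Weights and
    # threshold are arbitrary illustrative values.
    def neuron(inputs, weights, threshold):
        total = 0.0
        for x, w in zip(inputs, weights):
            total = total + x * w   # arithmetic: accumulate weighted inputs
        if total >= threshold:      # comparison and branch
            output = 1              # data movement: store the result
        else:
            output = 0
        return output               # "fire" (1) or stay quiet (0)

    # Three inputs, each either firing (1) or not (0):
    print(neuron([1, 0, 1], [0.6, 0.4, 0.5], threshold=1.0))  # -> 1: fires

Whether heaping up such primitives could ever amount to consciousness is, of course, exactly what the two camps dispute. The sketch shows only that the neural primitives and the computational primitives are of the same mechanical kind -- which is all the cognitivist rejoinder requires.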
If Gelernter were to read these posts and conclude that his essay consists entirely of non sequiturs and circular arguments -- and I think neither his reading them nor his so concluding is likely -- I would nonetheless expect him to maintain his anticognitivist stance. While cognitivist arguments can, I believe, show that anticognitivist arguments prove nothing, neither do they prove anything themselves. But as a Wittgensteinian pragmatist, I take this to show that the distinction between cognitivism and anticognitivism is meaningless. I agree with Gelernter's assertion that "there is no objective way to tell whether some entity is conscious", whether, ultimately, he himself does or not. And I think that the upshot is that the very idea of consciousness -- in his sense, consciousness as the possession of qualia -- is one on which we can get no intellectual purchase.