Thursday, June 28, 2007

Gelernter Wrapup

A few more remarks about David Gelernter's essay in Technology Review, which I hope won't run as long as the ones I made yesterday but probably will.

First, a little terminology, for anyone who hasn't read or has only skimmed the essay. Gelernter uses the term "consciousness" to denote the possession of what philosophers call qualia. He's not talking about the differences between the brain states of waking and sleeping animals, and he's not talking about self-consciousness -- an animal's ability to recognize itself in a mirror, or to use the states of its own body (including its brain) as subjects for further cognition.

Qualia are (purportedly -- I'd like to think this post casts doubt on the very intelligibility of the idea) the felt character of experience. When my thermostat registers a certain drop in temperature, it throws on the heat. Similarly, when I register a certain drop in temperature, I throw on a sweater. But unlike the thermostat (the story goes), I feel cold. This feeling is not reducible to either the average kinetic energy of the air molecules around me or the physical act of putting on clothing: it's its own thing. On this picture, every human perception or sensation has an associated quale (the singular of qualia): the painfulness of pain, the redness of red things, the coldness of cold. To be conscious, in Gelernter's sense, is to have qualia.
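To be concrete about how little is going on in the thermostat's case, here's a toy sketch in Python -- the 20-degree setpoint, the Heater class, and everything else in it are invented purely for illustration:

# A toy thermostat: pure stimulus-response, with no "felt coldness" anywhere in it.

class Heater:
    def turn_on(self):
        print("heat on")

    def turn_off(self):
        print("heat off")

class Thermostat:
    def __init__(self, heater, setpoint_celsius=20.0):
        self.heater = heater
        self.setpoint = setpoint_celsius

    def register(self, temperature_celsius):
        # The device's entire "inner life": one comparison, one action.
        if temperature_celsius < self.setpoint:
            self.heater.turn_on()
        else:
            self.heater.turn_off()

Thermostat(Heater()).register(15.0)   # prints "heat on" -- and that's all there is

The qualia story is that when I do the analogous thing -- register the cold, put on the sweater -- something is present over and above this stimulus-response loop: the feeling of cold itself.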

Gelernter divides artificial-intelligence theorists into two camps: cognitivists and anticognitivists. Cognitivists believe that, if human beings have qualia (an important if!), then a robot that behaves exactly like a human being (even if its body is vinyl and its "brain" is a huge Rube Goldberg machine made of Tinkertoys) does, too. Anticognitivists -- Gelernter among them -- deny it.

Okay, so armed with these distinctions, let's take a look at a couple of Gelernter's initial claims:
(1) "This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious."
(2) "we know our fellow humans are conscious"
Amazingly, these claims occur in immediate succession. How are we to reconcile them? Are human beings not "entities"? Let's assume they are. It follows that Gelernter is defending some form of "knowledge" that stands in no need of -- indeed, does not even admit of the possibility of -- objective justification.

What are we to do with claims to such knowledge? Are we under any obligation to take them seriously? Do they even require rebuttal? If they aren't anchored in any objective criteria at all, how could they be rebutted? Indeed, they can't. They can simply be denied.

And this is the position in which cognitivists and anticognitivists find themselves: simply denying each other's unfounded knowledge claims. The anticognitivist says, "We know our fellow humans are conscious." And the cognitivist says, "No we don't -- at least, not in any way that we don't also know that a perfect behavioral simulacrum of a human is conscious."

Gelernter refuses to acknowledge, however, that he and his disputants have reached such an impasse. He insists that the consciousness of his fellows is something he deduces. "We know our fellow humans are conscious," Gelernter says,
but how?...You know the person next to you is conscious because he is human. You're human, and you're conscious--which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.
If there is an argument here, however, it is entirely circular: the sole criterion for ascribing consciousness to our fellow humans is -- they're human!

Gelernter then moves on to the Chinese room, which I discussed yesterday. After rehearsing Searle's argument, however, he adds that
we don't need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?

Well, what does a computer do? It executes "machine instructions"--low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), "branches" (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
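Just to make the "primitive instructions" picture concrete, here's a toy sketch in Python of the kind of machine Gelernter is describing -- the miniature instruction set and the little program below are my own inventions, not anything in his essay, and a real CPU differs in every detail:

# A toy machine: a handful of primitive instructions operating on a list of memory cells.

def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":       # arithmetic: memory[a] + memory[b] -> memory[dst]
            a, b, dst = args
            memory[dst] = memory[a] + memory[b]
        elif op == "MOV":     # data movement: copy memory[src] -> memory[dst]
            src, dst = args
            memory[dst] = memory[src]
        elif op == "JZ":      # branch: if memory[a] is zero, continue at instruction target
            a, target = args
            if memory[a] == 0:
                pc = target
                continue
        pc += 1
    return memory

# Add cells 0 and 1; if the result is zero, skip the copy into cell 3.
print(run([("ADD", 0, 1, 2), ("JZ", 2, 4), ("MOV", 2, 3)], [3, -3, 0, 0]))
# [3, -3, 0, 0] -- the sum was zero, so the branch was taken and nothing was copied

Everything the machine does reduces to steps like these.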
The obvious cognitivist rejoinder, as I mentioned yesterday, is that neurons just relay electrical signals, firing faster or slower, and release greater or lesser concentrations of this or that neurotransmitter. Everything brains accomplish is built out of these primitive operations. If consciousness can emerge from the accumulation of mechanistic neural processes, why can't it similarly emerge from the accumulation of mechanistic computational processes? Again, Gelernter responds by simply identifying consciousness with humanness, without any argumentative support:
The fact is that the conscious mind emerges when we've collected many neurons together, not many doughnuts or low-level computer instructions.
I.e., the sole criterion for ascribing consciousness to collections of neurons, rather than collections of logic gates, is -- they're neurons! QED.

If Gelernter were to read these posts and conclude that, in fact, his essay consisted entirely of non sequiturs and circular arguments, neither of which I think is likely, I would nonetheless expect him to maintain his anticognitivist stance. While cognitivist arguments can, I believe, show that anticognitivist arguments prove nothing, neither do they prove anything themselves. But as a Wittgensteinian pragmatist, I take this to show that the distinction between cognitivism and anticognitivism is meaningless. I agree with Gelernter's assertion that "there is no objective way to tell whether some entity is conscious", whether, ultimately, he himself does or not. And I think that the upshot is that the very idea of consciousness -- in his sense, consciousness as the possession of qualia -- is one on which we can get no intellectual purchase.

2 comments:

Charlie said...

Couldn't we also say that consciousness is the awareness that one is aware of one's self and the world as a whole? I know that sounds recursive and circular...in other words, I am aware that I am aware of myself and also aware of my internal model of the outside world.

Couldn't this then be recreated in an artificial intelligence that maintains an internal model of the world, where that model includes a perception of the AI itself creating the model? Of course, then you could get into a situation where you're creating models of models, ad infinitum, but I think you really only need two steps here...and it is not actually the "self" but the internal model of the "self" that is the conscious entity--the one that has experiences or qualia. Owen Holland explored this concept in a presentation called "Machine Consciousness and Creativity," which I covered quite some time ago in my blog: http://artificialminds.blogspot.com/2005/07/from-deep-blue-to-deep-thoughts.html
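Here's a toy sketch in Python of the two-step structure I mean -- the class names and the "world" are all made up, just to illustrate:

# A toy two-level self-modeling agent: it keeps a model of the world,
# and that model contains a model of the agent doing the modeling.

class SelfModel:
    """The agent's picture of itself as a modeler."""
    def __init__(self):
        self.last_observation = None   # "I am aware that I just observed X"

class WorldModel:
    """The agent's picture of the outside world -- including the agent."""
    def __init__(self):
        self.facts = {}
        self.self_model = SelfModel()  # the model includes the modeler

class Agent:
    def __init__(self):
        self.world_model = WorldModel()

    def observe(self, fact, value):
        # First step: update the model of the world.
        self.world_model.facts[fact] = value
        # Second step: record that "I" made that observation.
        self.world_model.self_model.last_observation = (fact, value)

agent = Agent()
agent.observe("temperature", "cold")
print(agent.world_model.self_model.last_observation)   # ('temperature', 'cold')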

Again, in regards to the discrepancy between Gelernter's methods for determining consciousness in beings and his double-standard for humans, I would make the same points that I did in my comment on your previous post...that this is the result of entrenched dualism, a mystical notion of consciousness being a product of some sort of "spirit," and everyday anthropocentric hypocrisy.

I think your point about the inherent inability to rebut arguments that aren't anchored in objective criteria is well-made, and can be extended to many other fields of debate. There's no use arguing with someone who isn't playing by the same rules of logic.

Larry said...

Hey Charlie,

Thanks for the encouraging comments -- they're stacking up faster than I can respond! Time permitting, I'll reply to both of them here.

Consciousness in the sense in which you just defined it -- let's call it consciousness1 -- is, I think, exactly what cognitive scientists and you and your fellow AI researchers should be investigating. But that's because consciousness1 is tractable to empirical investigation. We could experimentally determine whether, in fact, the brain instantiates a hierarchy of models of the kind you've described, how far that hierarchy extends (Is two steps enough? Can you make do with one?), whether the disruption of one of those models similarly disrupts the capacity for the kinds of first-person judgments and verbal reports that we associate with consciousness, and if so, which model that is, etc., etc. And it's precisely because consciousness1 is something that we can, in principle, investigate scientifically that I took such pains at the beginning of my second post -- dragging in unwelcome philosophical jargon like "qualia" -- to distinguish it from consciousness in Gelernter's sense -- consciousness2. I think it's very important to preserve that distinction. The problem of consciousness1 is one that AI researchers, neurologists, and cognitive scientists are making progress on every day, and it's one that philosophers, with very few exceptions, are ill equipped to talk about in any detail. The problem of consciousness2 -- qualia -- is what the philosopher David Chalmers has rather philocentrically called the "hard problem of consciousness", and it's one on which philosophers have basically made no progress in 400 years.

My own hunch is that we'll ultimately find that different uses of the word "consciousness" are correlated with very different neurological states. I attended a symposium on consciousness at the Harvard Med School a couple years ago at which the cognitive scientist Christof Koch basically washed his hands of consciousness2 and said, "When I say, 'consciousness', this is what I mean." He then made the audience look first at a spinning, multicolored wheel, and then at an image projected on a wall screen. The image was static, a scattering of dots of different sizes; but as we looked at it, different dots kept disappearing -- blinking in and out. They were in our conscious awareness, and then they weren't.

Now, I'm fairly confident that we'll one day have both anatomical and computational models of what was going on in that illusion. I think we'll also have anatomical and computational models of the differences between the waking and sleeping brain, and of the kind of almost-or-maybe-sometimes-subpersonal actions we habitually engage in, like accelerating and braking when we're driving on the highway. If one model turns out to explain all these phenomena, that would be awesome, but I wouldn't be surprised if it didn't. But whether we end up with one model, or three, or more, none of them will have anything to do with consciousness2.

It may indeed turn out that something like your third-order self -- which models the second-order self's modeling of the first-order self's interactions with the outside world -- is a precondition for making the kinds of first-person judgments and issuing the kinds of first-person reports that we associate with consciousness. But even if that's the case, I would be very, very reluctant to agree that the third-order self is the repository, or the seat, or the possessor, or whatever, of qualia. I don't know that you have qualia. I don't know that I had qualia at any moment in my life before this one, or that if I did, they weren't the exact opposite of the qualia I have now. I think these are conclusions that anyone playing by our "rules of logic" must accept. And entities as mercurial as these qualia cannot possibly be empirically correlated with any phenomenon that admits of empirical investigation.

Which brings us to your comment on my first Gelernter post. Why, given the fact that qualia are, in principle, impossible to investigate empirically, would anyone feel obliged, or even authorized, to adjudicate their absence or presence?

That's a hard question -- one as hard as the "hard problem of consciousness" itself. Because the belief in something that is not, in principle, subject to empirical investigation ends up being argumentatively indistinguishable from the thing itself.

There are a couple answers that occur to me off the cuff. One is that, for someone who believes in the soul, it's a trivial question. Robots don't have souls; ergo, they don't have qualia. Obviously, this position has problems of its own. If I pluck out your eyes, you still have a soul, but you can't see colors. So in what sense can your soul have qualia that your body doesn't? (BTW, I'm just about at the end of my second glass of port and haven't had dinner yet, so in what follows, I'm going to try to rely on my familiarity with canonical arguments and not on the inspiration of the moment.) But for the most part, we cognitivists don't find ourselves debating this point with people who are willing to just shrug their shoulders and say, "God moves in mysterious ways."

Nonetheless, the idea that phenomenology can swing free of physical states persists in the culture, even among people who are skeptical about traditional religious dogmas. For instance, I've found that one of the best ways to explain the concept of qualia to laypeople is, ironically, to resort to the philosophically more sophisticated concept of inverted qualia: "What if my 'red' looks exactly like your 'green'? Of course, we'll both still say the word 'green' when we point at trees, and 'red' when we point at strawberries. But those are just words: our experiences are exactly the opposite of each other's."

A lot of people who otherwise have no interest at all in philosophy say they've entertained similar notions. But how could they have? What sense can the comparison be given? What's the neutral ground from which the differences between two people's qualia can be assessed? Stanley Cavell has an ingenious thought experiment in The Claim of Reason. Suppose you've been told by an omnipotent authority -- whose omnipotence has been amply demonstrated -- that some people on earth have qualia and some don't, and that later that week, an angel is going to come down and separate the humans from the so-called zombies. "This should be interesting," you think, and look forward to the big day. You're fairly sure that ditzy hot girl who wouldn't return your phone calls and your impeccably dressed but cold and aloof rival in the department are going to turn out not to have qualia. But when the angel comes down, he herds you in with the zombies! How do you react? "There's been some mistake!" You insist the angel's judgment must be wrong. But what more authoritative outside authority could there be??

Sorry ... I'm running on a bit. Blame the port. The point is just that the notion that phenomenology swings free of its physical instantiation is so deeply embedded in the culture that people accept it by default -- even though it's totally fucking incoherent.

When I was in my late 20s, I went back to school in philosophy chiefly because I was convinced that there had to be some logically unassailable way to respond to people who exclaimed in disbelief, as Wittgenstein puts it, "THIS is supposed to be produced by a process in the brain!--as it were clutching my forehead." I left academic philosophy largely because I'd come to the conclusion that the best way to respond to such people was to say, "Well, why not?"

I'm inclined to agree with Rorty that the intuition that phenomenology can swing free of its physical instantiation is not a native intuition at all but is rather a hangover of either Platonist idealism or Christian theology -- diffused into the popular culture through such vehicles as Freaky Friday. I further think that Daniel Dennett identified the fundamental problem in the philosophy of mind when he trained his sights on the unimagined preposterousness of zombies. But I am not contemptuous of Wittgenstein's imagined interlocutor, who clutches his forehead and cries, "THIS is supposed to be produced by a process in the brain!" I've felt that way myself. And I'm a little chagrined that I have no better response to offer than "What on earth made you think that it was anything else, in the first place?"

I'll try to take a look at Owen Holland's PDF this weekend, although I've got wall-to-wall parties tomorrow and Sunday.