Monday, December 3, 2007

Formalism vs. Contextualism, part two

In my last post, I was trying to clarify the point of contention between Arthur and me by distinguishing what I called formalism and contextualism and explaining how I thought Arthur had blurred the distinction. I'd also like to say a little bit about why I think I tend to fall on the formalist side of the divide.

The first point to make is that I don't always fall on the formalist side. There are some early songs of Bob Dylan's, for instance, that I would be hard pressed to defend on formal grounds but that frequently have a magical effect on me. That effect has to do with Dylan's possibly unprecedented way of singing, which owes something to Woody Guthrie--a loaded association for me already, since my dad was born into the same Oklahoma dust bowl that Guthrie wrote about--and which evokes something of Greil Marcus's "old, weird America," but which, because of Dylan's intimacy with the microphone, infuses the uncanniness of The Anthology of American Folk Music with a new human warmth. Where Dylan borrows lyrics from the folk tradition, that intimacy (which, by the way, disappeared very quickly, only to reemerge in the mid-1980s) recharges them with the romantic longing that must have animated them in the first place, and I associate that longing with the photo on the cover of Freewheelin', Dylan young and charmingly innocent with a girl on his arm, a photo that evokes the excitement of Greenwich Village in the early 1960s--which I also associate with my parents' youth, which coincided with Dylan's, and which I imagine now with the same fond nostalgia I feel when I remember my own, which like Dylan's was marked by musical ambition and coffee house performances and cheap apartments and long, late talks with people who seemed eccentric and brilliant and passionate. 
But at the same time, I can't look at that picture or listen to those recordings without imagining the haggard Dylan of today, who sings with such rue on Time Out of Mind, "I been to Sugar Town, I shook the sugar down," or without remembering the weird incense smell of the candlelit basement room in my freshman dorm where I listened to Dylan in earnest for the first time, sitting on the floor, and where the discovery of his music seemed like a ritual, a rite of passage--all of which add to the swirl of sensations and emotions that the music elicits.

I could probably go on, but the basic point is that this is one case among many where what matters most to me about a group of recordings seems to be "what they mean culturally", in Arthur's formula. I treasure the experience of listening to those recordings for all their associations. But at the same time, I feel that tracing out all those associations will do me very little good. As Arthur put it in his comment, it's "not an intellectual pursuit he [me] is interested in".

That's because any particular, magical confluence of associations is very unlikely to occur in exactly the same way again, so it's not much of a guide to future decisions about what music to buy. I take it as axiomatic that the point of arts criticism is to (1) deepen people's appreciation of familiar works of art or (2) guide them to unfamiliar works of art that they will deeply appreciate. My appreciation of those early Dylan songs could hardly be deeper, so (1) doesn't really pertain. At the same time, I've found that a singer's proximity to the microphone, or the fact that he or she is roughly my parents' age, is not as reliable a predictor of a satisfying aesthetic experience as, say, melodies of wide range that feature lots of leaps and wander out of the diatonic scale.

I realize that this could sound like a circular argument: formal properties are better than cultural meaning at predicting what music I'll like, but that's only because, for some idiosyncratic reason, I'm more intrigued by music's formal properties than by its cultural meaning. If that's true, however, then Arthur and I may not really be disputing anything; we just appreciate different aspects of music. But then, I don't really see anywhere for the conversation to go. It doesn't do a whole lot of good for either of us to say to the other, "Care about this thing that you don't care about!" Caring is something that can't be done on demand.

But the reason I write lengthy blog posts instead of just shrugging and walking away from the conversation is that I think that, for most people, formal properties really do make a difference. I think that even the trippiest hippie at Woodstock, who just wanted to make the scene and feel the peace and love vibe, probably recognized formal differences between the music of Jimi Hendrix and that of Sha Na Na, and that those formal differences probably led to aesthetic discriminations, one way or the other. One of the reasons for this blog is to try to develop a more nuanced and precise critical vocabulary for discussing formal differences. The vast majority of pop-music criticism has, in fact, concentrated on cultural meaning; I'd like to at least try to nudge my seven or eight readers toward thinking more about the notes.

Thursday, November 29, 2007

Formalism vs. Contextualism, part one

I know, I know, I've started to look like one of those people who start blogs full of enthusiasm but forget about them within months. But since my last post, I have gone to southeast Asia for two weeks, come back to find that in addition to having, basically, started a new job, I'm also editing a new section of the magazine, agreed, nonetheless, to write a story for the next issue on a topic that I didn't really know anything about, found a place to live with my girlfriend, packed up and moved all of my earthly belongings, and gone to Texas for my sister's wedding, for which, in what might laughably be called my free time, I wrote a song. The new apartment is still full of unopened boxes, I'm still behind at work, and Elise and I spent the last two days in New Haven, but dammit, I'm determined to get something up on the blog today. [When I wrote the preceding sentence, BTW, "today" meant November 18.]

My last post prompted a bunch of great comments, which I hope to at least begin to address.

It's true that for years Arthur and I have been carrying on a debate about music, and like him, I've been inclined to think of it as a clash of aesthetic principles, between what we might call formalism (my side, a concentration on the formal properties of music) and contextualism (his side, a concentration on, as he puts it, what music "means culturally").

The distinction between formalism and contextualism is, like most distinctions, somewhat specious. It's probably impossible, except maybe for people with severe autism, to attend solely to the formal characteristics of music; my preference for particular types of melodic or harmonic movement must derive, at least in part, from the cultural contexts in which I first encountered them. Similarly, it seems unlikely that someone could attend solely to the cultural context of music and make no discriminations based on formal properties. Summer-of-love hippies and straightedge punks both considered music vital to the advancement of their cultural and political agendas, but I don't think that you could have swapped the formal properties of the music--psychedelia and hardcore--without also altering the associated cultures.

But, also like most distinctions, the one between formalism and contextualism is probably useful for some conversational purposes. Arthur's comments, however, blur that distinction in ways that I don't quite follow.

That may be because I have the opposite tendency: making overnice distinctions in cases where they're not useful. It's true that, when discussing music, I tend to talk a lot about melody. But most of the melodies I find "interesting"--the one I mentioned in my last post is a notable exception--wander out of their home keys, so they also have interesting harmonic implications. And of course, varying the durations, both absolute and relative, of the notes in a melody can change its character utterly, so it's even more difficult to wall melody off from rhythm. (Again, the Rick Astley tune provides something of a counterexample: the melody of the chorus is fairly straight, with just a little syncopation in each phrase. In the first phrase, the syncopation falls on the "eh" of "forever." Interestingly (maybe), if you give equal duration to all of the notes in the melody, you fall into waltz time: to GEH-ther-for EH-ver-and NEH-ver-to PART ... ONE-two-three, ONE-two-three, etc.) I think I tend to emphasize melody because, while there's a lot of pop music with good grooves or interesting harmonic features but boring melodies, there's less with interesting melodies and boring rhythmic or harmonic features. Actually, there's less pop music with interesting melodies, full stop.

Anyway, granted that I overemphasize melody in my discussion of music's formal properties, I still consider rhythm a formal property. So I'm a little confused when Arthur says that "Sometimes he [me] admits that he likes a 'groove' or something like that; we once discussed the appeal of Outkast's 'Hey Ya.' But mostly those things seem to fall into some sort of 'visceral appeal' category." Particular types of groove may very well become associated with particular cultural movements, but the same is true of harmony and melody. A bunch of added chord tones can turn almost any pop song into a jazz piece, with all of jazz's contextual associations; and the "blue notes" of blues melodies--the equivocations between the flatted and natural third, fifth, and seventh--virtually define the genre.

Nor do I think that "groove" intrinsically falls into "some sort of 'visceral appeal'" category. All music falls into the visceral-appeal category, at least initially; that's why we get into it. It's only later that we (or at least some of us) begin to analyze its formal properties. If I've spent less time analyzing the rhythmic properties of pop music, it's probably because they seemed less mysterious to me when I started writing my own songs. Jay's right that my "first exposure to popular music was relatively late and that [I] appreciated what [I] heard as a classically trained musician." But on the other hand, the classical music I was listening to was mostly Shostakovich, Bernstein, Stravinsky, and Copland. The rhythms of pop music seemed rather tame in comparison. But the melodies--I didn't really conceive of melody as a separate formal property until I started to realize how hard it was to write good pop tunes.

Okay, I have lots more to say, but I'm going to save it for later. Because otherwise, it could be another two weeks before I get this post up on the blog. But in closing, I feel obliged to point out that it was Vanessa who introduced me to Tay Zonday mere days before I mentioned him on my blog with such casual knowingness. I am justly rebuked for failing to give her credit.

Sunday, August 26, 2007

Rick Astley Has Taken Control of Your Computer

It's probably a gross failing on the part of Firefox's developers that when I opened this link in a new tab, I couldn't shut it again without force-quitting the program, but as malware goes, this is pretty benign stuff, and its comedic value probably makes up for any inconvenience it causes.

No doubt this video was chosen as an illustration of all that is most annoying about '80s pop music, and Rick's combination of black turtleneck and weirdly high-collared trenchcoat, his absurdly peppy dancing, and the cuts to the groovin' African-American bartender to give him some street cred are pretty damning -- even if we manage to forget for the moment that he was the most unlikely physical specimen to emit such deep and resonant tones until Tay Zonday. Nonetheless, I would like to say a few words in defense of the much-maligned Mr. Astley.

I find this song kind of catchy, but his other inescapable hit from the '80s, "Together Forever", is one I actually went to the trouble of pirating and uploading to my iPod. You remember it -- "Together forever and never to part, together forever, we two."

That first line begins on the 5th scale degree, moves up to 6 and down again to 4, then leaps up a seventh to 3 on the syllable "for". (Ah, it's so nice to be able to wax technical and know that I'm not losing my audience, because they've all made such careful study of my music primers.) I hope to have occasion in the near future to rhapsodize about melodic leaps of a seventh, but suffice to say that they don't happen all that often in pop music, they're wonderful when they do, and 4 to 3 is a more interesting seventh than the more common 5 to 4 or 1 to flat-7. "Ever" comes down from 2 to 7 -- the 7's relationship to the melody's lowest tone (so far) insinuating a tritone, my other favorite melodic interval. Two words, and we've already covered six of the seven notes of the key. (Compare, for instance, James Blunt's "You're Beautiful", which I intend to slag off on this blog, and which consists almost entirely of three or four notes.) "Never" lowers the melody's floor from 4 to 3 -- expanding the range of notes it covers. "To" is another leap of a seventh (hurray!), from 3 to 2, in what the music theorists call a sequence -- a repetition (or at least an approximate repetition) of the preceding pattern of pitches, but begun on a different pitch. "Part" brings us, finally, to the only note in the scale that the melody has not traversed so far -- the root, the tonic, the "home base" of the key.
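For anyone who'd rather see the arithmetic than take my word for it, here's a quick Python sketch (mine, purely illustrative--the claims are in the paragraph above) that counts half-steps between scale degrees and names the resulting intervals:

```python
# Half-steps of each major-scale degree above the tonic ("b7" is the flatted seventh)
DEGREE = {'1': 0, '2': 2, '3': 4, '4': 5, '5': 7, '6': 9, 'b7': 10, '7': 11}

NAMES = {6: 'tritone', 10: 'minor seventh', 11: 'major seventh'}

def leap(low, high):
    """Size of the leap from scale degree `low` up to the next `high` above it."""
    semis = (DEGREE[high] - DEGREE[low]) % 12 or 12
    return semis, NAMES.get(semis, f'{semis} half-steps')

leap('4', '3')   # (11, 'major seventh') -- the leap on "for"
leap('5', '4')   # (10, 'minor seventh') -- the commoner seventh
leap('1', 'b7')  # (10, 'minor seventh')
leap('4', '7')   # (6, 'tritone') -- "ever"'s low 7 against the earlier low 4
```

So 4-up-to-3 really is the bigger, rarer seventh, and the low 7 against the melody's earlier floor of 4 really does span a tritone.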

I submit that the melody to the lyrics "together forever and never to part" in Rick Astley's song "Together Forever" is about as interesting a pitch sequence as the major scale has to offer. And I think it's because I concentrate, when listening to pop music, more on things like pitch sequences and less on things like the singer's hair and the cheesy synth arrangements that my tastes so frequently confound my friends' expectations. (I sometimes suspect that that's also the reason nobody seems to like the songs I write as much as I do.)

Friday, August 24, 2007

These Are Moving Pictures; the Camera Should Move

In his recent elegy for Ingmar Bergman, Woody Allen says,
In film school (I was thrown out of New York University quite rapidly when I was a film major there in the 1950s) the emphasis was always on movement. These are moving pictures, students were taught, and the camera should move. And the teachers were right.
Back when I worked for a film company (1993-1995) and was writing screenplays for movies I planned to direct--none of which ever got made, of course--I hewed to the same principle, although I had arrived at it through my own devices. To that principle--"the camera should move"--I appended a corollary: no shot--no camera angle or composition--should be repeated, unless the repetition itself has some formal significance--to indicate stasis, say, or to recollect an earlier scene.

Few film directors have much allegiance to either of these dicta. They prefer to concentrate on things like story, character, psychology, emotion, whatever. Nevertheless, movie history is studded with the names of directors reputed to be great movers of the camera. The opening shot of Orson Welles's Touch of Evil is justly celebrated; less well known is another one-shot scene later in the movie--the one where Welles's Quinlan plants evidence on an incidental character. Less flashy if no less virtuosic, the second shot is just as well motivated narratively as the first: its continuity allows you to see that sticks of dynamite have magically appeared in a box that was previously empty.

But neither of these shots is the tour de force that was the ballroom scene at the heart of Welles's earlier film The Magnificent Ambersons--before it was butchered by the studio. Welles made Ambersons when he was still riding high on the success of his radio show and commanded the biggest budgets in Hollywood history; the single shot that was to constitute the ballroom scene originally lasted 10 minutes. Much of the scenery was devised to be lifted away as soon as it disappeared from the camera frame, to make room for the track that was being laid down as the camera moved, and for the camera itself. For the most part, the camera followed the movie's two main characters, but there were occasional divagations. One involved a conversation among a random assortment of upper-crust party guests about a recent, fascinating, but also kind of frightening import from Europe, which no one could quite summon the courage to sample: the olive. It's exactly the kind of period detail that novelists relish, and it even had thematic significance, indicating both the opulence of the world in which the Ambersons moved--they were the first to be able to afford an imported delicacy--and its quaint antiquity. The studio complained that the conversation did "nothing to advance the story" and cut it, along with a couple other segments of the shot. It is one of my fondest hopes that I will live to see digital technologies progress to the point that the scene can be reconstructed, from the surviving stills and script and from samples of the movie's other scenes.

The early Renoir was a great mover of the camera, and even devised his own technology for tracking shots, a set of reconfigurable, interlocking, polished wood platforms over which a camera mounted on felt feet could slide, but he claimed he had to stop using it because it violated union guidelines. Tarkovsky, Mizoguchi, Minnelli, Demy, Ophuls -- all were masters of the tracking shot. But to me, the most virtuosic mover of the camera is Luis Bunuel.

Lots of cinephiles are shocked when I say this. Bunuel is thought to have a rather dry style, and indeed, he seems to deal mostly in medium shots, which have neither the drama of the closeup nor the pathos of the long shot. But his camera is always moving. I pointed this out once to a guy who was teaching a class on film appreciation at the Cambridge Center for Adult Education, and he said, "That's not moving, that's framing." By which he meant, Bunuel's camera movements are generally motivated by his characters' movements. Fair enough. But who decides the characters need to move? In a lot of movies, they don't. They stand or sit, and the camera cuts back and forth between them, in what, in my film days, I would disparagingly refer to as "composition tennis". Bunuel finds reasons to make his characters move precisely to have a reason to make the camera move.

A good example is the opening of Discreet Charm of the Bourgeoisie, about three minutes into the Criterion disc. The scene in the Senechals' house takes exactly two shots, and the characters are constantly moving about, dragging the camera with them. Indeed, sometimes, when you start paying too much attention to Bunuel's direction, his shots begin to seem incredibly contrived, with characters moving into the background and positioning themselves so that they exactly fill in the visual gaps between characters in the foreground. But of course, if you're watching as you normally would in the cineplex, you hardly notice what the camera is doing. You just find that you have a very clear sense of the three-dimensional space of the scene, and a general impression of elegance.

All of this is apposite because Criterion -- God bless Criterion -- has just released a DVD of Bunuel's Milky Way, my favorite of his films. In it, he makes my corollary to the NYU aesthetic principle -- don't reuse a shot once you've left it -- a structural conceit, disdaining to reuse settings and even, with the notable exceptions of the two protagonists, characters once he's left them. He actually takes this structural principle even further in The Phantom of Liberty. Perhaps he takes it too far -- or perhaps, without the ready-made imagery of the history of the Catholic Church, he's just unable to repeat the combination of comedy and pathos that he manages so brilliantly in Milky Way. Either way, I've always found Phantom the lone disappointment among his late, European films. But The Milky Way is a masterpiece.

Wednesday, August 8, 2007

Music primer, part (hopefully) the last

Okay, it occurred to me that I would probably have regular recourse to a couple more music-theoretical ideas, so I should just go ahead and get them out of the way now.

Relative and parallel minor

I mentioned in my last post that the natural-minor scale is a permutation of the major scale -- the major scale begun on the sixth scale degree and wrapped around on itself. That means that for any given major key -- C, E, B-flat -- there is a minor key that uses all the same notes. On the piano, the C-major scale uses all white keys; so does the natural A-minor scale. The E-major scale uses black keys at F-sharp, G-sharp, C-sharp, and D-sharp; so does the natural C#-minor scale. Etc.

If you read the section on whole and half-steps carefully, you will have noticed that the minor scale that shares all its notes with a given major scale begins a minor third down from the first note of the major scale. A is a minor third down from C; C# is a minor third down from E. The minor scale that shares all its notes with a given major scale is called the relative minor of the major scale; the major scale, naturally, is the relative major of the minor.
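If you like, the "minor third down" rule is easy to mechanize. A little illustrative Python (my own sketch; note spellings are simplified to sharps, so E-flat will show up as 'D#'):

```python
NOTES = ['C','C#','D','D#','E','F','F#','G','G#','A','A#','B']

def relative_minor(major_tonic):
    """Tonic of the minor key sharing the major key's notes: a minor third down."""
    return NOTES[(NOTES.index(major_tonic) - 3) % 12]

def relative_major(minor_tonic):
    """The reverse: a minor third up from the minor tonic."""
    return NOTES[(NOTES.index(minor_tonic) + 3) % 12]

relative_minor('C')  # 'A'
relative_minor('E')  # 'C#'
relative_major('E')  # 'G'
relative_major('C')  # 'D#' -- i.e., E-flat, in this simplified spelling
```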

But of course, you can build a minor scale on any note, just as you can build a major scale on any note. You just have to make sure to follow the pattern of whole and half-steps we established last time:

W H W W H W W

So you can build a minor scale that starts on C or E, too. But those scales will use different notes than the major scales starting on the same notes and, perforce, different notes than the major scales' relative minors, too. On the piano, the minor scale built on C uses black keys at E-flat, A-flat, and B-flat; the minor scale built on E uses only one black key -- at F#. What are the relative majors of C-minor and E-minor? Count up a minor third from the first note of each scale (C and E); answer below.

The minor scale that begins on the same note as a given major scale is called the parallel minor. The major scale that begins on the same note as a minor scale is, of course, the parallel major.


I want to say at least a little about harmony (chords). As I mentioned in my first post on music theory, a chord is a set of notes played simultaneously. Any set of notes can constitute a chord, but in classical music of the classical period (not the pleonasm it seems, pending some better term than “classical music”), and in the vast majority of pop music, the chords that predominate are what used to be called “common chords” -- the major and minor triads. Technically, any chord with three notes is a triad, but musicians generally use the word to denote three-note chords constructed from stacked thirds.

By “stacked thirds” I mean that the triad’s second note is a third above its first note, and its third note is a third above its second note. If you were paying attention to my discussion of intervals, however, you’ll recall that a third can be either major or minor, i.e., it spans either four or three half-steps. Two types of thirds give you four types of stacked-third triads, named as follows:

major third on major third: augmented triad
minor third on major third: major triad
major third on minor third: minor triad
minor third on minor third: diminished triad

Of these four types of triad, however, the major and the minor are by far the most common. If you’ve ever sat down to learn a couple chords on the guitar, you were probably learning to play major triads, with possibly a few minor triads thrown in. If you can play “Heart and Soul” on the piano, you can play a few major triads. Etc.

Qualitatively, major triads partake of the brightness of the major scale; minor triads partake of the melancholy or ominousness of the minor scale. If you know any pop songs that have a kind of spooky or gloomy feel to them, they probably feature a lot of minor triads.

There are seven notes in the major scale, so there are seven natural triads in any major key. (For instance, the natural triad built on the first scale degree would consist of the notes 1, 3, and 5; the triad built on the third scale degree would consist of the notes 3, 5, and 7.) Of the seven natural triads in a major key, three are major triads, three are minor triads, and one is a diminished triad.

The three natural major triads are the ones built on the 1st, 4th, and 5th scale degrees. The triad built on the first scale degree is known as the tonic, and it’s kind of the “home base” for the key: most pop songs that are written in a single key probably start on the tonic, and almost all of them end on the tonic. The chords built on the 4th and 5th scale degrees are called the subdominant and dominant, respectively. As their names imply, they are very closely related to the tonic. If you’ve ever heard the term “three-chord pop song”, the three chords in question are the tonic, dominant, and subdominant.

The natural triads built on the 2nd, 3rd, and 6th scale degrees are minor triads. That means that a major triad built on one of those scale degrees perforce takes you into a different key (or at least into a different mode).
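For the programmers among my seven or eight readers, here's a sketch (Python, purely illustrative) that builds each natural triad by stacking thirds on the major scale and reports its quality:

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # half-steps of the seven scale degrees above the tonic

# Quality from the sizes of the lower and upper stacked thirds (4 = major, 3 = minor)
QUALITY = {(4, 3): 'major', (3, 4): 'minor', (3, 3): 'diminished', (4, 4): 'augmented'}

def natural_triad(degree):
    """Quality of the triad built on a 1-based degree of the major scale."""
    i = degree - 1
    notes = [MAJOR[(i + 2*k) % 7] + 12 * ((i + 2*k) // 7) for k in range(3)]
    return QUALITY[(notes[1] - notes[0], notes[2] - notes[1])]

[natural_triad(d) for d in range(1, 8)]
# ['major', 'minor', 'minor', 'major', 'major', 'minor', 'diminished']
```

Three majors (on 1, 4, and 5), three minors (on 2, 3, and 6), one diminished (on 7), just as advertised.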

Finally, I’ll just mention that the next most common chords after the major and minor triads also consist of stacked thirds; it’s just that the stacks keep getting higher. A seventh chord, for instance, consists of a triad with another third stacked on top of it. (The second note of the chord is a third above the first note; the third note is a fifth above the first note; and the fourth note is a seventh above the first note, hence the chord’s name.) A ninth chord consists of a seventh with another third stacked on top of it. Etc. Sevenths are very common: in any given key, the seventh chord built on the fifth scale degree -- the dominant seventh -- is almost as common as the natural major chords. Again, if you’ve ever fooled around with the basic chord shapes on a guitar, you probably learned a couple dominant sevenths.
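The "keep stacking thirds" recipe generalizes easily; another small illustrative sketch (semitones above the tonic, again my own notation, not anything standard):

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # half-steps of the major-scale degrees

def stack_thirds(degree, notes):
    """Chord of `notes` pitches built in stacked thirds on a 1-based scale degree."""
    i = degree - 1
    return [MAJOR[(i + 2*k) % 7] + 12 * ((i + 2*k) // 7) for k in range(notes)]

stack_thirds(1, 3)  # the tonic triad: [0, 4, 7]
stack_thirds(5, 4)  # the dominant seventh: [7, 11, 14, 17]
# i.e., a major triad (7, 11, 14) plus a minor seventh above the root (17 - 7 = 10)
```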

Answer key: the relative major of C minor is E-flat major; the relative major of E minor is G major.

Monday, July 23, 2007

Music primer, part two

Half-steps and whole steps

As I mentioned in my last post, the major scale has seven notes in it: if you start on middle C and play up the keyboard, you'll play a total of seven notes before you reach the next C. As I also mentioned, you will skip five black keys along the way.

Each of those black keys lies between two white keys. But obviously, if you have five black keys distributed among seven white keys, there are a few white keys that don't have black keys between them. The relationship between adjacent white keys that aren't separated by a black key is different from the relationship between adjacent white keys that are. Adjacent white keys separated by a black key are a whole step apart. Adjacent white keys that aren't separated by a black key are a half-step apart.

The difference between the vibrational frequencies of notes a whole step apart is greater than the difference between the vibrational frequencies of notes a half-step apart. As you might imagine, there is a sense in which the distance between notes a whole step apart is twice that of notes a half-step apart. But it's a rather technical sense that I don't want to get into here. (If you're interested in reading more on the subject, you might start with this Wikipedia entry.)

The relationship between whole steps and half-steps is easier to see on the neck of a guitar than on the keys of a piano. The metal frets embedded in the guitar neck mark off consecutive half-steps. To play the melodic interval of a whole step on the guitar, you have to jump across two frets; one fret will take you only a half-step away.

Finally, the distance between a white key on a piano and the black key next to it is a half-step. So you can see that the octave (eight-note span) from C to C -- on a keyboard or on a guitar -- is actually divided into 12 half-steps. On the keyboard, most of those half-steps are between white keys and black keys, but two of them -- from E to F and from B to C -- are between white keys. (There are in fact good mathematical reasons that Western music divides the octave into 12 equal half-steps.)

The major scale revisited

Armed with the notion of half-steps and whole steps, we can make a little better sense of the notion of a major key.

Play up the major scale from middle C to the C above it (all white notes). There's a black key between C and D, so the first step of the scale is a whole step. Same with D to E. But there's no black key between E and F, so that's a half-step (can you hear the difference?). Whole step to G, whole step to A, whole step to B -- then a half-step back to C. So the pattern of whole and half-steps that gives you a major scale is

W W H W W W H

That's why you need black keys if you start your major scale on any note other than C. Start on D. Your first whole step takes you to E. But E to F is only a half-step, so your second whole step takes you to F-sharp. Then comes a half-step: G. A, B, no problem -- but now you've found the other white-note half-step, B to C. So your next note has to be C-sharp, not C. And a last half-step will bring us back to do.

Make sense?
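If it helps, the whole procedure fits in a few lines of illustrative Python (a sketch of mine, with sharp spellings only, so flat keys will come out enharmonically misspelled):

```python
SHARPS = ['C','C#','D','D#','E','F','F#','G','G#','A','A#','B']

def major_scale(tonic):
    """Walk the W-W-H-W-W-W-H pattern up from the tonic."""
    steps = [2, 2, 1, 2, 2, 2]        # the final half-step just returns to the octave
    idx = SHARPS.index(tonic)
    scale = [tonic]
    for s in steps:
        idx = (idx + s) % 12
        scale.append(SHARPS[idx])
    return scale

major_scale('C')  # ['C', 'D', 'E', 'F', 'G', 'A', 'B'] -- all white keys
major_scale('D')  # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#'] -- the two sharps appear
```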

The same procedure, of course, applies to the neck of the guitar. You can start your scale on any fret you want. The next note will be two frets (a whole step) up. The one after that will be two more frets up. But the one after that (the first half-step) will be only one fret up. Etc.

The procedure works in exactly the same way no matter what fret you begin on. That's why pop guitarists tend to be less clear on the theoretical differences between keys than pianists: changing key on the guitar is just a matter of starting the same pattern on a different fret; each key on the piano has its own distinctive pattern.

Intervals revisited

The concept of whole and half-steps also lets me clarify some distinctions I elided in my last post. I mentioned that in any given major scale (I hope that the principle of the major scale is now clear enough to you that I can stop using C major as a reference point; C is, after all, just one of 12 major scales, none of which should, in principle, be privileged over any other), the distance from 1 to 6 is a sixth. The distance from 3 to the 1 above it is also a sixth. But they're not the same sixth. The sixth from 1 to 6 is a major sixth; that means there are nine half-steps from 1 to 6. But there are only eight half-steps from 3 to 1, making it a minor sixth. Here's the mapping of total half-steps spanned to intervals:

1: minor second
2: major second
3: minor third
4: major third
5: perfect fourth
6: augmented fourth/diminished fifth
7: perfect fifth
8: minor sixth
9: major sixth
10: minor seventh
11: major seventh
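That mapping is easy to put to work. Here's an illustrative Python sketch (mine, nothing canonical) that names the interval between any two degrees of the major scale by counting half-steps:

```python
INTERVAL = {1: 'minor second', 2: 'major second', 3: 'minor third',
            4: 'major third', 5: 'perfect fourth',
            6: 'augmented fourth/diminished fifth', 7: 'perfect fifth',
            8: 'minor sixth', 9: 'major sixth',
            10: 'minor seventh', 11: 'major seventh'}

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # half-steps of degrees 1-7 above the tonic

def interval_up(low_degree, high_degree):
    """Name of the interval from one degree up to the next occurrence of another."""
    semis = (MAJOR[high_degree - 1] - MAJOR[low_degree - 1]) % 12
    return INTERVAL[semis]

interval_up(1, 6)  # 'major sixth' (9 half-steps)
interval_up(3, 1)  # 'minor sixth' (8 half-steps)
```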

The interval of the diminished fifth (6 half-steps) is commonly called the tritone, or less commonly, the diabolus in musica. For a long time, it was considered a gross dissonance, to be avoided. In the 20th century, it was to some extent rehabilitated, but melodies that emphasize tritones can still sound sharp and spiky to the modern ear.

Other scales

So the major scale goes WWHWWWH. But you could, if you wanted, make your own scale out of some random sequence of whole and half-steps -- WHHWWHHW, or whatever. Twentieth-century jazz and classical composers experimented widely with the whole-tone scale (WWWWWW) and the octatonic scale (WHWHWHWH), but by far the most common scale other than the major is, unsurprisingly, the minor.

The minor scale

The basic minor-scale pattern is WHWWHWW. (It has variants, but I'm not going to get into them.) The distinctive thing about it is that the interval from 1 to 3 is not a major third; it's a minor third, the interval between "dead" and "and" in the schoolyard incantation "pray for the dead, and the dead will pray for you". Music written in minor keys tends to have a more melancholy, or brooding, or ominous, or menacing feel than music written in major keys: recall Nigel Tufnel's sage observation that D minor is "the saddest of all keys." Play any peppy tune you know on the keyboard with the third scale degree knocked down a half-step, and it will come out much less peppy. The Christmas hymn "O Come, O Come, Emmanuel" spends a lot of time in a minor key -- as does Britney Spears's "Oops I Did It Again". Both songs occasionally slip into major keys, however, for reasons that I hope will come clear in the next section.

The modes

Take a look at the minor-scale pattern of whole and half-steps. The minor scale, like the major scale, starts over again at the octave. So two octaves of the minor scale will look like this:

W H W W H W W W H W W H W W

Trace out the eight-note pattern beginning on the third scale degree of the minor scale instead of the first. That is, knock off the first two and the last five letters:

W W H W W W H

Look familiar? Yes! It's the major scale! The minor scale is just the major scale begun on a different scale degree and wrapped around on itself. The converse is also true: the major scale is just the minor scale begun on a different scale degree and wrapped around on itself.

Another way to say the same thing is, if you play an A on the piano, and play up the next seven white keys, you will have played the minor scale. (The white keys give you the major scale only if you start on C; they give you the minor scale only if you start on A.)
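The knock-off-the-letters trick is just a rotation of the step pattern, which a couple of lines of Python make vivid (a sketch of my own, not the post's notation):

```python
# The minor scale is the major scale "wrapped around": rotating the
# minor pattern to begin on its third scale degree (i.e., skipping
# the first two step letters) yields the major pattern.
MINOR = "WHWWHWW"
MAJOR = "WWHWWWH"

def rotate(pattern, start):
    """Begin the step pattern at index `start` and wrap around."""
    return pattern[start:] + pattern[:start]

print(rotate(MINOR, 2))  # WWHWWWH -- the major scale
# And conversely: beginning the major pattern on its sixth degree
# (index 5 in the step letters) gives back the minor pattern.
print(rotate(MAJOR, 5))  # WHWWHWW -- the minor scale
```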

Note that starting the major scale on a different scale degree and wrapping it around will not give you the whole-tone or the octatonic scales: they have fundamentally different patterns (the whole-tone scale has no half-steps at all, so of course it can't give you the major scale). But if there are seven notes in the major scale, there must, perforce, be seven different "wraparound" scales (2 to 2, 3 to 3, 4 to 4, etc.). These wraparound scales are called modes. Two of them -- 1 to 1 and 6 to 6 -- are our familiar major and minor scales. Of the remaining five, three have minor thirds between 1 and 3, and two have major thirds between 1 and 3. The ones with minor thirds partake of the minor-scale melancholy; but the ones with major thirds have distinctive flavors all their own, and it's on those two that I will concentrate in this blog.

The two major-third modes (other than the major scale) are the lydian and the mixolydian. These are the scales that arise when you start on F and G, respectively, and play up the next seven white keys. They are also the modes produced by the following rotations of the major-scale sequence of whole and half-steps:

lydian: WWWHWWH
mixolydian: WWHWWHW

Each scale differs from the major scale in only one respect. The lydian mode has a raised fourth degree relative to the major scale. That is, in the lydian mode, 4 is a half-step higher than the 4 of the major scale. This makes the interval between 1 and 4 a tritone, which gives lydian melodies a piquant sound. The mixolydian mode has a lowered seventh degree relative to the major scale, a similarly fateful alteration. In the major scale, you'll recall, 7 is only a half-step below 1. That close proximity gives the 7 a feeling of kind of leaning toward the 1. It's hard to describe but very easy to hear -- it's the sense in which it instinctively brings us back to do. Widening that interval removes that leaning feeling, which drastically changes the color of the scale.
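All seven wraparound scales can be generated the same way, by rotating the major pattern. The sketch below (mine; the mode names are the conventional ones) also checks which modes have a major third, which requires exactly that the first two steps both be whole steps:

```python
MAJOR = "WWHWWWH"

def mode(start_degree):
    """The wraparound scale beginning on the given major-scale degree."""
    i = start_degree - 1
    return MAJOR[i:] + MAJOR[:i]

# Conventional names, one per starting degree.
NAMES = ["ionian (major)", "dorian", "phrygian", "lydian",
         "mixolydian", "aeolian (minor)", "locrian"]

for degree, name in enumerate(NAMES, start=1):
    pattern = mode(degree)
    # A major third spans four half-steps: two whole steps.
    third = "major third" if pattern[:2] == "WW" else "minor third"
    print(f"{name:17} {pattern}  ({third})")
```

Running it confirms the census above: besides the major scale itself, only lydian (WWWHWWH) and mixolydian (WWHWWHW) have major thirds; dorian, phrygian, aeolian, and locrian all have minor thirds.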

Because all the modes consist of the major scale wrapped around on itself, it requires a certain amount of effort on the part of the composer to keep modal melodies from simply drifting back into the major: our ears, conditioned by so much major-key music (and possibly predisposed by evolutionary adaptations), tend to pull us back to the familiar (or perhaps the instinctive).

Okay, I think that's gonna do it. I may want to say more about harmony at some later point, but then again, I may not. The distinction between different keys may be as much of a harmonic distinction as I'm going to need to make.

* I'm sure that the popular Boston band the Dresden Dolls has some tunes in minor keys. I don't really know their music, but years ago, when Amanda Palmer was still busking in Harvard Square as the 12-foot bride, I saw her play some of her songs, solo, and without makeup or carved eyebrows, at the original Zeitgeist Gallery on Broadway in Cambridge. After she'd played about four or five songs in a row in minor keys, I yelled, "Play something in a major key!" She thought for a minute and said, "Hm, I don't think I've written anything in a major key since I was 17," and I said, "Back when you could still believe in major keys."

Monday, July 9, 2007

Music primer, part one

Before I make any music-themed posts on this blog, I want to explain a few technical terms that I expect I'll occasionally want to invoke. They're not difficult, but some readers may be unfamiliar with them or have only a vague notion of what they mean. I assume a passing familiarity with the layout of the piano keyboard. If you don't have a keyboard handy and find any of the descriptions below difficult to visualize (or "auralize"), try playing with the little Flash keyboard here. (If you don't have Macromedia Flash installed, there's also a Java piano here.)

The major scale

Most people, I think, know how to find middle C on a keyboard and know that, if you play the next seven white keys in sequence, up the keyboard, you'll spell out the do re mi scale familiar from The Sound of Music ("Do, a deer, a female deer, re, a drop of golden sun," etc.). The last note in that eight-note sequence is another C -- not middle C, but the C an octave (a span of eight notes) above middle C. That is, the do re mi scale -- a.k.a. the major scale -- has only seven notes in it; with the eighth note, you're starting the scale over again, only higher ("that will bring us back to do").

In this blog, I will refer to the notes of the major scale by number. So do is 1, re is 2, mi is 3, etc. Ti ("a drink with jam and bread") is 7, which brings us back to do, or 1.

If you play middle C, and then play the next four white keys, up the keyboard, in sequence, you'll get to G, or 5. But if you play middle C and then play the next three white keys down the keyboard, you'll also get to G, or 5. For every 1, there's a 5 above and a 5 below. There's also a 4 above and a 4 below, etc. And for every 5, there's a 1 below and a 1 above. Etc., etc.


An interval is the distance between two notes. We call the distance from 1 to the 5 above it a fifth: the total number of white keys you have to press to get from middle C to the G above it is five. The distance from 1 to the 5 below it, however, is a fourth: the total number of white keys you have to press to get from middle C to the G below it is four. Conversely, the interval from 1 to the 4 above it is a fourth, while the interval from 1 to the 4 below it is a fifth. The interval from 1 to 6 is a sixth, from 1 to 3 is a third, etc.

What's the interval from 5 to the 3 above it (from G to E)? Well, if middle C is 1, how many white keys do you have to press to get from the 5 below middle C (G) to the 3 above it (E)? If you can't figure the answer out in your head, try actually pressing the keys, and then check your answer against the one at the end of this post.


It so happens that the first line of the Christmas carol "Joy to the World" traces out the major scale -- from 1 back down to the 1 below it. Sing it to yourself: "Joy to the world, the Lord is come." You sing the word "joy" on 1, "world" on 5, "lord" on 3, and "come" on 1 again. If you play the eight white keys from the C above middle C back down to middle C in the right rhythm, you'll play the opening line of "Joy to the World."

But let's say that, instead of starting on a C, you start on the next white key above C -- i.e., D. Now, if you just play down the white keys, the tune will sound completely wrong. In order to make it sound right, you'll have to throw in some black keys -- specifically, at "to" and "lord".

If, instead, you started playing "Joy to the World" on E, you'd need four black keys to make it sound right, and if you started on B, you'd need five!

There's a fundamental principle here, one that I've found is not intuitive for nonmusicians. Of everything I've said on this page, it's the most important thing to remember (if you don't know it already): no two major scales use the same notes. If you start your major scale on C, you can use all white keys -- but that's not true for scales begun on any other note. If you start your scale on G or F, you need only one black key -- but it's not the same black key. That is, if you play "Joy to the World" starting on F, you'll need a black key at "the" -- B-flat; but if you play "Joy to the World" starting on G, you'll need your black key at "to" -- F-sharp. (Try it.)
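The no-two-scales-share-their-notes principle is easy to verify by brute force. Here's a Python sketch (numbering the 12 keys per octave with C = 0; the function is my own illustration, and for simplicity it reports raw key numbers rather than properly spelled note names):

```python
WHITE = {0, 2, 4, 5, 7, 9, 11}          # pitch classes of the white keys, C = 0
MAJOR_OFFSETS = [0, 2, 4, 5, 7, 9, 11]  # half-steps of degrees 1..7 above the tonic

def black_keys_in_major(tonic):
    """Pitch classes of the black keys used by the major scale on this tonic."""
    return sorted((tonic + o) % 12 for o in MAJOR_OFFSETS
                  if (tonic + o) % 12 not in WHITE)

print(black_keys_in_major(0))   # C major: [] -- all white keys
print(black_keys_in_major(7))   # G major: [6] -- F-sharp
print(black_keys_in_major(5))   # F major: [10] -- B-flat, a different black key
print(black_keys_in_major(4))   # E major: [1, 3, 6, 8] -- four black keys
print(black_keys_in_major(11))  # B major: [1, 3, 6, 8, 10] -- five black keys
```

The last two lines match the "Joy to the World" experiment above: start on E and you need four black keys; start on B and you need five.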

So, a few definitions:

a melody is a set of notes played in sequence;
a chord is a set of notes played simultaneously;
music written in a particular key is music all of whose melodies and chords use the notes of a single scale.

If you play a piece that's entirely in the key of C on the piano, you'll use all white keys. If you play a piece that's entirely in the key of G, you'll use one black key: F-sharp. If you play a piece that's entirely in the key of F, you'll use one black key, but not the same black key: B-flat. Etc., etc.

If you play middle C, then play the next six white keys up the keyboard (stopping just shy of the next C), you will have played seven notes total: each of those notes determines a unique major scale, so each determines a unique major key. You will also, however, have skipped five black keys. Each of those determines a unique major key, too. So there are 12 major keys total.

An experienced musician can tell from a handful of notes what key a particular piece is in. Indeed, she can tell from only three notes what key a piece is in, if they're the right three notes. For instance, there are seven different major keys that contain the note C: C, D-flat, E-flat, F, G, A-flat, and B-flat. But five of those -- the ones with "flat" in their names, plus F -- contain the note B-flat instead of the note B. So if a melody begins on C and moves to a B (not a B-flat), it is in one of only two possible keys: C or G. The C and G scales, in turn, differ by only one note: the C scale contains an F, but the G scale contains an F-sharp. So if a melody contains only three notes, and they're C, B, and F, then the melody must be in the key of C. (Note that, by contrast, if the melody contains the notes B, C, D, E, G, and A, it could be in either C or G.)
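The three-notes-pin-the-key reasoning can likewise be checked mechanically. In this sketch (mine; I've used flat spellings for the black-key tonics, as in the list above), a key "contains" a set of notes if its major scale includes every one of them:

```python
MAJOR_OFFSETS = [0, 2, 4, 5, 7, 9, 11]  # half-steps of degrees 1..7 above the tonic
NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
PC = {name: i for i, name in enumerate(NAMES)}  # note name -> pitch class

def keys_containing(notes):
    """Names of the major keys whose scales contain every given note."""
    pcs = {PC[n] for n in notes}
    return [NAMES[tonic] for tonic in range(12)
            if pcs <= {(tonic + o) % 12 for o in MAJOR_OFFSETS}]

print(keys_containing(["C"]))            # the seven keys that contain C
print(keys_containing(["C", "B"]))       # ['C', 'G']
print(keys_containing(["C", "B", "F"]))  # ['C'] -- three notes pin the key
```

Adding the six-note case from the parenthetical, `keys_containing(["B", "C", "D", "E", "G", "A"])` comes back with both C and G, since those six notes dodge the one note (F vs. F-sharp) that distinguishes the two keys.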

This post has taken me a lot longer to write than I anticipated, because I'm trying to be both accurate and accessible. I'll be back with more Music Theory 101 in the coming days.

Answer key: a sixth

Tuesday, July 3, 2007

Brief dispatch from the front

What did you do with your Friday night? I spent mine drinking the port that my friends Arkadi and Nancy gave me and composing a reply to Charlie Greenbacker's comment on my "Gelernter wrapup" post. If you stopped back here expecting more ruminations on consciousness, you might want to look at what I wrote -- although I should warn you that it's a little, uh, looser in both diction and argument than an official blog entry would be. (It seemed, however, not quite in the spirit of the blogosphere to go back and delete the obscenities and tipsy hyperbole.)

I also think I should mention that the next issue of Technology Review will feature an essay by Daniel Dennett, so both the anticognitivists and the cognitivists are getting a fair hearing in its pages.

Thursday, June 28, 2007

Gelernter Wrapup

A few more remarks about David Gelernter's essay in Technology Review, which I hope won't run as long as the ones I made yesterday but probably will.

First, a little terminology, for anyone who hasn't read or has only skimmed the essay. Gelernter uses the term "consciousness" to denote the possession of what philosophers call qualia. He's not talking about the differences between the brain states of waking and sleeping animals, and he's not talking about self-consciousness -- an animal's ability to recognize itself in a mirror, or to use the states of its own body (including its brain) as subjects for further cognition.

Qualia are (purportedly -- I'd like to think this post casts doubt on the very intelligibility of the idea) the felt character of experience. When my thermostat registers a certain drop in temperature, it throws on the heat. Similarly, when I register a certain drop in temperature, I throw on a sweater. But unlike the thermostat (the story goes), I feel cold. This feeling is not reducible to either the average kinetic energy of the air molecules around me or the physical act of putting on clothing: it's its own thing. On this picture, every human perception or sensation has an associated quale (the singular of qualia): the painfulness of pain, the redness of red things, the coldness of cold. To be conscious, in Gelernter's sense, is to have qualia.

Gelernter divides artificial-intelligence theorists into two camps: cognitivists and anticognitivists. Cognitivists believe that, if human beings have qualia (an important if!), then a robot that behaves exactly like a human being (even if its body is vinyl and its "brain" is a huge Rube Goldberg machine made of tinker toys) does, too. Anticognitivists deny this: for them, no machine, however perfect its behavior, could thereby be guaranteed to have qualia.

Okay, so armed with these distinctions, let's take a look at a couple of Gelernter's initial claims:
(1) "This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious."
(2) "we know our fellow humans are conscious"
Amazingly, these claims occur in immediate succession. How are we to reconcile them? Are human beings not "entities"? Let's assume they are. It follows that Gelernter is defending some form of "knowledge" that stands in no need of -- indeed, does not even admit of the possibility of -- objective justification.

What are we to do with claims to such knowledge? Are we under any obligation to take them seriously? Do they even require rebuttal? If they aren't anchored in any objective criteria at all, how could they be rebutted? Indeed, they can't. They can simply be denied.

And this is the position in which cognitivists and anticognitivists find themselves: simply denying each other's unfounded knowledge claims. The anticognitivist says, "We know our fellow humans are conscious." And the cognitivist says, "No we don't -- at least, not in any way that we don't also know that a perfect behavioral simulacrum of a human is conscious."

Gelernter refuses to acknowledge, however, that he and his disputants have reached such an impasse. He insists that the consciousness of his fellows is something he deduces. "We know our fellow humans are conscious," Gelernter says,
but how?...You know the person next to you is conscious because he is human. You're human, and you're conscious--which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.
If there is an argument here, however, it is entirely circular: the sole criterion for ascribing consciousness to our fellow humans is -- they're human!

Gelernter then moves on to the Chinese room, which I discussed yesterday. After rehearsing Searle's argument, however, he adds that
we don't need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?

Well, what does a computer do? It executes "machine instructions"--low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), "branches" (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
The obvious cognitivist rejoinder, as I mentioned yesterday, is that neurons just relay electrical signals, faster or slower, and emit higher concentrations of this or that neurotransmitter. Everything brains accomplish is built out of these primitive operations. If consciousness can emerge from the accumulation of mechanistic neural processes, why can't it similarly emerge from the accumulation of mechanistic computational processes? Again, Gelernter responds by simply identifying consciousness and humanness, without any argumentative support:
The fact is that the conscious mind emerges when we've collected many neurons together, not many doughnuts or low-level computer instructions.
I.e., the sole criterion for ascribing consciousness to collections of neurons, rather than collections of logic gates, is -- they're neurons! QED.

If Gelernter were to read these posts and conclude that, in fact, his essay consisted entirely of non sequiturs and circular arguments, neither of which I think is likely, I would nonetheless expect him to maintain his anticognitivist stance. While cognitivist arguments can, I believe, show that anticognitivist arguments prove nothing, neither do they prove anything themselves. But as a Wittgensteinian pragmatist, I take this to show that the distinction between cognitivism and anticognitivism is meaningless. I agree with Gelernter's assertion that "there is no objective way to tell whether some entity is conscious", whether, ultimately, he himself does or not. And I think that the upshot is that the very idea of consciousness -- in his sense, consciousness as the possession of qualia -- is one on which we can get no intellectual purchase.

Wednesday, June 27, 2007

Uplift the bytecode!

MIT's Technology Review magazine has published a long essay by Yale computer scientist David Gelernter that addresses some of the best-trodden arguments in the philosophy of mind with somewhat less aplomb than you might expect from a bright 11-year-old. This is mildly distressing to me, both because the central topic of the essay -- the possibility of conscious machines -- is one to which I've devoted a lot of time and energy and because in my day job, I'm a copy editor at Technology Review. So what follows may be treasonous. On the other hand, I've read the essay carefully, several times, so I'm intimately acquainted with all its flaws. (I should add that Gelernter appears to have been delightful to work with, and that for all I know, he's a brilliant computer scientist. But if he is, then his susceptibility to circular argument and non sequitur suggests that there may be more to the notion of philosophical training than we Wittgensteinians/Rortians tend to think there is.)

Gelernter appears to swallow whole what I'll call the Original Statement of John Searle's "Chinese room" thought experiment. The Original Statement should be distinguished from succeeding restatements because it, unlike them, is transparently fallacious. (I think that the restatements also fail to make the point Searle and others hope they will, but I agree with Rorty that their proponents and opponents beg all questions against each other. Or almost all.)

In the Original Statement, Searle asks us to imagine that someone has devised a computer program that can pass the "Turing test" in Chinese. That is, a native Chinese speaker typing questions and remarks into a computer and receiving replies generated by the program would be unable to tell whether or not she was actually instant-messaging another person. Now suppose that, instead of executing the program on a computer, Searle executes it by hand. He's locked in a room -- the Chinese room -- and sets of Chinese symbols are slid to him under the door. According to instructions in a thick manual, he correlates the symbols he receives with another set of Chinese symbols, which he slides back under the door -- the program's output.

Searle doesn't understand a word of Chinese; he's just lining up symbols with symbols (a process that may require a few pencil-and-paper calculations). And from this he concludes that the room doesn't understand Chinese either.

Now, I would have thought that the fallacy of that conclusion was obvious, but history has shown that it isn't. Who cares whether Searle can understand Chinese? He's just a small and not very important part of the system -- what Dan Dennett has called a "meat servo" -- and it's the system that understands Chinese.

Searle's role is analogous to that of the read/write head in the magnetic-tape memory of an old computer -- or perhaps the laser diode in the CD tray of a modern-day Dell. His job is just to fetch data and shuttle it where he's told to. Saying that the Chinese room can't understand Chinese because Searle can't is like saying that my computer can't play chess because the diode in the CD tray can't.

In the paper in which he proposed the Chinese-room thought experiment, Searle actually anticipated this objection (which might make you wonder why he bothered with the Original Statement at all), which he sensibly called the "systems reply". I don't find his rejoinder to the systems reply convincing, but for present purposes, that's irrelevant. Because Gelernter doesn't even get that far.

After declaring, "I believe that Searle's argument is absolutely right", Gelernter goes on to propose a thought experiment of his own, one that runs, in part, as follows:
Of course, we can't know literally what it's like to be a computer executing a long sequence of instructions. But we know what it's like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it's like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That's what it's like.
Well, no, Dave, that's not what it's like. Again, that's what it's like to be the CPU. But the CPU, like Searle in the Chinese room, is just a small part of the system.

Gelernter's argument is analogous to saying, "The corpus callosum shuttles electrical signals between hemispheres of the brain. You want to know what it's like to be a corpus callosum? Well, imagine standing next to a computer with a USB thumb drive plugged into it. When the computer sounds an alert, you take the USB drive out and stick it in another computer. When that computer sounds an alert, you stick the USB drive back in the first computer. That's what it's like to be a corpus callosum. Therefore humans can never be conscious."

Notice that I am not here making the standard argument that neurons and neuronal processes, taken in isolation, are every bit as mechanistic as logic gates and binary operations. (I'll take that one up tomorrow.) Instead, I'm reproducing what we might call the synecdochal fallacy, common to both Searle and Gelernter, of substituting the part for the whole.

I'm sure that at this point I've taxed the patience of anyone who's not as much of a phil o' mind nerd as I am, so I'll stop for now. But tomorrow I'll address a couple of Gelernter's fallacious arguments that are all his own.

AMENDMENT (6/28/07, 5:20 p.m. ET):

A correspondent (who shall remain nameless) objects to the following line:
"He's just a small and not very important part of the system -- what Dan Dennett has called a 'meat servo' -- and it's the system that understands Chinese."
The objection is this:
"It's no good saying, 'The system understands,' because that's what's at issue."
It's a good point and may suggest that philosophy, which demands an incredibly high level of linguistic precision, should not be undertaken in blogs. But I plan on ignoring that suggestion, in the hope that my readers will read me with charity.

What I should have said, instead of "it's the system that understands Chinese", is
"It's the system's ability to understand Chinese that's in question."
The point was just that the Chinese-room thought experiment falls prey to the synecdochal fallacy. I didn't mean to imply that the refutation of the Chinese-room argument proves the possibility of conscious machines.

Tuesday, June 26, 2007

Title: Title

The name of this blog comes from a poem by Philip Larkin, the conclusion of which Virginia Heffernan reproduces here. As Ginny points out (does anyone call her Ginny? I don't know; I don't know her. But "Virginia" sounds too formal for a blogospheric cross reference, to say nothing of "Heffernan".), Richard Rorty made much of the phrase "blind impress" in his book Contingency, Irony, and Solidarity, which is why it lodged in my mind (although I had my own Larkin fixation before I started reading Rorty, thank you very much). Before my band, the Hopeful Monsters, fell apart recently, we had planned to cut a new album, and I'd been secretly scheming to call it Blind Impress. Maybe I'll still use that title if I ever make another CD, but in the meantime, thanks to my friend Tim, I guess I've found another way to use it as a personal slogan.

Rorty used "blind impress" to describe collections of his titular contingencies -- the biases, beliefs, obsessions, and convictions that a person acquires over a lifetime. His point in using the word "contingency" is that the forces that shape us are arbitrary and historically conditioned; he hoped the idea of a "blind impress" would replace that of an "intrinsic nature", just as the notion of a "historically conditioned bias" would replace that of "apprehension of ahistorical truth/virtue through the uniquely human faculty of reason".

So there are several reasons that I think Blind Impress makes a good title for my blog. The first, obvious one is that I'm going to be writing about, among other things, philosophy and literature, and I'm sympathetic to both Rorty's philosophical stance and Larkin's aesthetics. Another is that Rorty's notion of contingency spares me the trouble of trying to find something common to music, literature, film, and philosophy that lets me rope them off from the rest of culture -- from, say, painting and economics. There is no such common feature: these just happen to be the things I'm interested in. (Actually, I'm interested in painting and economics, too. I just don't feel I have the authority to address them. In the four areas I'm restricting myself to, I think I know what I'm talking about.)

Another reason is that I want to emphasize that, in making the aesthetic and philosophical judgments that I am surely going to make, I am aware that I'm simply indicating my own historically conditioned biases. (If I hadn't wanted to be thought smart, I probably wouldn't have fought my way through Ulysses for the first time; if I'd been better at sports in junior high, I probably wouldn't have placed so much value on being thought smart; etc.) That of course raises the question of why I would consider it worthwhile to attempt to broadcast those judgments in the first place. All I can say is, I've profited from engaging with other people's attempts to justify their own arbitrary biases, and I hope other people will profit from engaging with mine.

Finally, there's the reason that I wanted to use "Blind Impress" as an album title in the first place. This may not end up having a lot to do with this blog, but the songs that I've been writing for the last couple years adopt, I think, a slightly removed perspective on their subjects. They pull back a little from the immediate passions or contending systems of values that they describe and attempt to locate them in a larger ecosystem of contingencies. Some people may consider that a defect, but whatever. It's the stance that I've been historically conditioned to adopt.

Monday, June 25, 2007

Tim told me to create a blog

I'm just doing what I'm told.