Trading Fours

There are few more thrilling experiences in musical performance than watching jazz players trade fours. Each musician plays spontaneously for four bars before the next takes over, picking up the tune and creating a cohesive whole from individual contributions.

It’s the ultimate in musical collaboration: created off the cuff, rich in detail, and gone forever as soon as the last note is played, none of it pre-written or intended to be played again.

To the casual observer, trading fours looks a lot like a conversation carried on through musical instruments – individual viewpoints and expression, kept in sync by shared ideas, constructed with no forethought beyond a common concept and sustained simply by the experience of having a vocabulary and putting it to use on the fly in response to new input.

The same idea struck Charles Limb of the Department of Otolaryngology-Head and Neck Surgery at Johns Hopkins University School of Medicine. He led a study that put jazz pianists into fMRI machines, gave them special keyboards they could watch in mirrors, and scanned their brains as they played together.

The results were fascinating. Just as it appears, trading fours seems to be another kind of conversation. While the musicians played, brain areas partly responsible for processing phrases and sentences lit up, just as they do when we’re talking.

“Until now, studies of how the brain processes auditory communication between two individuals have been done only in the context of spoken language,” says Limb. “But looking at jazz lets us investigate the neurological basis of interactive, musical communication as it occurs outside of spoken language.”

To Dr James Giordano, Professor of Neurology at Georgetown University Medical Center, Washington, DC, the findings make sense. “A growing body of research indicates neurological predispositions to be sensitive to types of sounds, and that both music and language involve some common pathways and networks in the brain,” he says. The evidence, he believes, suggests that elements common to both speech and music – the capacity to sense intonation, melody and rhythm – set the stage for the brain to extract context through language.

In other words, adds Augusto A Miravalle, MD, improvisational jazz conversations take root in the brain as a language. A neurology professor at the University of Colorado School of Medicine, he explains which regions are responsible. “[It’s] characterized by activation of perisylvian language areas linked to processing of syntactic elements in music, including inferior frontal gyrus and posterior superior temporal gyrus, and deactivation of angular gyrus and supramarginal gyrus, brain structures directly implicated in semantic processing of language.”

Communication architecture?

What does all this mean? We tend to think of language as a fairly elemental property of the human brain, especially since the pioneering work of linguist Noam Chomsky popularized the idea of a brain hard-wired for speech.

But what if Limb and his colleagues’ jazz experiment reveals something deeper? Maybe there’s a built-in central framework for communication of which spoken language is only one application, like a computer operating system that can run plenty of programs on the same underlying hardware.

Perhaps an even more interesting finding from Limb’s study was that the musical ‘conversation’ dampened activity in other brain areas. Even as phrase and sentence processing was on alert, areas linked to the semantic processing of spoken language – the extraction of meaning from words – were quieted.

That sounds like a contradiction, but where speech leans on semantics, music leans on syntax, and it was syntax that carried the communication among the musicians as they traded fours. The logical conclusion, then, seems to be that playing music back and forth uses whatever communication architecture we have, but uses it quite differently than speech does.

It’s another thesis Professor Miravalle agrees with. “Music and language are both complex systems of auditory communication that rely upon an ordered sequence of sounds to convey meaning,” he says. “This basic commonality between music and language raises the possibility of a shared network of neural structures that subserve these generative, combinatorial features.”

Of course, if there is an application-independent communications framework, it’s as mysterious as many of the brain’s other functions. Dr Giordano reminds us that those applications need not be linked in any functional way: just because you can speak, for instance, doesn’t mean you can appreciate music (in the sensory as well as the aesthetic sense of the word).

“People who can’t hear certainly develop and possess full capacity to engage and comprehend the meaning and intent of visual and gestural communication,” he says. “In fact they often possess considerable abilities to intuit nuanced cognitive, emotional and situational meanings through perception and transmittal of rather subtle visual cues.”

However it works, the argument that we come with a communication framework built in seems strong – it’s only in the absence of a working input channel (hearing, in the case of the deaf) that a particular application such as spoken language fails to develop.

Composer and researcher Kenneth K Guilmartin is the founder and director of Music Together, an organization that uses music as part of early childhood learning and development, and through his work he’s seen the communication potential, predating speech, that is built into every brain.

“Just as we’re all born with the potential to speak our native language, we’re all born with enough music ability to learn to sing in tune and move in time, as long as parents and other primary caregivers provide an adequate music environment during early childhood,” he says.

Is language who we are?

But here’s an even more intriguing possibility. According to Giordano, the brain regions that are involved in conversation also help form cognitive concepts about the meanings of objects and situations, and how actions affect the self and others.

So might a native communications substrate – to the extent it exists – also have a very real impact not just on our ability to absorb the world around us but on how we decide we feel about it? As those regions fire to assimilate and sort the sensory cues of speech or music, might they also be automatically forming them into larger ideas (and ideals) about what we think once we’ve absorbed them? Might they be one of the bases of constructs like beliefs and morals?

We don’t need careful experimentation to prove that music can affect mood and emotional response, after all. Even without lyrics that make objective sense, it exerts any number of effects on us. “Music can foster emotions and thoughts of reference to evoke significant meaning,” Giordano says. “Think of the effect of theme songs and anthems. Clearly different types and patterns of music – and language – can evoke very distinct cognitive and emotional influences, from pleasing or rousing to repulsive or calming.”

So if there is a baseline communications operating system in the brain, how can we use our knowledge of it to our advantage? One method is to use exposure to music to engage the neurological networks involved in communication and motor skills, an approach many studies have supported. That opens the door to using music in medicine, education and media to quite targeted effect.

Giordano says music-based therapy has been shown to help stroke and head-injury patients recover, for example. “We’re also beginning to understand how music might be used to optimize learning and performance – not only of language, but of a host of other skills and types of knowledge,” he adds. “While not necessarily a new concept, it’s being viewed again in light of recent neuroscientific evidence.”