Speech is the most effective and common method we use to connect with our fellow humanity, but what about when we don’t have it? Drew Turney looks into how – and why – we communicate when we don’t have the words.
Making social connections with other people is not only a pivotal part of the human condition, it’s the reason we’ve been so successful as a species even though we were slower, weaker and duller-toothed than many of the plains animals we competed against when we first came down from the trees.
The drive to connect and communicate is in fact so strong it exists even in the absence of the communication methods most of us know – it’s a drive we can’t hold in any more than the instinct to eat or sleep.
When you’re an adult who’s been raised in a healthy environment and isn’t afflicted with the mental illnesses or developmental disorders that can affect communication, understanding others by speaking and listening is so seamless and easy you can’t imagine not having it at your disposal.
But there are plenty of examples where we can’t (or don’t need to) use speech. Asperger’s syndrome and autism are characterised by problems absorbing and synthesising social cues in communication. Any parent will tell you their pre-speech baby makes its wishes very plainly known. The hearing impaired can talk very effectively using sign language.
When Homo sapiens first arose 200,000–300,000 years ago we didn’t even have language. Somehow the complexity and richness of today’s languages had to evolve from a series of vocalisations that meant anything from ‘predator over the hill’ to ‘I want to mate with you’.
Mating itself remains one of the areas where communicating without words is expressed with the most finesse. Entwined with an established partner, we employ all kinds of non-verbal vocalisations and movements that communicate very specific pleasures and desires, actions that only make sense with that person and in that very specific context.
We also can’t discount how much social signalling is done using body language in any kind of social construct. Psychology professor Albert Mehrabian coined the famous 7-38-55 equation, referring to the proportions of communication that are made up of words, tone of voice and body language.
Though it’s been disputed and misinterpreted plenty in the years since Mehrabian came up with it, no one who’s ever held a conversation doubts the part our stance, vocal range, eye contact and any social touching (a handshake, a shoulder pat) play in getting a message across.
The emotional lynchpin
The reason so many communication systems work without speaking is because the drive to socially connect and the reason communication always finds a way have the same basis – emotion.
“Emotion is the filter through which we process information,” says Dr Froswa Booker-Drew, who holds a PhD in Leadership and Change from Antioch University with a focus on relational leadership and social capital. “Our responses are based on how we feel and how we interpret information. Emotional recognition is the most basic and fundamental form of affirmation. It serves as the foundation of how we see ourselves and how we connect to others.
“We live in a culture of emotion. Our ideas and experiences are connected to our influences and the way we see and experience the world. These experiences invoke emotional responses.”
In speech therapy and the treatment of speech disorders, the key is to look for the emotional intent in the individual. Erin Stauder, MS, CCC/SLP, is the executive director of Baltimore’s Hearing and Speech Agency, and she says that even when words aren’t present, doctors, psychologists and speech pathologists can analyse and interpret communicative intent.
“Before children have words they’re communicating emotions to their parents or caregivers through smiles, coos, screams or cuddles to express displeasure, discomfort, joy, happiness, love or disgust,” she says. “Communicating your emotion is that first step and parents know how their babies are feeling.”
And even though the underlying neural programming is complicated and mysterious, neuroscience agrees that emotion is the lens through which we experience all things, including social connections. “On some level, most communication involves a perceptual or intentional aspect that’s reliant upon our emotional evaluation and experience,” says Dr James Giordano, Professor of Neurology and Biochemistry at Georgetown University Medical Center, Washington, DC.
“That applies either in the moment or reflective of a prior event, descriptive of or intentional toward a future event. So it seems almost any form of interpretive or expressive cognition will involve an emotional component. It’ll evoke some self-referential feeling about what’s being communicated. Even if that feeling is ‘neutral’, it’s still experienced.”
Loss of signal
So how does the brain manage or compensate when emotional cues are absent or incomplete? The challenges of autism or other disorders are one thing, but as technology has put more and more people within reach of our spheres of communication, it’s stripped away more of the aspects that make communication effective.
We evolved to speak or articulate directly to one person or a small group, with all the information that words, body language, vocal inflection and gesture contain. A century ago the telephone let us talk to people who weren’t right there, but it stripped out all the body language we bring to bear. Every email goes further, missing the vocal range and timbre of speech and delivering mere words and nothing else (say ‘yeah, I love sales calls during dinner’ in a sarcastic voice and your meaning will be clear; write it in an email and that context is absent).
We’ve all sent a social media update or a text that didn’t come across as we intended. The reason is that our emotional state – and its translation into words – is obvious to us when we select the words to use, but when our message lacks the power of all those non-verbal communication tools, we forget how hard it can be to convey the meaning we intend.
It’s the same reason why, when we get an email that makes us angry, it’s a good idea to wait 24 hours to reply. In the heightened passions of the moment, we’re going to convey our emotional state using very different words than those we’d select when we’ve cooled off a bit.
“Wherever nonverbal markers aren’t available, as in online communication, we should proceed with an extra level of caution when selecting words, organization and punctuation to make sure the message is received in the spirit it was delivered,” agrees Stauder (adding that we should keep such constraints in mind when we’re the receiver too).
But even in the absence of sound, body language and all the other elements of face to face communication, new tools to convey emotion find a way. Verbiage and syntax in emails or texts can of course be used skilfully to convey emotional tone, and James Giordano reminds us of all the informal communication frameworks technology offers.
“Think of the email message in all caps with multiple exclamation points or the difference perceived by using Times New Roman versus Jokerman fonts,” he says. “Simply put, we try to make sense of information we receive and employ our neurological communicative networks to interpret whatever information we have to establish ‘signals’ about the meaning of such input.”
Communication in the brain
We don’t need to be told by a neuroscientist that communication happens in the brain – even the subliminal and unconscious cues it sends the muscles to telegraph our mood through body language come from synaptic impulses that respond to our emotional state and our appreciation of the environment we’re in.
So are the means for non-verbal communication a kind of innate mental property, the way many linguists have regarded the capacity for language since Noam Chomsky’s Universal Grammar theories? Does the drive and ability to communicate simply develop in a particular brain region regardless of the method used to convey it to the world outside – be it sign language, the happy gurgling of a baby or the soft hooting of a proto-human on the plains of Africa?
An Oxford University study from as far back as 2008 that looked for differences in the brains of deaf and hearing people saw only superficial results. ‘There is no indication that (for instance) regions that support auditory processing in hearing people are of smaller volume in deaf people,’ the authors said.
But as with everything neurological, the question isn’t anywhere near so cut and dried. “While different sensory modalities engage distinct pathways and areas of the brain, the ability to communicate seems to involve temporo-frontal and limbic networks, regardless of the mode of perception and expression,” says James Giordano.
Sensory inputs first processed by other areas of the cortex – vision in the occipital lobe, hearing in the temporal lobe, touch in the parietal and posterior frontal lobes – can engage pathways to the temporo-frontal cortex, which functions in the integrative cognitive and motor aspects of communication.
The network joins the temporal lobes of the left and right hemispheres, which seem to be involved in slightly different interpretive aspects of visual and auditory communication, such as expression and intonation. They function in concert with sub-cortical networks of the hippocampus and other limbic structures involved in memory and emotion.
But however the brain, mouth or any other platform to convey emotion does so, it seems the need to connect socially has not only been with us for longer than many of those methods, it’s been the driver behind all of them.
Sidebar – levels of connection
In considering a one-line email with no punctuation and poor grammar versus the unspoken language shared by lovers subsumed by passion, can we suppose that non-verbal communication exists along hierarchies of the depth or fidelity of the information conveyed? Maybe simpler methods are recruited for the broadest of social constructs (like talking to many people with little common ground) and more finely tuned systems for more intimate bonds (like ‘in-jokes’ shared by close friends or spouses).
“Fundamentally, every situation comes with its own unique set of socially agreed-upon parameters,” says Erin Stauder. “A presentation given by a company executive is expected to have a written message on a screen and the delivery of a verbal message together with body language – nonverbal cues that help the audience interpret the message. In that instance, different modalities are used to effectively communicate.”
But she thinks ‘continuum’ might be a better word than ‘hierarchy’, simply because of the complexity and richness of the non-verbal communication methods we use in different situations. “If you consider an email versus a sexual encounter, that hierarchy is related to the role of emotions and the senses – email needs limited use of the senses, whereas sex uses a variety of sensations to communicate.”