Does the human voice have more meaning to others than words do? If not, then why do humans respond differently to spoken language than to the written word? A report in the news offers new information that might help explain the difference, although some may have missed the significance of this news.
“You Bug Me. Now Science Explains Why,” broadcast on Morning Edition on 17 May 2011, tells us why some people are so annoyed with cell phone users when trapped in the same room, vehicle, or outdoor space as the user. It describes research suggesting how the brain works to understand words in a less-than-optimal listening environment, and how what is heard in a conversation affects mood. Since speech can be more effective than a written essay containing the same words, some of the unconscious content might include more than emotion. This post includes both straight reporting and the author’s observations and opinions on this research.
Updated: June 6, 2011
Science can explain why certain things are irritating, say Flora Lichtman (NPR Science Friday) and Joe Palca (NPR reporter), authors of Annoying: The Science of What Bugs Us. They each discussed what annoys them: certain sound frequencies, delays, toenail clipping. Renee Montagne says she finds the sound of someone talking on a cell phone annoying. Flora Lichtman refers to scientific studies of “verbal shadowing,” in which a person listens to someone talking and repeats what is heard as soon as it is heard. It appears that people can do this with very little time lag.
The problem with a cell phone conversation is that it is very hard to predict what the cell phone user is going to say next, because the non-participant observer cannot hear both sides of the conversation. Flora Lichtman mentions a study by a Cornell graduate student, Lauren Emberson, who recorded a cell phone conversation in which the visible cell phone user could be understood, while the voice of the person on the other end was audible but not comprehensible (the words were garbled). The annoying effect went away. Flora Lichtman says the annoying part is not just about sound intruding into a listener’s space; it seems to be something about the speech itself.
What I Posted at NPR
The description of the annoying cell phone caller was particularly intriguing. In the Cornell University graduate student’s study, a listener heard a cell phone conversation under two conditions: in one, both speakers were audible to the listener; in the other, only the nearby cell phone user could be understood, while the second speaker’s speech was garbled. Flora Lichtman concludes that something about hearing the second caller’s speech, even unintelligibly, reduced annoyance. This is yet another case where speech seems to carry content unrelated to intelligible words.
A person might get information, e.g., concepts, from someone’s voice even without understanding any of the words. That information would let the listener’s nervous system read the state of the speaker’s nervous system, picking up intent or need. As social beings, humans would understandably use this information. Even nonsocial animals need to understand the intent of another species or of a conspecific, especially when they could be harmed.
An Expanded Analysis
So much of this research suggests that the voice carries more information than the words convey. It is reasonable to suspect that all animals can pick up, from the sound of a voice, some data about the physical well-being of the animal making the sound. Anthropologists studied this in humans and found that both men and women accurately predicted upper-body strength from a person’s voice, regardless of whether the language was known or unknown (Sell et al., 2010). The voice may convey not only physical characteristics but emotional status as well. It stands to reason that an animal could predict the intent of another animal, especially one that could pose harm, e.g., a predator or a threatening conspecific. This should be true because most animals hide from such threats, often in places where the threatening animal cannot be seen.
It is also reasonable to suspect that most babies can sense the intent of other people from their voices. This might explain the tendency of very young children to lie very quietly or crouch behind something when presented with a strange person. To the adult, who is listening only to the words of this person, there is nothing to fear. However, that person might be thinking of something which the child misinterprets as dangerous or uncomfortable. The tendency to get information from the content of a voice might also explain why children respond this way to some people and not others. Intent in its full complexity, however, would have to be learned over time. The child’s first instinct is self-protection until he/she learns how to interpret the signals he/she receives. It would, however, be adaptive for all very young children to respond similarly to the same bad intent, even if it might be misinterpreted.
A scenario may help explain this concept. Communication of intent can be deceitful, although not intentionally so. For example, a visitor walks up to the open door, the mother greets the visitor, and the visitor responds. At the same time, the visitor might be feeling really annoyed at the barking dog next door and “wants to kill that dog.” The visitor’s voice would carry that intent, although the words would not, and no one hearing the words would ever conclude that the visitor meant harm to anyone. The child, not realizing that the unverbalized intent was not meant literally, would become extremely shy, seeking protection from this visitor, even though the child later learns that the visitor is always nice.
If spoken words carry more than verbal content, one would expect students to learn more from a course when they attended the professor’s lectures than when they stayed away and only read the words of the lecture transcript. Indeed, there have been studies of nursing students in a physiology course in Iraq who had the choice of attending lecture or having access to the written lecture material at another time. Those who stayed away from class earned, on average, about 5 points less on tests per absence than those who attended [I can’t find the article right now, but Hammen & Kelland (1994), in a similar study of second-year college students, found a 2-point drop in scores per absence; they measured only attendance and did not say what learning materials other than textbooks were made available to the students].
When humans speak or make a sound, not only do the vocal cords vibrate, but so do other structures in the body, including others within the larynx, the sinuses within the bones of the head, the jaw, the lungs, and the entire thoracic cavity. When a person is sick with a cold, congestion in the nose, throat, and ear passages changes these vibrations, and the person can hear that change from within his/her own head. A person can hear the rattle in his/her lungs when there is pulmonary edema. The sound travels up the bronchi and trachea all the way to the ears from within the head, not by sound waves striking the tympanic membrane from the external auditory canal. The vibration can be felt as it is conducted toward the cochlea, in part via the Eustachian tube and middle ear. There is a clear change in the sound frequencies that can be sensed.
There is a science of Bioacoustics, now called Vocal Profiling (the name probably changed to avoid confusion with the science of measuring sounds in the wild), developed by Sharry Edwards. She suggests that the voice carries sounds that reflect the status of internal organs. It has been applied most specifically to vocal therapy, but it has also been used to detect changes in frequencies that correlate with dysfunction of particular organs elsewhere in the body. She is now studying Traumatic Brain Injury and using the technique to predict PTSD in soldiers. Vocal Profiling has given rise to computer software (VoiceBio) designed to detect organ dysfunction.
There are numerous examples of training the brain to detect sound. However, there is also the likelihood that the brain can learn to interpret sound differently, depending upon the source, and in what medium the sound waves travel. A personal example can help explain these statements.
When using mindfulness techniques to train my brain to locate toxins in my body or to repair tissue, I found that the brain is best trained when I speak aloud. I suspect that the sound traveling into the ear, and the signals transmitted from the hair cells of the cochlea to various parts of the brain, carry more information than just the words I speak. As I think about the words, I generate patterns of thought that translate into specific circuits of signal transmission, now associated with the sounds of the words as well as other information about the status of my nervous system. The brain categorizes thoughts to make it easier to “run associations” among categories; this is the basis of learning and information processing. The additional information carried by the sound of the voice, arriving through different pathways from the ones used by internal thoughts, helps the brain categorize the information much more thoroughly, allowing it to retrieve the information unconsciously and far more efficiently than if the words had only been spoken silently to oneself.
So word recognition may be only part of how a brain understands what a person is saying. The content of the voice may include all the physical characteristics of sound waves propagating from soft tissue that is healthy or damaged. Even more profound, a person may impart to that soft tissue properties that tell others what he/she is thinking about, whether expressed in the chosen words or in the quality of the voice.
Check out “Hearing Ability is Related to Emotions” for more details about sound and the brain.
Hammen, C. S. & Kelland, J. L. (1994). Attendance and grades in a human physiology course. Advances in Physiology Education, 267, S105-S108.
Sell, A., Bryant, G. A., Cosmides, L., Tooby, J., Sznycer, D., von Rueden, C., . . . Gurven, M. (2010). Adaptations in humans for assessing physical strength from the voice. Proceedings of the Royal Society B, 277(1699), 3509-3518.