Hearing Ability is Related to Emotions

Can listening to music prevent hearing loss? Or is the study a bit too eager to point to a causal effect when no experiment was done?

Comment on “How Music May Help Ward Off Hearing Loss” on Morning Edition, 22 Aug 2011: emotional trauma may better explain this type of hearing loss.

Human Left Ear, from David Benbennick at Wikimedia

Summary

NPR reporter Patti Neighmond interviewed two researchers. Nina Kraus (Auditory Neuroscience Lab, Northwestern University) has studied age-related hearing degradation, in particular the loss of the ability to distinguish similar sounds (“f” vs. “th”) as well as the brief bits of silence between words. The loss of these abilities, along with memory losses, leads to not being able to keep up with a conversation where people talk fast, because the listener is still trying to process the difference between “f” and “th”, or to parse “ashortfatlittlebody” vs. “a short, fat, little body”. The problem is especially apparent in a crowded restaurant or anywhere with a lot of extraneous noise.

Kraus noticed that people who played musical instruments had better hearing than those who had not. She suspects this is because musicians pay attention to subtle differences in sound. Musical training helps one distinguish between high and low frequencies and between different tones, and to monitor the timing of sound production, better than no training at all.

When 45- to 65-year-olds were tested on their ability to repeat sentences spoken aloud against various backgrounds (a quiet background, moderate background chatter, and loud background babble), 40% more musicians than non-musicians were able to distinguish the sentences from the background noise. Musicians also remembered the sentences better. Conclusion: at any age, musicians were more capable of following a conversation than non-musicians.

There is no evidence that starting to play a musical instrument will prevent this kind of hearing loss. However, animal studies suggest that “age-related” hearing loss can be reversed. In one study, young and old rats were trained every day for a month to detect a distinct but subtle sound difference. Although the young rats learned more quickly than the old ones, all of the rats improved their ability to detect the higher-pitched beep. The scientists also found anatomical changes, with training, in neurons of the auditory cortex.

 

My Comment Posted at NPR

The research reported in this clip is an example of work that recognizes that hearing loss is a complex event, governed by several different aspects of our biology. However, it makes it “sound” as if the only things affecting what we hear are the mechanical apparatus for sound conduction and the nervous transmission of sound quality to the brain–not how the brain processes sound, which depends on more than the simple inputs of frequency, loudness, complexity, and timing: it also depends on social circumstances, the type of source (animal, human, someone we know or don’t know), and the state of our emotions and physiology at the time we hear the sound. Furthermore, the fact that this type of hearing loss is “age-related” does not necessarily point only to a mechanical or nerve-transmission breakdown with time.

The emphasis on learning to filter extraneous noise and to concentrate on one sound in order to improve hearing rests only on the assumption that, with age, we have more difficulty concentrating and/or suffer memory loss. That may be a valid observation, but it is, as yet, untested.

No researcher has taken a young person with excellent hearing and made them prematurely “aged” and found that person to have suffered from the symptoms described in this report. No researcher has taken a young person with excellent concentration and memory skills, made them prematurely “aged” and found that person to have lost some of these capabilities. There may be other reasons for losses in hearing and memory with age, since not every elderly person suffers from these losses. There is no inevitability to this loss. Strong associations do not mean cause and effect, and this principle applies to other concepts in this report, as well.

No experimental test has been done on humans showing that learning to play a musical instrument well, or listening to music, will “repair” what was damaged in the brain–only in rats. And rats are not a simple stand-in for people: they have extraordinarily sensitive hearing for certain frequencies, including ultrasonic sounds far above anything humans can detect.

Toxic substances in the lab can cause hearing loss in “aged” rats (roughly two years old counts as aged) simply because they have lived that long while exposed to toxins. Labs are notorious for making no attempt to control the levels of toxins present, since they rely heavily on extremely strong chemicals to clean cages. These chemicals are not environmentally safe, and they kill all microbes, even the beneficial ones. The only ecosystem researchers want is a fake one.

Most importantly, although words are made up of sounds, a sound is not a word. The type of human hearing loss they describe in this report involves a higher level analysis of sound as it relates to our social human life, and thus, will involve emotional input more heavily than just learning to pick out a particular sound.  Once sound reaches the level of making up a word, it requires a lot of other brain activity to step in, and all of it depends heavily upon emotional input. 

I suggest that emotional traumas may have a bigger effect on our hearing capabilities than the quality of a sound has on picking out words in a conversation. Furthermore, a lot of unconscious hearing occurs that most of us would not consider as contributing to losses in hearing capability. (See my blog posting on “Hearing Ability is Related to Emotions” at https://marthalhyde.wordpress.com/2011/08/29/hearing-ability-is-related-to-emotions; I am still adding pics and more text to it, since the article could not be finished in a week.)

At the site mentioned in the previous posting, you will find more discussion of how the hearing circuitry, and the associations we make with certain words or sounds, could explain the types of hearing loss that can occur with age. See also my blog post on “Using MRT: Removing Toxins and Emotional Trauma” for a discussion of our unconscious senses.

 

Anatomy Affects What Is Heard

Levels of Sound Analysis

In order to understand the complexity of hearing and all of its implications, look at the pathway that sound takes to enter the brain and what happens to that signal inside the brain, afterward.  The anatomy shows how sound gets translated into a certain level of complexity, e.g., speech, as opposed to a beeping sound. This is done by the brain filtering the electrical signals.

There are many filters in the brain. They filter sound according to location and quality (a toy sketch of these attributes appears after the list):

  1. where inside the cochlea they come from
  2. where outside of the body they come from
  3. the type of source (a person or something else)
  4. loudness
  5. tone (frequency)
  6. complexity of sound
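
Purely as an illustration (my own sketch, not something from the NPR story or the sources cited below), these six attributes can be pictured as fields in a record that successive processing stages fill in. The stage names, field names, and the crude “speech band” rule are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of the six attributes listed above; each brain "level"
# would fill in the fields it is responsible for. All names are invented.
@dataclass
class SoundEvent:
    cochlear_place: Optional[float] = None  # 1. position along the basilar membrane (0 = apex, 1 = base)
    azimuth_deg: Optional[float] = None     # 2. direction of the source outside the body
    source_type: Optional[str] = None       # 3. person, animal, or something else
    loudness_db: Optional[float] = None     # 4. loudness
    frequency_hz: Optional[float] = None    # 5. tone (frequency)
    complexity: Optional[str] = None        # 6. pure tone, noise, speech, ...

def brainstem_stage(event: SoundEvent, frequency_hz: float, loudness_db: float) -> SoundEvent:
    """An early 'filter' tags simple physical qualities of the sound."""
    event.frequency_hz = frequency_hz
    event.loudness_db = loudness_db
    return event

def cortical_stage(event: SoundEvent) -> SoundEvent:
    """A later 'filter' adds an interpretive label (a crude placeholder rule)."""
    in_speech_band = event.frequency_hz is not None and 100 <= event.frequency_hz <= 4000
    event.complexity = "speech-like" if in_speech_band else "non-speech"
    return event

event = cortical_stage(brainstem_stage(SoundEvent(), frequency_hz=440.0, loudness_db=65.0))
print(event)
```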

All of these filters sit at various “levels” in the brain and recruit other circuits at each level. The entire picture is presented here so that readers can see how the way sound is interpreted at each level informs many other higher-level, conscious acts, such as thinking, learning, analyzing something read or heard, categorizing the world around the listener, or processing memory. The reader can then grasp how these higher-level, conscious acts get programmed by the anatomy.

Most psychologists assume that the processing of sound takes place only at the neocortical (conscious perception) level. However, as outlined below, processing takes place at all levels of the brain, and mostly in the unconscious parts. As a result, it has been difficult to determine which qualities of a heard sound are processed unconsciously. This is why a person can, at times, fail even to notice that someone spoke, even though others witnessed it.

Since memory recall involves recovering the concepts a person heard, even when the exact wording or sequence of sentences cannot be recalled, it stands to reason that a lot of the content heard is not consciously perceived. The anatomical view presented below includes possible functions of the centers involved; some have been experimentally tested, while others are hypothesized as necessary.

Unlike most articles on neuroanatomy, this one uses terms for the embryonic forerunners of the major parts of the brain as an orientation point, simply because the circuitous routes of the nerve pathways that result from the folding of the embryonic neural tube (the forerunner of the Central Nervous System, CNS) can be confusing. These pathways do not exist when the neural tube first forms; they appear only after the brain end of the tube has folded up. For simplicity’s sake, the stage of folding at which these pathways are fully completed is not shown.

Fig. 1. Embryonic Brain, before it folds up into the adult brain, from Wikimedia

Many of these figures are not completed yet, so flow charts will be shown instead, using the primitive embryonic major parts of the brain depicted in Fig. 1 above.

  1. Telencephalon (becomes the neocortex, and subcortical nuclei)
  2. Diencephalon (becomes the thalamus [and subthalamus], epithalamus [pineal body et al], hypothalamus [including pituitary])
  3. Mesencephalon (or midbrain, also contributes to Pons)
  4. Metencephalon (becomes the rostral [or anterior] medulla, most of Pons, and cerebellum)
  5. Myelencephalon (becomes the caudal [posterior] medulla)

Figures where the nuclei are seen in a typical adult brain will be mentioned below.

Fig. 2. Anatomy of the Ear
Fig. 3. Anatomy of the Cochlea

Lowest Level Circuitry

An excellent website for medical students studying the neuroanatomy of the auditory pathway can be found at Cue Flash: Glossary of Ascending Auditory Pathways.  The following description refers to Fig. 2 Anatomy of the Ear and Fig. 3 Anatomy of the Cochlea.

First, sound enters the ear and hits the tympanic membrane, causing it to flutter and making the ear ossicles (malleus, then incus, then stapes) wobble. The stapes causes the membrane over the oval window of the cochlea to flutter, which sets up a wave pattern in the fluid of the inner ear. The movement of the fluid causes hair cells on the basilar membrane of the inner ear to move, and this causes their receptors to signal that a particular sound wavelength has produced the movement. The tympanic membrane and ear ossicles help reduce the amplitude of what was heard to one that won’t harm the delicate membranes of the cochlea.

This is how the mechanical apparatus of the ear translates sound into mechanical energy and then into electrochemical energy of neuron transmission. For an excellent website with beautiful figures and explanations on anatomy, function and pathology of the cochlea, see Promenade ‘round the Cochlea.
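
As a rough illustration of the place-to-frequency mapping just described (hair cells at different positions along the basilar membrane responding to different wavelengths), the sketch below uses the Greenwood place-frequency function with its commonly cited human constants. It is a standard textbook approximation, offered here as my own illustration rather than something taken from the sources linked above.

```python
def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Greenwood place-frequency function for the human cochlea.

    x is the fractional distance along the basilar membrane measured from the
    apex (0.0) to the base (1.0); returns the characteristic frequency in Hz.
    The constants are the commonly cited human values.
    """
    return A * (10 ** (a * x) - k)

# The apex responds to low frequencies, the base to high ones.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")
```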

Fig. 4. Cranial Nerve Nuclei, Numbered I-XII, dorsal view of brainstem, neocortex & cerebellum removed. Motor nuclei and fibers are red, sensory are blue. Although not pictured, they appear on both sides of the brain. Dark blue are the three cochlear nuclei (VIII). Modified from Gray’s Anatomy, at Wikimedia (PD).

The cochlear (auditory) nerve sends its signal directly to the cochlear nuclei at the junction of the pons and medulla (Fig. 4), where only those cells representing the specific receptors in the cochlea will fire off. The cochlear nuclei then send their signals to the superior olivary complex in the metencephalon, where the first level of filtration occurs. Its medial and lateral nuclei both sort the sound, but differently (a toy calculation of the timing cue follows the list). Sound is sorted based upon:

  • the time of arrival of the sound at each ear (medial superior olive)
  • the intensity of the sound at each ear (lateral superior olive; for a great picture of this complex in a rat, see bestpicsplace.com).
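
To make the timing cue concrete, here is a toy calculation (my own illustration, not part of the cited research) of the interaural time difference that the medial superior olive is thought to compare, using Woodworth’s spherical-head approximation; the head radius is an assumed typical value.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
HEAD_RADIUS = 0.0875     # m, a typical adult value (an assumption for illustration)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the ITD, in seconds.

    azimuth_deg is the source angle off the midline (0 = straight ahead,
    90 = directly to one side).
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for angle in (0, 15, 45, 90):
    print(f"{angle:2d} degrees -> {interaural_time_difference(angle) * 1e6:5.0f} microseconds")
```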

[Flow chart I]

Both of these nuclei send their signals to the inferior colliculi of the mesencephalon, as do the cochlear nuclei, which project there separately and directly.

Fig. 5. The Reticular Formation of the brainstem and Diencephalon.

The cochlear nerve also sends signals to the nuclei of the lateral lemniscus in the metencephalon, which appear to sort the sound by when it arrives and how long it lasts. Since these signals involve timing, the nuclei send them on to the Reticular Formation (Fig. 5), which forms part of what is called the reticular, or non-primary, auditory pathway (Promenade ‘round the Cochlea). The Reticular Formation holds cells that represent timing parameters (it was once called the Reticular Activating System because it is critical for achieving consciousness upon awakening). These cells in turn signal pontine nuclei associated with the stapedius and tensor tympani muscles, which need to know when a sound begins or ends. Those muscles contract to dampen the signal if it is too loud, since a very loud sound could perforate the delicate oval window membrane of the cochlea or the tympanic membrane.
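
As a loose analogy (not a physiological model, and not drawn from the sources above), this protective dampening can be pictured as a simple gain limiter: below a loudness threshold the signal passes unchanged, above it a fixed attenuation is applied. The threshold and attenuation figures below are illustrative assumptions only.

```python
def acoustic_reflex_gain(level_db: float, threshold_db: float = 85.0, attenuation_db: float = 15.0) -> float:
    """Crude stand-in for the middle-ear (acoustic) reflex.

    Above a loudness threshold, contraction of the stapedius/tensor tympani is
    modeled as a fixed attenuation of the level reaching the cochlea.
    Threshold and attenuation values are illustrative assumptions.
    """
    if level_db <= threshold_db:
        return level_db
    return level_db - attenuation_db

for level in (60.0, 85.0, 100.0, 120.0):
    print(f"incoming {level:5.1f} dB -> cochlea sees {acoustic_reflex_gain(level):5.1f} dB")
```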

Timing signals are also important for parts of the brain that handle physiological responses to sounds (e.g., threatening vs. non-threatening sounds), as well as for centers that allow other planned programming in the brain to occur (e.g., the Red Nucleus in the mesencephalon and thalamic nuclei that regulate activity in the pons–see Behavioral and Emotional Responses to Terror for more about the programming done by the brain that involves emotion).

Fig. 6. Inferior Colliculi. Three regions of layers can be seen here, separated by yellow and blue lines. Bottom of colliculus has black line. (Modified from brainmaps.org)

Higher Level Circuitry

The inferior colliculus (shown, modified, in Fig. 6 above) is layered according to the frequency of a tone. It also has “columns” of cells that interact with each other across layers, each column assigned to a particular place in the auditory cortex where its axons will be sent, organized by azimuth (the angle of the sound source within the horizontal plane). Each layer in the inferior colliculus maps directly from the positions the inner hair cells occupy on the basilar membrane of the Organ of Corti, as do the cochlear nucleus cells that send axons to the inferior colliculus.

The topmost layers are associated with loudness level. The next layers are associated with how the sound relates to something a person can recognize, e.g., the “ah” sound in the English word “father”, the German “Jan”, the Spanish “días”, or the French “ça”, or the sharp sound of a single bark of a dog. To help sort these sounds, these layers presumably require the first input from outside the mesencephalon, from the auditory cortex, where these sounds are probably stored in memory (no one has traced this input, however). The direction the sound came from is determined by activity in the bottom-most layers, where azimuth is most important and both sides of the body are factored into the filter. Each inferior colliculus sends axons to these layers in the colliculus on the other side of the brain.
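
A crude way to picture this layer-by-frequency, column-by-azimuth organization is a two-dimensional grid indexed by frequency band and direction. The sketch below is only an analogy of my own; the band edges and azimuth bins are invented, and real collicular maps are far more continuous than this.

```python
import bisect

# Illustrative frequency-band edges (Hz) for the tonotopic "layers" and
# azimuth bin edges (degrees) for the "columns"; all boundaries are assumptions.
LAYER_EDGES_HZ = [250, 500, 1000, 2000, 4000, 8000]
COLUMN_EDGES_DEG = [-60, -20, 20, 60]

def colliculus_cell(frequency_hz: float, azimuth_deg: float) -> tuple[int, int]:
    """Return the (layer, column) indices a sound would map onto in this toy grid."""
    layer = bisect.bisect_left(LAYER_EDGES_HZ, frequency_hz)
    column = bisect.bisect_left(COLUMN_EDGES_DEG, azimuth_deg)
    return layer, column

print(colliculus_cell(440.0, -45.0))   # a low-ish tone off to the left
print(colliculus_cell(3000.0, 10.0))   # a higher tone near the midline
```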

 

Conscious Hearing in the Auditory Cortex of the Temporal Lobe

The auditory cortex detects whether a sound belongs to a word or not. If it does, the cortex assigns a word to the incoming sounds: individual sounds are combined here into syllables, and syllables are assembled here into words.
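
As a loose analogy for that sound-to-syllable-to-word assembly (my illustration only, not a claim about how cortical circuits actually work), the sketch below pieces syllable-like chunks together and checks them against a tiny invented lexicon.

```python
# Toy lexicon mapping syllable sequences to words; entries are invented for illustration.
LEXICON = {
    ("fa", "ther"): "father",
    ("lit", "tle"): "little",
}

def assemble_word(syllables: list[str]) -> str:
    """Combine syllable-like chunks and look the result up, as a loose analogy
    for the sound -> syllable -> word assembly described above."""
    return LEXICON.get(tuple(syllables), "<unrecognized sound pattern>")

print(assemble_word(["fa", "ther"]))   # -> father
print(assemble_word(["zz", "qq"]))     # -> <unrecognized sound pattern>
```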

The following interpretation won’t be found in the published literature on hearing and the brain. Contrary to what a lot of researchers say, the actual meaning of the word may not be stored in the auditory portion of the neocortex (temporal lobe), since the meaning is essentially a concept or group of concepts that comes from the part of the brain that evolved before the written or spoken word evolved.

Essential vocabulary words like “home”, “safety”, “mother”, “food”, “warmth” all have basic meanings to any vertebrate, although they do not associate a particular sound with those concepts. These concepts are probably represented by cells in the brainstem (mesencephalon and medulla) because they are near where basic physiological responses by the body must be regulated. The brainstem is found in all vertebrates, but only mammals have a neocortex. The animal needs to know that when they reach “home” (nest, burrow, territory) they are “safe”, and the correspondingly correct physiological changes can be made simultaneously with the animal’s arrival. Putting concept storage in the brainstem, therefore, makes sense.

More complex concepts, such as a technical term like “stomach” are actually made up of a lot of simpler concepts, like “inside the body”, “food”, “hold”, and are strongly associated with visual images that help the person  learn the word. The visual images are stored in the visual cortex of the occipital lobe. Most concepts have visual images associated with them, at least in humans. In a rat, for instance, most concepts are more strongly associated with smells, and the cells representing the smell of their own bodies (therefore, the smell of their own nest) are found in the olfactory cortex.

Obviously, all concepts have more than just visual images associated with them. In fact, all concepts will include associations with sounds, touch, smell, taste, pain, position sense, pressure, and all sensory input that is possible in the body, and are stored in their dedicated areas in the brain. Some of these areas include conscious brain locations in the neocortex (sound, smell, sight), but will also include unconscious brain locations in the mesencephalon and medulla (taste, sound, smell, proprioception, pressure, touch, pain, temperature).
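
One way to picture a concept whose meaning is distributed across modalities, as just described, is a record holding per-modality associations plus an emotional weight. This is a hypothetical sketch; the field names, the modalities chosen, and the numeric weight are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "concept" whose meaning is the sum of associations
# stored across different sensory modalities, plus an emotional weight.
@dataclass
class Concept:
    name: str
    associations: dict[str, list[str]] = field(default_factory=dict)  # modality -> stored cues
    emotional_weight: float = 0.0  # illustrative scalar; sign and scale are assumptions

stomach = Concept(
    name="stomach",
    associations={
        "visual": ["diagram from a textbook", "shape of the organ"],
        "somatic": ["fullness", "nausea", "hunger pang"],
        "auditory": ["the spoken word 'stomach'", "growling"],
    },
    emotional_weight=-0.2,  # e.g. mildly negative for someone with chronic stomach trouble
)

print(stomach.name, "links", sum(len(v) for v in stomach.associations.values()), "stored cues")
```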

[Insert Flow chart 2]

The auditory cortex has four association centers sitting next to it. Signals from the auditory cortex go into all four:

  1. Wernicke’s Area
  2. Broca’s Area
  3. An “A/V center” where both auditory and visual signals are sent, and where thinking about the words heard takes place in terms of some visual aspect.
  4. A center that links the auditory cortex with the brainstem and basic physiological functions.

The first two, the most prominent, are considered the secondary auditory cortex, where speech and language are processed. The last link is in addition to the obvious link between words and concepts; it addresses physiological responses to any kind of sound, whether it originates from a person, another animal, or an object (e.g., a dog barking, rain on the roof, an explosion, a mosquito dive-bombing the ears). A minimal sketch of this fan-out appears below.
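
The sketch below is purely illustrative (the handler names and return strings are invented): it simply sends one recognized sound to each of the four association centers listed above, to show the fan-out pattern rather than any real cortical wiring.

```python
# Minimal fan-out sketch: once the auditory cortex has a candidate word or sound,
# copies of the signal go to each association center named in the list above.

def wernicke(word: str) -> str:
    return f"comprehend the meaning of '{word}'"

def broca(word: str) -> str:
    return f"prepare articulation of '{word}'"

def audio_visual_center(word: str) -> str:
    return f"recall images linked to '{word}'"

def brainstem_link(word: str) -> str:
    return f"adjust physiology in response to '{word}'"

ASSOCIATION_CENTERS = [wernicke, broca, audio_visual_center, brainstem_link]

def fan_out(word: str) -> list[str]:
    return [center(word) for center in ASSOCIATION_CENTERS]

for response in fan_out("thunder"):
    print(response)
```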

In all cases, these association centers link words and sounds with events from past experience. Strictly speaking, a person cannot respond to a complete sound never heard before, yet people develop responses anyway, because every sound contains some elements never heard before alongside many that have been. As outlined above, the routes a sound takes through the brain to reach the auditory cortex help the brain pick out every element or quality of the sound (e.g., a particular frequency) and develop associations with each individual element, all at the unconscious level.

If the idea of concepts being stored in the brainstem makes sense, then it also makes sense that cells representing certain simple aspects of memory are stored there. Other places in the brain are associated with sound as well, and concept cells may sit close to “memory” cells. Concept cells may represent simple ideas, objects, relatives, friends, acquaintances, or places. These concept cells are strongly associated with, and send axons to, the language and speech centers of the auditory cortex, since humans cannot function as social animals without language.

Hearing and the Emotions

As an example, since some people have problems with gas, others with vomiting, etc., the word “stomach” will have additional concepts associated with it. In this way, any word will have different meanings or associations in different people. [For more on how associations are involved with thinking, see the section “How Does the Unconscious Brain Think?” in MRT 1.0: Using MRT (Muscle Reflex Testing), and Surgical Approach to Sleep Apnea.]

This example shows how every word also has an emotional association. Again, since emotions have profound effects on basic physiology, it only stands to reason that they are represented in the brainstem as well. Centers for emotion will have input to the very cells representing concepts (Emotional Representation in the Brain).

It is difficult to think of a word which has no associated emotion. Even if an object by itself appears not to trigger an emotion, as a person thinks consciously about it (stream of consciousness), many images will come to mind that have nothing to do with the qualities of that object, e.g., a person, place, event, or another object, each of which has emotions connected to it. [Using Muscle Reflex Testing (Applied Kinesiology) can help the user check the significance of the associations with each thought.]

The connection can often be surprising, but it is of significance only to the user, because of something that happened in the past. The fact that this object is associated with something else that has an associated emotion will endow this object with that emotion.  The strength of that emotion may be very weak when compared to that linked to a person, place or thing, but it is still there and has an influence on the meaning of that object to the brain.

Finally, there is strong evidence for the association of emotion with all language and speech. A person who has just been traumatized often loses the ability to speak for a while, either because he does not want to speak or because he clearly wants to but cannot get the words out (e.g., Types of PTSD, the section on Language and Speech).

Implications of The Reported Research

Now back to this report on how musicians remember sound. From the symptoms (an inability to filter out extraneous noise, to concentrate on specific words, or to remember what the conversation is about), one can suspect that something associated with speech recognition, rather than sound recognition, has been damaged. That the symptoms appear with age does not confirm the supposition that age causes them, since, clearly, older musicians often do not have the problem. Therefore, one can remove age from the discussion–or can one?

As people get older, they accumulate a lot of experience from the stumbles and buffeting caused by interactions with other people. Traumatic emotional damage therefore accumulates over time, since a person, at least unconsciously, remembers every problem experienced from birth, with each “insult” or “assault” (“everything ever experienced in life is stored in the brain”; see Physiological Responses to Terror, and the discussion of memories of memories in Special Case of Type I PTSD–Rejected Children, about the effects of emotionally traumatic events).

Each traumatic event can possibly damage neurons (given the high-voltage activity of the neurons involved with social interactions and emotions) without the person ever becoming conscious of the damage. Unless healed, this damage will accumulate throughout life. Since human socialization depends so greatly on speech, it stands to reason that the pathways involved with speech and understanding language will be affected by this accumulation of damage.

For these reasons, one cannot assume that listening to music will repair the mechanical apparatus or the neurons involved in sound transmission, sound evaluation, or the understanding of language and speech, despite the evidence presented on musicians. Understanding language, as presented above, involves the emotional parts of the brain. Musicians use far more of their emotional brain when playing an instrument than the part involved with the analysis of sound. Indeed, voices may carry information, such as intent, that is related neither to the words nor to the emotional content of those words.

The extensive use of the emotional brain by a professional musician was not tested separately from sound-analysis capability in the research reported here. If a musician cannot translate the playing of an instrument into the emotional expression audiences demand, the musician will not last long as a performer, either for the audience or for him/herself. For this reason, elderly professional musicians are already pre-selected to have accumulated fewer traumatic experiences than non-musicians or ex-musicians, something this research did not consider.

Most people know that success in the music world does not depend upon having excellent hearing or the ability to translate music into emotional expression, but having these abilities certainly helps. There are many extremely successful pop songs out there that show these deficits very well. In some cases they are successful because the musician can produce a catchy beat, or because the musician’s background appeals to the listener, or because the lyrics are appealing and can be heard more clearly thanks to a production crew with good hearing.

However, hearing loss associated with emotional trauma may be preventable. If a person repairs the brain as emotional traumas are experienced (e.g., with psychotherapy or mind-body medicine methods), including the traumas not consciously known or remembered from the first years of life, reduces exposure to damaging toxins, and, of course, avoids exposure to very loud noise (music or otherwise), then that person might be able to prevent the kind of hearing loss described in this news report.

Mind-body medicine techniques can truly help a person explore all of the auditory pathways and all of the centers involved in the transmission, reception, processing, and filtering of sound. Once the repairs are done, it is amazing how much one’s social life may improve. Preserve the cochlea!

References

Parbery-Clark, A., Strait, D. L., Anderson, S., Hittner, E. & Kraus, N. (2011). Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise. PLoS ONE, 6(5), e18082. doi:10.1371/journal.pone.0018082. [Open Access].

de Villers-Sidani, E., Alzghoul, L., Zhou, X., Simpson, K. L., Lin, R. C. S., & Merzenich, M. M. (2010). Recovery of functional and structural age-related changes in the rat primary auditory cortex with operant training. Proceedings of the National Academy of Sciences, 107(31), 13900-13905. [Freely Available].

Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A.-S., McNamara, J. O., & Williams, S. M. (2004). Neuroscience, 3rd Ed. Sunderland, MA: Sinauer Associates, Inc., Publishers. Chapter 12: The Auditory System, pp. 283-314. [Freely Available].

Websites

Cue Flash: Glossary of Ascending Auditory Pathways
Promenade ‘round the Cochlea
Superior Olivary Complex
BrainMind.com: great place to read about almost anything about the brain
Brain and Behavior, Auditory, Chemical & Special Senses at University of Colorado, Lecture Outlines, with links to appropriate images
Sound Transduction: animation about how the inner ear works, with narration
Brain Maps

Keep up with new posts I make by subscribing to this blog: go to the top and click on “Subscribe” in the gray WordPress Choice Bar (if you are already registered in WordPress.com and have logged in) or when you comment on this blog, click on the “notify” check boxes.

© Copyright 2011-2015 by Martha L. Hyde and https://marthalhyde.wordpress.com.

 

 
