By Dr. Robert Mario
Like the mute button on the TV remote control, our brains filter out unwanted noise so we can focus on what we’re listening to. But when it comes to our own speech, a new brain study from the University of California, Berkeley shows that instead of one mute button, we have a network of volume settings that can selectively silence and amplify the sounds we make and hear.
Activity in the auditory cortex when we speak and listen is amplified in some regions of the brain and muted in others. Neuroscientists tracked the electrical signals emitted from the brains of hospitalized epilepsy patients and discovered that neurons in one part of the patients’ hearing mechanism dimmed when they talked, while neurons in other parts lit up. These findings offer new clues about how we hear ourselves above the noise of our surroundings and monitor what we say.
“We used to think that the human auditory system is mostly suppressed during speech, but we found closely knit patches of cortex with very different sensitivities to our own speech,” said Adeen Flinker, lead author of the study.
“We found evidence of millions of neurons firing together every time you hear a sound right next to millions of neurons ignoring external sounds but firing together every time you speak,” Flinker added. “Such a mosaic of responses could play an important role in how we are able to distinguish our own speech from that of others.”
“Whether it’s learning a new language or talking to friends in a noisy bar, we need to hear what we say and change our speech dynamically according to our needs and environment,” Flinker said. He noted that people with schizophrenia have trouble distinguishing their own internal voices from the voices of others, suggesting that they may lack this selective auditory mechanism.
The auditory cortex is a region of the brain’s temporal lobe that deals with sound. In hearing, the human ear converts vibrations into electrical signals that are sent to relay stations in the brain’s auditory cortex, where they are refined and processed. Language is mostly processed in the left hemisphere of the brain. In the study, researchers examined the electrical activity in the healthy brain tissue of patients who were being treated for seizures. The patients had volunteered to take part in the experiment during lulls in their treatment, since electrodes had already been implanted over their auditory cortices to track the focal points of their seizures.
In comparing the activity of electrical signals discharged during speaking and hearing, researchers found that some regions of the auditory cortex showed less activity during speech, while others showed the same or higher levels.
“This shows that our brain has a complex sensitivity to our own speech that helps us distinguish between our vocalizations and those of others, and makes sure that what we say is actually what we meant to say,” Flinker said.
Dr. Robert Mario, PhD, BC-HIS, is the director of Mario Hearing and Tinnitus Clinics, with locations in West Roxbury, Cambridge, Mansfield and Melrose. He can be reached at 781-979-0800 or at www.mariohearingclinics.com. Archives of articles from previous issues can be read at www.fiftyplusadvocate.com.