Selective hearing is a term that usually gets tossed around as a pejorative, an insult. When your mother used to accuse you of having “selective hearing,” she meant that you paid attention to the part about chocolate cake for dessert and (perhaps intentionally) ignored the bit about cleaning your room.
But it turns out that selective hearing is quite a skill: an impressive linguistic feat pulled off by teamwork between your brain and your ears.
Hearing in a crowd
This scenario probably feels familiar: you’ve had a long day at work, but your friends all insist on meeting up for dinner. They choose the noisiest restaurant (because it’s trendy and the deep-fried cauliflower is delicious). And you spend an hour and a half straining your ears, trying to follow the conversation.
But it’s tough, and it’s taxing. And it can be a sign of hearing loss.
You think, maybe the restaurant was just too noisy. But… everyone else seemed to be having a fine go of it. You seemed like the only one experiencing trouble. Which gets you thinking: what is it about the crowded room–the cacophony of voices all struggling to be heard–that throws hearing-impaired ears for a loop? Why is it that hearing well in a crowd is so quick to go? Scientists have started to uncover the answer, and it all starts with selective hearing.
How does selective hearing work?
The scientific term for what we’re loosely calling selective hearing is “hierarchical encoding,” and most of it doesn’t happen in your ears at all; the process takes place almost entirely in your brain. At least, that’s according to a study performed by a team at Columbia University.
Scientists have known for some time that human ears essentially work as a funnel: they collect all the signals and then send the raw data to your brain. That’s where the heavy lifting happens, specifically in the auditory cortex. That’s the part of your gray matter that handles all those signals, translating vibrations in the air into recognizable sounds.
Because of extensive research with CT and MRI scans, scientists have known for years that the auditory cortex plays a significant role in hearing, but they were stumped when it came to what those processes actually look like. Thanks to some novel research methods involving participants with epilepsy, scientists at Columbia were able to learn more about how the auditory cortex works when it comes to picking out voices in a crowd.
The hearing hierarchy
And here’s what these intrepid scientists discovered: two parts of the auditory cortex do most of the work in helping you key in on specific voices. They’re what enable you to sort and amplify individual voices in noisy environments.
- Heschl’s gyrus (HG): This is the part of the auditory cortex that handles the first stage of the sorting process. Researchers discovered that the Heschl’s gyrus (we’re just going to call it HG from now on) was breaking down each individual voice, separating the streams and assigning each one its own identity.
- Superior temporal gyrus (STG): The differentiated voices move from the HG to the STG, and it’s at this point that your brain starts to make some value determinations. The superior temporal gyrus figures out which voices you want to focus on and which can be safely moved to the background.
When you start to suffer from a hearing impairment, it’s harder for your brain to differentiate voices because your ears are missing specific frequencies of sound (high or low, depending on your hearing loss). Your brain isn’t provided with enough information to assign an individual identity to each voice. As a result, it all blurs together (which makes conversations hard to follow).
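The two-stage process above can be pictured as a tiny pipeline: a first stage that splits the incoming sound into labeled voice streams (the HG’s job), and a second stage that boosts the voice you care about and pushes the rest into the background (the STG’s job). Here’s a toy sketch in Python; the speaker names, loudness numbers, and gain values are purely illustrative assumptions, not anything from the Columbia study.

```python
# Toy sketch of "hierarchical encoding" as a two-stage pipeline.
# Stage 1 (HG-like): split the raw mixture into per-speaker streams.
# Stage 2 (STG-like): boost the attended speaker, attenuate the rest.

def separate_voices(mixture):
    """Stage 1: return a list of (speaker, loudness) streams.
    A dict of speaker -> loudness stands in for real signal separation."""
    return [(speaker, level) for speaker, level in mixture.items()]

def attend(streams, focus):
    """Stage 2: apply a foreground gain to the attended speaker
    and a background gain to everyone else."""
    output = {}
    for speaker, level in streams:
        gain = 2.0 if speaker == focus else 0.3  # foreground vs. background
        output[speaker] = round(level * gain, 2)
    return output

mixture = {"friend": 0.5, "waiter": 0.6, "next_table": 0.8}
print(attend(separate_voices(mixture), focus="friend"))
# → {'friend': 1.0, 'waiter': 0.18, 'next_table': 0.24}
```

Notice that the “friend” voice ends up loudest even though it started as the quietest stream, which is roughly the trick your brain performs at that noisy dinner.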
New science = New algorithm
Hearing aids already have features that make it easier to hear in noisy environments. But now that we know what the basic process looks like, hearing aid manufacturers can incorporate more of those natural operations into their device algorithms. For example, hearing aids that do more to differentiate voices can help out the Heschl’s gyrus a little bit, making it easier for you to understand what your coworkers are saying in that noisy restaurant.
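One simple way a device can hand the brain better raw material is per-band amplification: boost the frequency bands a wearer struggles with so the HG receives enough detail to tell voices apart. The sketch below uses a crude version of the audiological “half-gain rule” (boost each band by half the measured loss); the band labels and decibel numbers are illustrative assumptions, not a real fitting formula from any manufacturer.

```python
# Minimal sketch of per-band gain compensation in a hearing aid.
# Each frequency band is boosted by half the wearer's measured loss
# in that band (a rough "half-gain rule"), leaving healthy bands alone.

def compensate(band_levels, hearing_loss_db):
    """Return incoming band levels (dB) boosted by half the loss per band."""
    return {
        band: round(level + hearing_loss_db.get(band, 0) / 2, 1)
        for band, level in band_levels.items()
    }

incoming = {"low": 60.0, "mid": 55.0, "high": 50.0}  # incoming dB per band
loss = {"high": 40.0}                                # high-frequency loss
print(compensate(incoming, loss))
# → {'low': 60.0, 'mid': 55.0, 'high': 70.0}
```

The high band gets a 20 dB boost while the low and mid bands pass through untouched, which is the kind of targeted help that keeps voices from blurring together.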
The more we learn about how the brain works–especially in conjunction with the ears–the better new technology will be able to mimic what happens in nature. And that can lead to better hearing outcomes. That way, you can focus a little less on straining to hear and a little more on enjoying that deep-fried cauliflower.