How do we hear one voice among many?

August 29, 2022

Each year, students in Ross Maddox's class put on electroencephalography (EEG) caps to get a glimpse of their own brain activity.

CAREER award winner Ross Maddox looks for clues in our brainstem

The ability of humans to listen and converse in noisy places like bustling city streets or crowded bars is remarkable but also mysterious. It is known, for example, that when sound waves are converted in the inner ear into electrical signals, those signals are conveyed and processed along the auditory brainstem, a pathway that leads to the brain’s cortex, where auditory perception occurs.

But scientists are still trying to understand how the signal processing along this intermediate “beautiful, but complicated network of connections” helps us focus our listening, says Ross Maddox, a University of Rochester biomedical engineer and neuroscientist.

For example, what is the purpose of downward connections that extend in the other direction, from the cortex back along the auditory brainstem? Could they play a role in helping us concentrate on one voice among many?

With support from a National Science Foundation Faculty Early Career Development (CAREER) award, Maddox will try to answer these and other questions. He will use new “encoding model” methods developed in his lab to measure auditory brainstem responses in human subjects engaged in goal-oriented, lifelike tasks involving natural speech, rather than the rapid bursts of clicks and other stimuli used in traditional testing.
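In broad strokes, an encoding model of this kind treats the recorded EEG as the stimulus passed through an unknown impulse response, which can then be estimated from continuous speech rather than isolated clicks. The sketch below illustrates the idea on synthetic data; the half-wave-rectified stimulus regressor, the sampling rate, and all signals are illustrative assumptions, not Maddox’s published pipeline.

```python
import numpy as np

fs = 10_000  # assumed sample rate (Hz); brainstem responses need fast sampling
rng = np.random.default_rng(0)

# Synthetic stand-ins: in a real experiment `speech` would be the storyteller's
# audio and `eeg` the scalp recording, resampled to a common rate.
speech = rng.standard_normal(fs * 60)            # 60 s of "audio"
regressor = np.maximum(speech, 0)                # half-wave rectified stimulus
true_kernel = np.exp(-np.arange(100) / 20.0)     # toy brainstem-like response
eeg = np.convolve(regressor, true_kernel)[: len(speech)]
eeg += rng.standard_normal(len(eeg))             # measurement noise

# Encoding model: estimate the impulse response (kernel) that maps the
# stimulus regressor to the EEG, via regularized frequency-domain deconvolution.
R = np.fft.rfft(regressor)
E = np.fft.rfft(eeg)
kernel_est = np.fft.irfft(E * np.conj(R) / (np.abs(R) ** 2 + 1e-6))

print(kernel_est[:10])  # early lags approximate the estimated response
```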

Researchers have long assumed that the downward connections of the auditory brainstem allow the cortex to better monitor and direct the listening process, especially in crowded settings.

“The idea is that somehow, when the cortex knows what you’re trying to listen to, the downward connections help modify how the upward connections are passing along the signals and extracting information from them,” Maddox says.

“So, if you’ve got two people talking—one person with a low voice and one person with a high voice—the cortex might be telling the brainstem ‘hey, really focus on encoding the higher voice.’”

But traditional research using clicks and other stimuli has produced decidedly mixed results, Maddox says. “Many studies have failed to find any evidence” of a downward connection directing the listening process. “Other studies have, but the evidence is usually very small, and often there are caveats to the interpretation of results.”

Maddox hopes his approach—simultaneously playing recordings of two storytellers for his subjects and asking them to pay attention to one rather than the other—will yield more decisive results.

Using electroencephalography (EEG), “we will measure the brainstem responses to each of those storytellers and see if the responses are bigger or smaller for the one the listeners are paying attention to, as opposed to the unattended one,” Maddox says. “That would be evidence that these downward connections are doing something to the way the sound is encoded.

“That’s the advantage of testing brainstem response to speech, rather than clicks,” he adds. “You can have people doing tasks where those downward connections might be really important.”
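To make the attended-versus-unattended comparison concrete: one could fit a separate response kernel for each talker and compare their amplitudes in an early-latency window where brainstem components appear. Here is a minimal sketch reusing the deconvolution idea above; the rectified-speech regressors for the two talkers, the sample rate, and the 5–10 ms window are hypothetical choices for illustration only.

```python
import numpy as np

def deconvolve(regressor, eeg):
    """Estimate the impulse response mapping a stimulus regressor to EEG."""
    R, E = np.fft.rfft(regressor), np.fft.rfft(eeg)
    return np.fft.irfft(E * np.conj(R) / (np.abs(R) ** 2 + 1e-6))

fs = 10_000                                              # assumed sample rate (Hz)
rng = np.random.default_rng(1)
talker_a = np.maximum(rng.standard_normal(fs * 60), 0)   # rectified speech, talker A
talker_b = np.maximum(rng.standard_normal(fs * 60), 0)   # rectified speech, talker B
eeg = rng.standard_normal(fs * 60)                       # stand-in for recorded EEG

# Fit a kernel per talker; an attention effect would show up as a larger
# response to the attended talker in an early (brainstem) latency window.
window = slice(int(0.005 * fs), int(0.010 * fs))         # roughly 5-10 ms lags
resp_a = deconvolve(talker_a, eeg)[window]
resp_b = deconvolve(talker_b, eeg)[window]
print("attended vs. unattended amplitude:",
      np.abs(resp_a).max(), np.abs(resp_b).max())
```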

Maddox will also investigate whether being able to see the person who is talking, which is known to improve speech understanding under noisy conditions, alters the responses of the auditory brainstem.

“Understanding these top-down signals might help us understand why some people, even with normal hearing, struggle to understand what is being said to them when there is a lot of background noise,” Maddox says.

A third goal is to collect a massive dataset of EEG recordings taken over several weeks. This would enable Maddox to train deep neural networks as a more powerful, next-generation tool for mathematically describing the relationship between sounds and brain signals. “This approach will give researchers more powerful tools to analyze aspects of auditory processing and attention,” Maddox says.
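Whereas the deconvolution approach above is linear, a deep network can learn nonlinear mappings from sound to brain signal. The sketch below shows one way such a model could be set up; the architecture, loss, and training data are all assumptions for illustration, not the network Maddox’s lab plans to use.

```python
import torch
import torch.nn as nn

# A small 1-D convolutional network that predicts the EEG signal from the
# audio waveform: one hypothetical form of a nonlinear encoding model.
class AudioToEEG(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, padding=32),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=64, padding=32),
        )

    def forward(self, audio):                 # audio: (batch, 1, time)
        return self.net(audio)[..., : audio.shape[-1]]

model = AudioToEEG()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for paired audio/EEG segments; real training would draw on the
# weeks of recordings described above.
audio = torch.randn(8, 1, 10_000)
eeg = torch.randn(8, 1, 10_000)

pred = model(audio)                           # predicted EEG from sound
loss = loss_fn(pred, eeg)
loss.backward()
opt.step()
```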

All code and data will be publicly shared to “ease the path for others who wish to use them,” he adds. “The large dataset we envision is unlike any other existing public database and will spawn advances by other labs.”

Broadening the impact

Each year, Maddox teaches a course called Human Neurophysiological Measurement that exposes students to a broad range of non-invasive techniques used to study brain function. For example, the students get a unique opportunity to peer into their teacher’s brain—literally.

Maddox puts on scrubs and climbs into an MRI machine at the University of Rochester Center for Advanced Brain Imaging and Neurophysiology (UR CABIN). The students then use the resulting images of Maddox’s brain for one of their lab exercises.

In another exercise, students put on EEG caps to get a glimpse of their own brain activity.

As part of the broader impact required of NSF CAREER projects, Maddox will have the students in his class develop their own demonstrations about ways EEG can be used to measure brain function.

The demonstrations will be incorporated into a short course Maddox will develop to teach students from Rochester’s East High School and from City College of New York as part of the Medical Center’s NEUROEAST and NEUROCITY programs. The programs encourage interest among students traditionally underrepresented in STEM fields.

“There’s just something really exciting about putting electrodes on your head and then watching the signals go up and down,” Maddox says.