
How does the brain solve the cocktail party problem?

Did you know that there is a good chance you will be unable to hold a conversation in a noisy café as you grow old?

I am trying to understand why this happens by studying monkeys. Please refer to my paper [1].

[1] Felix Schneider*, Pradeep Dheerendra*, Fabien Balezeau, Michael Ortiz-Rios, Yukiko Kikuchi, Christopher I. Petkov, Alexander Thiele, Timothy D. Griffiths, "Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey", Scientific Reports, vol. 8, article 17948, Dec 2018.


My work is relevant to the more than half of the world's population who go on to develop age-related hearing loss. They develop difficulty understanding speech amidst background noise, for example when listening to someone speak in a noisy café or at a party. This is colloquially known as the "cocktail party problem". Even computers are unable to accomplish this feat, which any human or animal can! Yet we do not fully understand how the brain solves this problem.

To find out, we need to record from neurons, i.e. single cells, in the brain. Since systematic single-neuron recordings cannot be performed in humans, we need to use animals in this research. Monkeys are the best-suited animal models of human auditory perception because their auditory abilities and the organization of their auditory brain are similar to ours. However, before we can generalize findings from monkeys to humans, we need to establish that monkeys use similar brain regions to humans to separate overlapping sounds.

To compare brain activation between monkeys and humans, I need to employ sounds that are equally relevant to both species, which rules out human speech and monkey calls. So we created a new kind of artificial sound in which an auditory object, or "figure", made of temporally coherent tones that repeat in time overlaps with a "background" of randomly varying tones. Extracting this auditory figure requires integration across both time and frequency, much like extracting a voice at a noisy party. These artificial sounds thus simulate the challenges of real-world listening while being devoid of semantic confounds.
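To make this concrete, here is a minimal Python sketch of how such a figure-ground stimulus can be generated. All parameter values (chord duration, tone counts, frequency pool, figure timing) are illustrative assumptions, not the exact settings from the paper:

```python
import numpy as np

fs = 44100                    # sample rate (Hz)
chord_dur = 0.05              # each chord lasts 50 ms (assumed)
n_chords = 40                 # 40 chords -> 2 s stimulus
n_bg_tones = 10               # random background tones per chord (assumed)
n_fig_tones = 4               # coherent figure tones per chord (assumed)
fig_start, fig_len = 20, 10   # figure spans chords 20-29 (assumed)

rng = np.random.default_rng(0)
freq_pool = np.logspace(np.log10(180), np.log10(7000), 129)  # log-spaced tone pool

# The "figure" is a fixed set of frequencies repeated across chords;
# the "background" is redrawn at random on every chord.
fig_freqs = rng.choice(freq_pool, n_fig_tones, replace=False)

t = np.arange(int(fs * chord_dur)) / fs
chords = []
for c in range(n_chords):
    freqs = list(rng.choice(freq_pool, n_bg_tones, replace=False))
    if fig_start <= c < fig_start + fig_len:
        freqs += list(fig_freqs)         # same tones repeat -> temporal coherence
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    chords.append(chord / len(freqs))    # rough level normalisation
stimulus = np.concatenate(chords)
```

No single 50 ms chord gives the figure away: only by integrating the repeated frequencies across successive chords (time) and across the set of tones that make up the figure (frequency) can a listener pull the figure out of the background.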

Using these artificial sounds and non-invasive functional magnetic resonance imaging (fMRI), I showed that monkeys use similar brain regions to humans to separate overlapping sounds. My study is the first to show such evidence in any animal. This paves the way for recording from single cells in the monkey brain, which will enable us to understand how the brain solves this problem.
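For intuition only, here is a hypothetical, highly simplified illustration of the comparison at the heart of such an experiment: responses to sounds containing a figure are contrasted against responses to background-only sounds. A real fMRI analysis fits a general linear model per voxel; this toy version merely t-tests simulated response amplitudes and is not the analysis from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_voxels = 60, 5

# Simulated response amplitudes; voxel 0 is built to respond
# more strongly when a figure is present (assumed effect size).
figure_present = rng.normal(1.0, 0.5, (n_trials, n_voxels))
figure_present[:, 0] += 0.6
figure_absent = rng.normal(1.0, 0.5, (n_trials, n_voxels))

# Contrast the two conditions voxel by voxel.
t_vals, p_vals = stats.ttest_ind(figure_present, figure_absent, axis=0)
for v, (tv, pv) in enumerate(zip(t_vals, p_vals)):
    print(f"voxel {v}: t = {tv:+.2f}, p = {pv:.3f}")
```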

This is crucial for understanding the factors that affect speech perception amidst noise, and why some people develop problems holding conversations in noisy environments while others do not. It might also help us design better hearing aids in the future!
