Credit: Unsplash/CC0 Public Domain
When a group of friends meets at a bar or gathers for an intimate dinner, conversations can quickly multiply and overlap, with different groups and pairs talking over one another.
Navigating this noisy mix of words, and focusing on the voices that matter, is especially difficult for people with a form of hearing loss. Lively conversations can become a muddled mess of chatter, even for someone with hearing aids, which often struggle to filter out background noise. It is known as the "cocktail party problem," and researchers at Boston University think they may have a solution.
A new brain-inspired algorithm developed at BU could help hearing aids tune out interference and isolate single talkers in a crowd. In tests, the researchers found it could improve word recognition accuracy by 40 percentage points relative to current hearing aid algorithms.
"We were extremely surprised and excited by the magnitude of the improvement in performance; it's pretty rare to find such big improvements," says Kamal Sen, the algorithm's developer and a BU College of Engineering associate professor of biomedical engineering.
The results are published in Communications Engineering.
Some estimates put the number of Americans with hearing loss at almost 50 million; by 2050, around 2.5 billion people worldwide are expected to have some form of hearing loss, according to the World Health Organization.
"The number one complaint of people with hearing loss is that they have a hard time communicating in noisy environments," says Virginia Best, a speech, language and hearing sciences researcher at BU Sargent College of Health & Rehabilitation Sciences. "These environments are very common in daily life, and they tend to be really important to people: think of dinner table conversations, social gatherings, workplace meetings. So, solutions that can improve communication in noisy places have the potential for enormous impact."
Best co-authored the study with Sen and BU biomedical engineering Ph.D. candidate Alexander D. Boyd. As part of the research, they also tested how well current hearing aid algorithms cope with the cocktail party cacophony. Many hearing aids already include noise-reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front.
"We decided to compare against the industry-standard algorithm currently used in hearing aids," says Sen, who found that the existing algorithm "does not improve performance at all; if anything, it makes things slightly worse. We now have data showing what people with hearing aids have been reporting anecdotally."
Sen has patented the new algorithm, known as BOSSA (biologically oriented sound segregation algorithm), and hopes to connect with companies interested in licensing the technology. He says that with Apple jumping into the hearing aid market (its latest AirPods Pro 2 earbuds are advertised as having a clinical-grade hearing aid feature), the BU team's breakthrough is timely: "If hearing aid companies don't start innovating fast, they are going to get wiped out, because Apple and other startups are entering the market."
Successfully separating sounds
Over the past 20 years, Sen has studied how the brain encodes and decodes sounds, looking for the circuits involved in managing the cocktail party effect. With researchers in his Natural Sounds & Neural Coding Laboratory, he has traced how sound waves are processed at different stages of the auditory pathway, following their journey from the ear to translation in the brain. One key mechanism: inhibitory neurons, brain cells that help suppress certain unwanted sounds.
"You can think of it as a form of internal noise cancellation," he says. "If there is a sound at a particular location, these inhibitory neurons get activated."
According to Sen, different neurons are tuned to different locations and frequencies.
The brain's approach is the inspiration for the new algorithm, which uses spatial cues, such as a sound's volume and timing, to tune in or tune out, sharpening or muffling a talker's words as needed.
"It's basically a computational model that mimics what the brain does," says Sen, who is affiliated with BU's Neurophotonics Center and Center for Systems Neuroscience, "and actually segregates sound sources based on sound inputs."
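The article does not spell out how BOSSA is implemented, but the general idea of segregating sound sources by spatial cues can be sketched in a few lines of code. The Python example below is a hypothetical, simplified illustration, not the published algorithm: it uses interaural level and phase differences as rough stand-ins for the "volume and timing" cues mentioned above, and keeps only the time-frequency bins whose cues match a target talker's direction. All function and parameter names here are invented for illustration.

# Toy illustration of spatial-cue-based source segregation (NOT the BOSSA algorithm).
# It masks time-frequency bins of a two-microphone (binaural) signal whose
# interaural level and phase differences deviate from a target direction.
import numpy as np
from scipy.signal import stft, istft

def segregate_by_spatial_cues(left, right, fs, target_ild_db=0.0, target_ipd=0.0,
                              ild_tol_db=3.0, ipd_tol=0.5):
    """Return a mono estimate of the talker whose interaural cues match the target.

    left, right        : 1-D arrays, e.g. signals from two hearing aid microphones
    target_ild_db      : expected interaural level difference (dB) for the desired talker
    target_ipd         : expected interaural phase difference (rad); 0, 0 ~ straight ahead
    ild_tol_db, ipd_tol: how far a bin's cues may deviate and still be kept
    """
    f, t, L = stft(left, fs=fs, nperseg=512)
    _, _, R = stft(right, fs=fs, nperseg=512)

    eps = 1e-12
    # Interaural level difference (dB) and phase difference (rad) per time-frequency bin.
    ild = 20.0 * np.log10(np.abs(L) + eps) - 20.0 * np.log10(np.abs(R) + eps)
    ipd = np.angle(L * np.conj(R))

    # Binary mask: keep bins whose spatial cues are close to the target's.
    phase_err = np.angle(np.exp(1j * (ipd - target_ipd)))  # wrapped to [-pi, pi]
    mask = (np.abs(ild - target_ild_db) < ild_tol_db) & (np.abs(phase_err) < ipd_tol)

    _, enhanced = istft(0.5 * (L + R) * mask, fs=fs, nperseg=512)
    return enhanced

A real system, and presumably BOSSA itself, would work with richer, frequency-dependent cues and a model of tuned inhibitory neurons rather than a hard binary mask, but the sketch captures the core intuition: sound arriving from the attended location passes through, and everything else is suppressed.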
A physicist who later trained in neuroscience, Sen says he came to BU in part because of the opportunity to work with the university's Hearing Research Center, where he is now a faculty member. He turned to its clinical researchers for help testing the algorithm.
"Ultimately, the only way to know whether a benefit will translate to the listener is via behavioral studies," says Best, an expert in spatial perception and hearing loss, "and that requires scientists and clinicians who understand the target population."
Formerly a researcher at Australia's National Acoustic Laboratories, Best helped design a study using a group of young adults with sensorineural hearing loss, which is typically caused by genetic factors or childhood illnesses. In a lab, participants wore headphones that simulated people talking from different nearby locations. Their ability to pick out selected speakers was tested using the new algorithm, the current standard algorithm, and no algorithm. Boyd helped collect much of the data and was the lead author of the paper.
Applying the science beyond hearing loss: ADHD and autism
In reporting their findings, the researchers wrote that "the biologically inspired algorithm led to robust intelligibility gains under conditions in which a standard beamforming approach failed. The results provide compelling support for the potential benefits of biologically inspired algorithms for assisting individuals with hearing loss in 'cocktail party' situations."
They are now in the early stages of testing an upgraded version that incorporates eye-tracking technology to allow users to better direct their listening attention.
The science powering the algorithm could also have implications beyond hearing loss.
"The (neural) circuits we are studying are much more general purpose and much more fundamental," says Sen. "It ultimately has to do with attention, and where you want to focus; that's what the circuit was really built for."
More information:
Alexander D. Boyd et al, A brain-inspired algorithm improves "cocktail party" listening for individuals with hearing loss, Communications Engineering (2025). DOI: 10.1038/s44172-025-00414-5
Provided by Boston University