With many hearing aid users struggling with background noise in crowded places, scientists have turned to advances in artificial intelligence (AI) for a solution, developing a brain-inspired algorithm that cleans up noisy conversations.
Indeed, researchers at Boston University have developed an algorithm to filter out background noise and ease the so-called ‘cocktail party’ listening problem for individuals with hearing loss, according to a Medical Xpress report published on April 28.
Specifically, they benchmarked against the industry’s current standard: noise-reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front, an approach they believe does little to improve performance and may even make it worse.
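For readers unfamiliar with the term, a beamformer in its simplest form delays and sums the signals from several microphones so that sound from one direction adds up coherently while off-axis noise partially cancels. The sketch below is a generic, illustrative delay-and-sum example; the microphone layout, steering delays, and sample rate are assumptions, not details from the report:

```python
import numpy as np

def delay_and_sum(mic_signals, delays_s, sr=16000):
    """Minimal delay-and-sum beamformer sketch (illustrative, not a
    hearing aid implementation): align each microphone's signal by its
    steering delay toward the look direction, then average, so sound
    from that direction adds coherently.

    mic_signals: array of shape (n_mics, n_samples).
    delays_s: per-mic steering delays in seconds.
    """
    n_mics, n_samples = mic_signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(mic_signals, delays_s):
        shift = int(round(d * sr))
        out += np.roll(sig, -shift)  # integer-sample alignment for simplicity
    return out / n_mics
```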
With this in mind, the team, headed by Kamal Sen, the algorithm’s developer and an associate professor of biomedical engineering at the BU College of Engineering, patented the new algorithm, called BOSSA (biologically oriented sound segregation algorithm).
Collaborating with researchers in his Natural Sounds and Neural Coding Laboratory, Sen has mapped how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to interpretation by the brain, and describes the process in a new paper.
Brain-inspired algorithm’s noise-canceling system
Along the way, they identified inhibitory neurons – brain cells that help suppress particular, unwanted sounds – as the key mechanism. As it happens, different neurons are tuned to different locations and frequencies. In Sen’s words:
“You can think of it as a form of internal noise cancellation. (…) If there’s a sound at a particular location, these inhibitory neurons get activated.”
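To make that mechanism concrete, here is a minimal, hypothetical sketch of cross-channel inhibition among location-tuned channels; the channel layout, the inhibition_strength parameter, and the example energies are illustrative assumptions, not the patented BOSSA code:

```python
import numpy as np

def cross_inhibition(channel_energies, inhibition_strength=0.5):
    """Each hypothetical location-tuned channel is suppressed in
    proportion to the summed activity of the competing channels,
    mimicking inhibitory neurons that cancel off-target sounds."""
    total = channel_energies.sum()
    inhibited = channel_energies - inhibition_strength * (total - channel_energies)
    return np.clip(inhibited, 0.0, None)  # firing rates cannot go negative

# A strong source at the attended location (index 0) survives while
# weaker off-target channels are driven to zero.
print(cross_inhibition(np.array([1.0, 0.3, 0.2])))  # -> [0.75 0.   0.  ]
```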
Hence, Sen’s team used the brain’s approach as the inspiration for BOSSA, which relies on spatial cues such as the volume and timing of a sound to tune into or out of it, sharpening or muffling a speaker’s words as needed. As Sen added:
“It’s basically a computational model that mimics what the brain does (…) and actually segregates sound sources based on sound input.”
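In signal-processing terms, one simple way to picture cue-based segregation, purely as a sketch under assumed thresholds rather than Sen’s patented method, is a time-frequency mask driven by interaural time and level differences: bins whose cues are consistent with the attended direction are kept, and the rest are muted.

```python
import numpy as np

def frontal_mask(left_stft, right_stft, freqs, max_itd_s=1e-4, max_ild_db=3.0):
    """Keep time-frequency bins whose interaural cues point straight ahead.

    left_stft, right_stft: complex STFTs of the two ear signals,
    shape (n_freqs, n_frames). freqs: bin center frequencies in Hz,
    shape (n_freqs,). The thresholds are illustrative, not from the paper.
    """
    ratio = left_stft / (right_stft + 1e-12)
    # Interaural level difference: magnitude ratio in dB.
    ild_db = 20 * np.log10(np.abs(ratio) + 1e-12)
    # Interaural time difference: interaural phase divided by frequency.
    itd_s = np.angle(ratio) / (2 * np.pi * np.maximum(freqs[:, None], 1.0))
    # A frontal source arrives at both ears at the same time and level,
    # so keep bins with near-zero ITD and ILD and mute the rest.
    mask = (np.abs(itd_s) < max_itd_s) & (np.abs(ild_db) < max_ild_db)
    return mask.astype(float)
```

Applying such a mask to the mixture’s STFT and inverting it back to a waveform would pass a frontal talker and muffle sources off to the sides; the actual model described in the paper is richer, combining spatial cues of this kind with the inhibition stage sketched above.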
Upon testing, the researchers reported that the brain-inspired algorithm “led to robust intelligibility gains under conditions in which a standard beamforming approach failed,” adding that the results demonstrate the algorithm’s potential to help individuals with hearing loss in group settings.