Scientists develop brain-inspired algorithm to help hearing aids tune out noise
With many hearing aid users struggling to follow conversations over background noise in crowded places, scientists have turned to advances in artificial intelligence (AI) to give them a boost, developing a brain-inspired algorithm that cleans up noisy conversations.
Specifically, researchers at Boston University have developed an algorithm that filters out background noise and addresses the so-called ‘cocktail party’ listening problem for individuals with hearing loss, according to a Medical Xpress report published on April 28.
The team benchmarked its work against the industry’s current standards: noise-reduction algorithms and directional microphones, or beamformers, designed to emphasize sounds coming from the front, approaches the researchers believe do little to improve performance and can even make it worse.
With this in mind, the scientists, led by Kamal Sen, the algorithm’s developer and an associate professor of biomedical engineering at the BU College of Engineering, patented the new algorithm, called BOSSA (biologically oriented sound segregation algorithm).
Collaborating with researchers in his Natural Sounds and Neural Coding Laboratory, Sen has mapped how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to their interpretation by the brain, a process described in a new paper.
Brain-inspired algorithm’s noise-canceling system
Along the way, they identified inhibitory neurons – brain cells that suppress particular, undesirable sounds – as the key mechanism, with different neurons tuned to different locations and frequencies. In Sen’s words:
“You can think of it as a form of internal noise cancellation. (…) If there’s a sound at a particular location, these inhibitory neurons get activated.”
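To make the “internal noise cancellation” idea concrete, here is a minimal, hypothetical Python sketch of a lateral-inhibition scheme, assuming a toy model in which each spatial channel is suppressed by the summed activity of channels tuned to other locations; it illustrates the principle Sen describes, not the model from the BU paper.

```python
# Toy lateral-inhibition model (an illustrative assumption, not the BU model):
# each spatial channel's output is its own drive minus inhibition driven by
# the activity of every *other* channel, so off-target sounds cancel out.
import numpy as np

def lateral_inhibition(channel_drive, inhibition_weight=0.5):
    """channel_drive: one activation per spatial channel (e.g., per direction)."""
    total = channel_drive.sum()
    inhibition = inhibition_weight * (total - channel_drive)  # input from the other channels
    return np.maximum(channel_drive - inhibition, 0.0)        # rectify, as neurons do

# A strong talker in channel 0 survives; weaker off-target talkers are silenced.
print(lateral_inhibition(np.array([1.0, 0.4, 0.3])))  # -> [0.65, 0.0, 0.0]
```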
Hence, Sen’s team used the brain’s approach as the inspiration for BOSSA, which relies on spatial cues such as the volume and timing of a sound to tune it in or out, sharpening or muffling a speaker’s words as needed. As Sen added:
“It’s basically a computational model that mimics what the brain does (…) and actually segregates sound sources based on sound input.”
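For readers curious how spatial cues like volume and timing can separate sound sources in practice, the following Python sketch shows a conventional binaural time-frequency masking approach of the same general flavor; the frontal-target assumption and the thresholds are illustrative choices, and this is not the patented BOSSA algorithm.

```python
# Illustrative binaural masking sketch (an assumed, conventional technique,
# not BOSSA itself): keep time-frequency bins whose interaural timing and
# level cues match a target straight ahead, and muffle everything else.
import numpy as np
from scipy.signal import stft, istft

def frontal_target_filter(left, right, fs, max_ipd=0.5, max_ild_db=3.0):
    f, t, L = stft(left, fs=fs, nperseg=512)
    _, _, R = stft(right, fs=fs, nperseg=512)

    # "Timing" cue: interaural phase difference; "volume" cue: level difference.
    ipd = np.angle(L * np.conj(R))
    ild_db = 20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))

    # A frontal source reaches both ears nearly simultaneously and equally loud,
    # so bins with near-zero cues pass; off-target bins are attenuated, not zeroed.
    on_target = (np.abs(ipd) < max_ipd) & (np.abs(ild_db) < max_ild_db)
    mask = np.where(on_target, 1.0, 0.1)

    _, out = istft(0.5 * (L + R) * mask, fs=fs, nperseg=512)
    return out
```

A real hearing aid would steer the target direction dynamically and smooth the mask across time and frequency to avoid audible artifacts.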
Upon testing, the researchers reported that the brain-inspired algorithm “led to robust intelligibility gains under conditions in which a standard beamforming approach failed,” adding that the results demonstrate the algorithm’s potential to help individuals with hearing loss in group settings.