Science keeps finding new ways to improve our lives, and it can now do the same for people with severe speech disabilities, thanks to the development of a brain implant that translates neural activity into words almost instantly.
Specifically, the system, called a brain-computer interface (BCI), uses artificial intelligence (AI) to decode the participant’s electrical brain activity as they attempt to speak, as one man witnessed for himself in a recent test, according to a report published in Nature on June 11.
The device conveyed changes in tone as he asked questions, emphasized the words he wanted, and even allowed him to hum a string of notes in three different pitches. The latter is especially groundbreaking as it makes the device the first to reproduce natural speech features like tone, pitch, and emphasis.
How the thought-to-speech brain implant works
A synthetic voice imitating the participant’s own spoke for him within 10 milliseconds of the neural activity that signaled his intention to talk, a significant improvement over previous models, which took three seconds to stream speech or even waited until the user had finished miming an entire sentence.
The participant in the study was a 45-year-old man who lost his ability to speak clearly after developing amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease. ALS damages the nerves that control muscle movements, including those used for speech, and is the disease behind the famous ‘Ice Bucket Challenge’ of 2014.
The man could still make sounds and mouth words, but his speech was slow and unclear. Five years after his symptoms began, he underwent surgery to implant 256 silicon electrodes, each 1.5 millimeters long, into a brain region that controls movement.
The scientists then trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds and to decode, in real time, the sounds he attempted to produce rather than his intended words or their constituent phonemes. As study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, explained:
“We don’t always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary. (…) In order to do that, we have adopted this approach, which is completely unrestricted.”
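To make that streaming, sound-level approach concrete, here is a minimal sketch in Python (PyTorch) of what a causal, frame-by-frame decoding loop could look like. The GRU architecture, the feature dimensions, and the random placeholder inputs are illustrative assumptions for the sake of the example; the study’s actual model and signal processing are not reproduced here.

```python
# Minimal sketch of a low-latency, streaming neural-to-sound decoder.
# Architecture and dimensions are assumptions, not the study's published model.
import torch
import torch.nn as nn

N_ELECTRODES = 256      # electrodes implanted in the motor region (per the article)
WINDOW_MS = 10          # the decoder consumes one new neural window every 10 ms
N_ACOUSTIC_FEATS = 80   # assumed size of each decoded acoustic-feature frame


class StreamingSoundDecoder(nn.Module):
    """Maps each 10 ms window of neural features to acoustic features,
    carrying a recurrent hidden state so decoding stays causal (streaming)."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_ELECTRODES, 256, batch_first=True)
        self.head = nn.Linear(256, N_ACOUSTIC_FEATS)

    def forward(self, window, hidden=None):
        # window: (batch=1, seq=1, N_ELECTRODES) -- one 10 ms neural frame
        out, hidden = self.rnn(window, hidden)
        return self.head(out), hidden


decoder = StreamingSoundDecoder().eval()
hidden = None
with torch.no_grad():
    for _ in range(100):  # stand-in for one second of a live neural stream
        frame = torch.randn(1, 1, N_ELECTRODES)  # placeholder for real features
        acoustic, hidden = decoder(frame, hidden)
        # In the real system, each acoustic frame would drive a voice
        # synthesizer immediately, keeping end-to-end latency near 10 ms.
```

Because the loop emits sound features rather than words or phonemes, nothing restricts the output to a fixed vocabulary, which is what allows interjections, hums, and other non-word vocalizations like those described above.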
Commenting on the breakthrough, Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands who wasn’t involved in the study, highlighted just how important this advancement is:
“This is the holy grail in speech BCIs. (…) This is now real, spontaneous, continuous speech.”
Meanwhile, NU-9, an experimental drug developed to treat ALS, has shown promise for Alzheimer’s disease by targeting a common underlying mechanism at work in both neurodegenerative diseases, giving scientists hope that they could use it to address the cause instead of just treating the symptoms.