Casey Harrell tests the BCI system for the first time. Credit: UC Regents
A new brain-computer interface (BCI) developed at UC Davis Health translates brain signals into speech with up to 97% accuracy, making it the most accurate system of its kind.
Researchers implanted sensors into the brain of a man with severe speech impairment due to amyotrophic lateral sclerosis (ALS). Within minutes of activating the system, the man was able to communicate what he intended to say.
A study on this work was published in the New England Journal of Medicine.
ALS, also known as Lou Gehrig’s disease, affects nerve cells that control movement throughout the body. The disease causes a progressive loss of the ability to stand, walk, and use one’s hands. It can also cause a loss of control of the muscles used for speaking, leading to the loss of intelligible speech.
This new technology is being developed to restore communication in people who cannot speak due to paralysis or neurological disorders such as ALS. It can interpret brain signals when the user tries to speak and turn them into text that is “spoken” out loud by the computer.
“Our BCI technology helped a paralyzed man communicate with friends, family and caregivers,” said UC Davis neurosurgeon David Brandman. “Our paper demonstrates the most accurate speech neuroprosthesis ever described.”
Brandman is the co-principal investigator and co-senior author of this study. He is an assistant professor in the Department of Neurological Surgery at UC Davis and co-director of the UC Davis Neuroprosthetics Laboratory.
New BCI breaks the communication barrier
When a person tries to speak, the new BCI device translates their brain activity into text on a computer screen. The computer can then read the text aloud.
Casey Harrell with his personal assistant Emma Alaimo and UC Davis neuroscientist Sergey Stavisky. Credit: UC Regents
To develop the system, the team enrolled Casey Harrell, a 45-year-old man with ALS, in the BrainGate clinical trial. At the time of his enrollment, Harrell had weakness in his arms and legs (tetraparesis). His speech was very difficult to understand (dysarthria), and he needed help from others to interpret his speech.
In July 2023, Brandman implanted the experimental BCI device. He placed four microelectrode arrays in the left precentral gyrus, a region of the brain responsible for coordinating speech. The arrays are designed to record brain activity from 256 cortical electrodes.
“We’re actually detecting their attempts to move their muscles and speak,” says neuroscientist Sergey Stavisky. Stavisky is an assistant professor in the Department of Neurological Surgery. He is co-director of the UC Davis Neuroprosthetics Lab and co-principal investigator on the study.
“We record the part of the brain that’s trying to send these commands to the muscles. We listen to that and translate those patterns of brain activity into a phoneme, like a syllable or a unit of speech, and then into the words that the muscles are trying to say.”
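The pipeline Stavisky describes, neural activity decoded into phonemes and phonemes assembled into words, can be illustrated with a deliberately tiny sketch. Everything here (the four-phoneme inventory, the template "classifier," the one-word lexicon) is hypothetical; the actual system uses recurrent neural networks and a language model over a vast vocabulary.

```python
# Hypothetical sketch of the decoding pipeline described above:
# neural features -> phoneme probabilities -> word assembly.
# The templates and lexicon are toy stand-ins, not the study's model.

PHONEMES = ["HH", "AH", "L", "OW"]  # tiny inventory for the sketch

def decode_phoneme(feature_vector):
    """Pick the phoneme whose template best matches the neural features.
    A real system would use a trained neural network instead."""
    templates = {
        "HH": [1.0, 0.0, 0.0, 0.0],
        "AH": [0.0, 1.0, 0.0, 0.0],
        "L":  [0.0, 0.0, 1.0, 0.0],
        "OW": [0.0, 0.0, 0.0, 1.0],
    }
    def score(p):
        return sum(f * t for f, t in zip(feature_vector, templates[p]))
    return max(PHONEMES, key=score)

def phonemes_to_word(phoneme_seq, lexicon):
    """Map a decoded phoneme sequence to a word in a lexicon.
    Real systems score candidates with a language model."""
    return lexicon.get(tuple(phoneme_seq), "<unknown>")

lexicon = {("HH", "AH", "L", "OW"): "hello"}
# Four time windows of (made-up) neural features, one per phoneme:
features = [[0.9, 0.1, 0.0, 0.0],
            [0.1, 0.8, 0.0, 0.0],
            [0.0, 0.0, 0.7, 0.2],
            [0.0, 0.0, 0.1, 0.9]]
phones = [decode_phoneme(f) for f in features]
print(phones)                              # ['HH', 'AH', 'L', 'OW']
print(phonemes_to_word(phones, lexicon))   # hello
```

The two-stage design matters: decoding short phonemes rather than whole words is what lets such systems scale to large vocabularies from limited training data.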
Faster training, better results
Despite recent advances in BCI technology, efforts to enable communication have been slow and error-prone. That’s because the machine-learning programs that interpreted brain signals required large amounts of training data and time to learn.
“Previous speech BCI systems often had word errors, making it difficult for the user to be understood consistently and creating a barrier to communication,” Brandman says. “Our goal was to develop a system that would allow a person to be understood whenever they wanted to speak.”
Harrell used the system in both spontaneous and guided conversation settings. In both cases, speech decoding was done in real time, with continuous updates to the system to ensure it was working properly.
The decoded words were displayed on a screen. Remarkably, they were read aloud in a voice that sounded like Harrell’s before he developed ALS. The voice was synthesized using software trained on existing audio samples of his pre-ALS voice.
Nicholas Card, a postdoctoral researcher and lead author of the study, prepares the BCI system. Credit: UC Regents
In the first training session on speech data, the system took 30 minutes to achieve 99.6% accuracy with a 50-word vocabulary.
“The first time we tried the system, he cried tears of joy when the words he was trying to say appeared correctly on the screen. We all did,” Stavisky said.
In the second session, the potential vocabulary size was increased to 125,000 words. With only an additional 1.4 hours of training data, the BCI achieved 90.2% accuracy with this significantly expanded vocabulary. After continued data collection, the BCI maintained 97.5% accuracy.
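Accuracy figures like these are typically reported as 100% minus the word error rate (WER), the word-level edit distance between what the user tried to say and what the system decoded. Below is a generic WER computation, not the paper's evaluation code; the example sentences are invented.

```python
# Generic word error rate (WER) metric, the standard basis for
# accuracy figures like "97.5%" (accuracy = 100% - WER).

def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substituted word out of seven -> WER of 1/7, accuracy ~85.7%
wer = word_error_rate("i want to talk to my family",
                      "i want to walk to my family")
print(f"WER: {wer:.3f}, accuracy: {(1 - wer) * 100:.1f}%")
```

Counting insertions and deletions, not just substitutions, is what makes WER stricter than a simple per-word match rate.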
“At this point, we can correctly decode what Casey is trying to say about 97 percent of the time, which is better than many commercially available smartphone apps that try to interpret a person’s voice,” Brandman said. “This technology is transformative because it gives hope to people who want to talk but can’t. I hope technologies like this speech BCI will help future patients talk with their family and friends.”
The study involved 84 data collection sessions over 32 weeks. In total, Harrell used the voice BCI in self-paced conversations for more than 248 hours to communicate in person and via video chat.
“Not being able to communicate is very frustrating and demoralizing. It’s like being trapped,” Harrell said. “Technology like this will help people get back to life and society.”
“It’s incredibly gratifying to see Casey regain his ability to talk with his family and friends through this technology,” said the study’s lead author, Nicholas Card. Card is a postdoctoral fellow in the Department of Neurological Surgery at UC Davis.
“Casey and the other BrainGate participants are truly extraordinary. They deserve credit for joining these early clinical trials. They are doing so not because they hope to gain personal benefit, but to help us develop a system that will restore communication and mobility to other paralyzed people,” said Leigh Hochberg, co-author and lead investigator of the BrainGate trial.
Hochberg is a neurologist and neuroscientist at Massachusetts General Hospital, Brown University and the VA Providence Healthcare System.
More information:
An Accurate and Rapidly Calibrating Speech Neuroprosthesis, New England Journal of Medicine (2024). DOI: 10.1056/NEJMoa2314132
Citation: New brain-computer interface lets man with ALS ‘talk’ again (2024, August 14)
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.