August 23, 2023 | 5:32 PM
A woman left unable to speak for years after a paralyzing stroke has regained her voice thanks to artificial intelligence.
The breakthrough procedure uses a 253-electrode array implanted in the brain of Ms. Johnson, 48, and connected to a bank of computers through a small port attached to her head.
Electrodes covering the areas of her brain where speech is processed intercept her brain signals and send them to a computer, which animates a brown-haired avatar representing Johnson.
The on-screen avatar, of Johnson’s own choosing, can “speak” what she thinks, using a voice recreated from a 15-minute recording of a toast she gave at a wedding several years ago.
The avatar also blinks, smiles, purses its lips and raises its eyebrows to appear more lifelike.
“We’re just trying to get back to being human,” Dr. Edward Chang, director of neurosurgery at the University of California, San Francisco, told the New York Times.
Johnson was a high school math teacher and former volleyball and basketball coach in Saskatchewan. She had been married for two years and had two children when a stroke left her paralyzed.
“It’s very hard not being able to hug and kiss my children, but that was my reality,” Johnson said. “The nail in the coffin was when I was told I couldn’t have any more children.”
After years of rehabilitation she gradually regained some movement and facial expression, but Johnson remained unable to speak and had to be fed through a tube until swallowing therapy let her eat finely chopped and soft foods.
“My daughter and I love cupcakes,” Johnson said.
The UCSF team, along with colleagues at the University of California, Berkeley, said this is the first time that speech and facial expressions have been synthesized directly from brain signals.
To train the AI system, Johnson had to silently “repeat” various phrases from her 1,024-word vocabulary over and over until the computer recognized the brain-activity pattern associated with each sound.
The AI program was taught to recognize phonemes, the units of sound that make up spoken words, rather than whole words. For example, “Hello” contains four phonemes: “HH”, “AH”, “L”, and “OW”.
By recognizing 39 phonemes, the AI program can decode Johnson’s brain signals into complete words at a rate of about 80 words per minute. That’s roughly half the speed of a normal person-to-person conversation.
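As a rough illustration of the approach the article describes, the sketch below shows how a stream of decoded phonemes might be mapped back to words using a pronunciation lexicon. This is a minimal sketch, not the UCSF team’s actual decoder: the `LEXICON`, the phoneme spellings, and the greedy `decode_words` segmentation are all illustrative assumptions, and a real system would score many candidate sentences with a language model instead of committing to the first match.

```python
# Minimal, illustrative sketch of lexicon-based phoneme-to-word decoding.
# Not the UCSF system: the tiny lexicon below stands in for the 39-phoneme
# inventory and 1,024-word vocabulary described in the article.

# Hypothetical pronunciation lexicon mapping words to ARPAbet-style phonemes.
LEXICON = {
    "hello": ("HH", "AH", "L", "OW"),
    "how": ("HH", "AW"),
    "are": ("AA", "R"),
    "you": ("Y", "UW"),
}

# Invert the lexicon so a decoded phoneme sequence can be looked up directly.
PHONES_TO_WORD = {phones: word for word, phones in LEXICON.items()}


def decode_words(phoneme_stream):
    """Greedily segment a stream of decoded phonemes into known words.

    At each position, try the longest phoneme prefix that matches a lexicon
    entry, emit that word, and continue. Real decoders keep many competing
    hypotheses alive and rescore them with a language model.
    """
    longest = max(len(phones) for phones in PHONES_TO_WORD)
    words, i = [], 0
    while i < len(phoneme_stream):
        for span in range(min(longest, len(phoneme_stream) - i), 0, -1):
            candidate = tuple(phoneme_stream[i : i + span])
            if candidate in PHONES_TO_WORD:
                words.append(PHONES_TO_WORD[candidate])
                i += span
                break
        else:
            i += 1  # skip an unrecognized phoneme rather than stalling
    return words


if __name__ == "__main__":
    # Suppose the neural decoder has emitted this phoneme sequence.
    stream = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
    print(" ".join(decode_words(stream)))  # -> "hello how are you"
```

Working at the level of 39 phonemes rather than 1,024 whole words keeps the set of units the decoder must distinguish small, which is a plausible reason the vocabulary can grow without retraining the system on every new word.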
![Johnson collaborates with UCSF researchers to develop AI systems](https://nypost.com/wp-content/uploads/sites/2/2023/08/NYPICHPDPICT000026650276.jpg?w=1024)
Shawn Metzger, who developed the decoder in a joint bioengineering program at the University of California, Berkeley, and the University of California, San Francisco, told the Southwest News Service that the program’s “accuracy, speed and vocabulary are extremely important.”
“This creates the potential that users will eventually be able to communicate at about the same speed as us and have more natural, normal conversations.”
The team is currently working on a wireless version, which would free the user from having to be physically connected to the computer by wires or cables.
Chang has been working on brain-computer interfaces for more than a decade, and hopes the team’s innovations will soon lead to systems that can generate speech from brain signals.
“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk to others,” Chang told SWNS.
“These advances bring us much closer to making this a real solution for patients,” Chang added.