
Researchers Harnessed Video Game Tech To Give Voice To A Woman Paralyzed By Stroke

Researchers at UC Berkeley and UC San Francisco (UCSF) teamed up with Edinburgh-based Speech Graphics to create the first brain-computer interface that uses video game technology. The computer reads brain activity and produces speech and facial expressions through an AI-generated avatar.

Ann with her husband Bill testing the brain-computer interface

The tech showed a lot of promise when it was used on Ann, a woman paralyzed by a stroke. Ann suffered the stroke at the age of 30 while playing volleyball with friends. It left her with locked-in syndrome, and doctors were unable to explain why she developed the condition.

According to the researchers, the tech is a ray of hope that may one day help those who cannot speak to communicate naturally. They noted that the facial animation software used to turn brain waves into a talking digital avatar was also used in Hogwarts Legacy and The Last of Us Part II.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” said Dr. Edward Chang, the chairman of neurological surgery at UCSF.

ALSO READ: The Internet Has Exploded With PS5 Pro Rumors. Here Are The Best Concepts

The researchers first identified the part of Ann’s brain that controls speech and implanted a paper-thin rectangular strip containing 253 electrodes near the surface of that area. The electrodes intercepted the brain signals that would have gone to her speech muscles if not for the stroke and transmitted them through a cable attached to the implant.

The signals go to a computer where AI algorithms, trained to recognize brain activity linked with a vocabulary of more than 1,000 words, decode them and translate them into speech. The researchers said the study was the first time facial animation had been generated from brain signals.
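
The article does not go into the models themselves, but the decoding step it describes can be pictured as a classifier that maps a window of multi-electrode activity to the most likely word in a fixed vocabulary. The Python sketch below is a minimal illustration of that data flow, not the researchers’ system: the 253-electrode count comes from the article, while the window size, features, toy vocabulary, and randomly initialized “model” are all placeholder assumptions.

```python
import numpy as np

# Hypothetical sketch of the decoding idea described above: a window of
# multi-electrode activity is reduced to a feature vector and scored
# against a fixed vocabulary. The real system uses trained models; this
# toy example only shows the data flow (253 electrodes -> features ->
# word probabilities -> most likely word).

N_ELECTRODES = 253          # electrode count reported in the article
WINDOW_SAMPLES = 200        # assumed samples per decoding window
VOCABULARY = ["hello", "thank", "you", "water", "yes", "no"]  # tiny stand-in vocabulary

rng = np.random.default_rng(0)

# Stand-in for a trained model: one weight vector per vocabulary word.
weights = rng.normal(size=(len(VOCABULARY), N_ELECTRODES))
bias = rng.normal(size=len(VOCABULARY))

def decode_window(neural_window: np.ndarray) -> str:
    """Map one window of electrode activity to the most likely word."""
    features = neural_window.mean(axis=1)      # crude per-electrode feature, shape (253,)
    scores = weights @ features + bias         # one score per vocabulary word
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                       # softmax over the vocabulary
    return VOCABULARY[int(np.argmax(probs))]

# Simulated electrode data standing in for the implant's output.
fake_window = rng.normal(size=(N_ELECTRODES, WINDOW_SAMPLES))
print(decode_window(fake_window))
```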

“These advancements bring us much closer to making this a real solution for patients,” Chang said.

With the help of the AI, Ann could write text and speak using a voice synthesized from a recording of her speaking at her wedding.

Speech Graphics provided the video game facial animation technology

Ann with her AI avatar generated using video game tech

Ann worked with the researchers for several weeks and helped train the AI to decode her brain activity into facial movements. Speech Graphics CTO and co-founder Michael Berger was part of the study. The company leveraged its expertise in AI-based facial animation to closely simulate muscle contractions, including nonverbal activity, over the course of the study.

In one of the trials, the researchers used Ann’s synthesized voice, rather than her real voice, as input to the Speech Graphics system to drive the simulated muscles. The system transformed the muscle actions into 3D animation using a video game engine.
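
Speech Graphics has not published its pipeline, so the following is only a rough sketch of the idea of audio driving a face: each short frame of the synthesized speech is reduced to a loudness value that is spread across a few muscle channels, which a game engine would then read every frame to deform the avatar. The muscle names, frame size, and energy-based mapping here are assumptions for illustration, not the company’s actual technology.

```python
import numpy as np

# Illustrative sketch of "audio drives the face": each short frame of
# synthesized speech is reduced to an RMS loudness value, which is spread
# across a few hypothetical jaw/lip muscle channels. A game engine would
# read these activations every frame to animate the avatar.

SAMPLE_RATE = 16_000                 # assumed audio sample rate (Hz)
FRAME_SIZE = 320                     # 20 ms frames at 16 kHz (assumed)
MUSCLES = ["jaw_open", "lip_pucker", "lip_corner_pull"]  # placeholder channels

def audio_to_muscle_activations(audio: np.ndarray) -> np.ndarray:
    """Return one activation value in [0, 1] per muscle for each frame."""
    n_frames = len(audio) // FRAME_SIZE
    frames = audio[: n_frames * FRAME_SIZE].reshape(n_frames, FRAME_SIZE)
    energy = np.sqrt((frames ** 2).mean(axis=1))     # RMS loudness per frame
    energy = energy / (energy.max() + 1e-9)          # normalize to [0, 1]
    # Crude mixing: louder speech opens the jaw more, lips follow weakly.
    mix = np.array([1.0, 0.4, 0.6])                  # per-muscle weighting
    return np.clip(energy[:, None] * mix[None, :], 0.0, 1.0)

# One second of fake audio standing in for Ann's synthesized voice.
rng = np.random.default_rng(1)
fake_audio = rng.normal(scale=0.1, size=SAMPLE_RATE)
activations = audio_to_muscle_activations(fake_audio)
print(activations.shape)   # (50 frames, 3 muscle channels)
```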

ALSO READ: Netflix Game Controller App For iPhone, Another Step Towards Becoming A Gaming Powerhouse?

The outcome of that trial was a lifelike avatar of Ann that pronounced words accurately and in sync with the synthesized voice. In another groundbreaking trial, the researchers fed Ann’s brain signals directly into the simulated muscles, letting them serve as a counterpart to her non-functioning muscles.

“Creating a digital avatar that can speak, emote, and articulate in real-time, connected directly to the subject’s brain, shows the potential for AI-driven faces well beyond video games,” Berger said. “When we speak, it’s a complex combination of audio and visual cues that helps us express how we feel and what we have to say. I hope that the work we’ve done in conjunction with Professor Chang can go on to help many more people.”

The video game tech was a quantum leap compared to previous attempts

Ann with researchers from UCSF

What makes the new video game-inspired study better than previous attempts is that it allowed Ann to communicate at an average rate of 62 words per minute, much closer to the roughly 160 words per minute of natural conversation. The new record is 3.4 times faster than the previous record set by a similar device.

ALSO READ: 42-Year-Old Ape Video Gamer Displays Mind-blowing Minecraft Skills, Becomes Internet Sensation 

On a 50-word vocabulary, the brain-computer interface (BCI) recorded an error rate of 9.1%, which is 2.7 times lower than the rate achieved in a 2021 attempt by researchers using a state-of-the-art speech BCI. On a 125,000-word vocabulary, the video game-inspired tech recorded an error rate of 23.8%.
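
For context, the ratios quoted above can be unpacked into the approximate figures they imply; the quick check below simply reverses the article’s own numbers (the derived values are implied by those ratios, not separately reported).

```python
# Back-of-envelope check on the figures quoted above.
new_rate_wpm = 62            # words per minute reported for Ann
natural_rate_wpm = 160       # typical conversational speech
speedup = 3.4                # "3.4 times faster than the previous record"
previous_rate_wpm = new_rate_wpm / speedup        # roughly 18 wpm implied

new_error_50 = 9.1           # % error on the 50-word vocabulary
improvement = 2.7            # "2.7 times lower" than the 2021 attempt
previous_error_50 = new_error_50 * improvement    # roughly 24.6% implied

print(f"Implied previous speed: {previous_rate_wpm:.1f} wpm "
      f"(vs {new_rate_wpm} now, about {natural_rate_wpm} in natural speech)")
print(f"Implied 2021 error rate: {previous_error_50:.1f}% vs {new_error_50}% now")
```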

“This is a scientific proof of concept, not an actual device people can use in everyday life,” said Frank Willett, the lead author of the research published in Nature. “But it’s a big advance toward restoring rapid communication to people with paralysis who can’t speak.”

Remember to share and bookmark this website to stay up to date on all the hottest news in the gaming industry.