A stroke took her voice — nearly two decades later, AI helped her hear it again

When she was just 30 years old, Ann Johnson’s life changed in an instant. A devastating brainstem stroke left her completely paralyzed and unable to speak. Eighteen years later, thanks to groundbreaking artificial intelligence research, she finally heard her own voice again.

In 2005, Johnson was living a full, active life in Saskatchewan, Canada. She taught math and physical education at a high school, coached multiple sports teams, and had recently married. She was also a new mother and had confidently delivered a 15-minute speech at her wedding — something that would later take on deep significance.

One sunny afternoon, while playing volleyball with friends, Johnson suffered a sudden brainstem stroke. The damage was catastrophic. She lost the ability to move or speak, developing what doctors call locked-in syndrome — a rare condition that leaves a person mentally alert but almost entirely unable to communicate.

For years, Johnson could form words in her mind, but no sound would come out. Her mouth simply would not move.

A breakthrough years in the making

Nearly two decades passed before that silence was broken. In 2022, Johnson became the third participant in a clinical trial run by researchers at UC Berkeley and UC San Francisco. Their goal: to restore communication using a brain-computer interface that translates brain signals into speech.

The project builds on years of research into how the brain plans and produces speech. Scientists identified the precise brain regions responsible for forming spoken language, then developed AI models capable of decoding those signals — bypassing the damaged neural pathways that once connected Johnson’s thoughts to her voice.

The system does not read thoughts; it responds only when the user actively tries to speak. “We wanted her to be fully in control,” one researcher explained. If Johnson doesn’t attempt speech, the system remains silent.

Hearing herself again

To personalize the experience, researchers reconstructed Johnson’s voice using audio from her wedding speech. They paired it with a digital avatar and implanted a device over the area of her brain involved in speech planning. When Johnson attempted to say a sentence, the AI translated her brain activity into sound.

“What do you think of my artificial voice?” Johnson asked her husband during one early session.

For her, the moment was deeply emotional.

Previously, Johnson communicated using eye-tracking software, slowly selecting letters on a screen at about 14 words per minute. Natural conversation runs closer to 160. Hearing her words spoken aloud again, even imperfectly, was transformative.

Rapid progress

Early versions of the technology had delays of up to eight seconds and sounded robotic. But recent advances, published in Nature Neuroscience, dramatically reduced that lag to about one second, allowing near real-time speech synthesis. The AI can now stream speech as the brain produces it, rather than waiting for an entire sentence to finish.

Researchers believe future versions could include realistic 3D avatars and wireless implants, making the technology far more practical for everyday life.

Looking ahead

Although Johnson’s implant was removed in 2024 for reasons unrelated to the study, she remains closely involved, offering feedback and ideas for improvement. She hopes future systems will be fully wireless and faster.

Her long-term dream is to work as a counselor in a rehabilitation center — using a neuroprosthesis to speak with patients facing life-altering injuries.

“I want them to see that their lives aren’t over,” she wrote. “Disabilities don’t have to stop us or slow us down.”

For Johnson, hearing her voice again wasn’t just a technological milestone — it was proof that even after 18 years of silence, connection is still possible.
