Tapping Into the Brain to Help a Paralyzed Man Speak

Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, Medical School, helps Pancho, a man paralyzed for nearly 20 years, speak through an implant in his brain that connects to a computer program. (Mike Kai Chen / The New York Times)

He has not been able to speak since 2003, when he was paralyzed at the age of 20 from a serious stroke following a horrific car accident.

Now, in a scientific milestone, scientists have tapped into the speech areas of his brain – allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain send signals to a computer that displays his intended words on the screen.

His first recognizable phrase, researchers said, was, “My family is outside.”


The achievement, published Wednesday in the New England Journal of Medicine, could ultimately help many patients with conditions that rob them of the ability to speak.

“This is farther than we ever imagined we could go,” said Melanie Fried-Oken, a professor of neurology and pediatrics at Oregon Health & Science University who was not involved in the project.

Three years ago, when Pancho, now 38, agreed to work with neuroscientists, they were unsure whether his brain had retained the mechanisms of speech at all.

“That part of his brain could have been asleep, and we just did not know if it would ever really wake up for him to speak again,” said Dr. Edward Chang, chairman of neurological surgery at the University of California, San Francisco, who led the research.

The team implanted a rectangular sheet of 128 electrodes designed to detect signals from speech-related sensory and motor processes associated with the mouth, lips, jaw, tongue and larynx. In 50 sessions over 81 weeks, they connected the implant to a computer by a cable attached to a port in Pancho’s head and asked him to try to say words from a list of 50 common ones he helped suggest, including “hungry,” “music” and “computer.”

As he did, electrodes sent signals through a form of artificial intelligence that tried to recognize the intended words.

“Our system translates the brain activity that would normally have guided his vocal tract directly into words and sentences,” said David Moses, a postdoctoral engineer who developed the system with Sean Metzger and Jessie R. Liu, graduate students. The three are lead authors of the study.

Pancho (who asked to be identified only by his nickname to protect his privacy) also tried to say the 50 words in 50 different sentences like “My nurse is right outside” and “Bring my glasses, thank you” and in answer to questions like “How are you today?”

His response appeared on the screen: “I am very good.”

In almost half of the 9,000 times Pancho tried to say single words, the algorithm got it right. When he tried to say sentences written on the screen, he did even better.

By running the algorithm’s results through a kind of autocorrect language-prediction system, the computer correctly recognized individual words in the sentences almost three-quarters of the time and perfectly decoded whole sentences more than half the time.
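The idea behind such a language-prediction step can be sketched in a toy example. The probabilities, the tiny vocabulary and the bigram table below are all invented for illustration – the actual UCSF system is far more sophisticated – but the sketch shows how a language model can overrule a classifier that slightly favors a wrong word, echoing the “Hungry, how are you?” error described later in the article:

```python
import math

# Hypothetical per-position word probabilities from a neural classifier.
# At the first position the classifier slightly favors the wrong word, "hungry".
emissions = [
    {"hungry": 0.6, "how": 0.4},
    {"are": 0.9, "bring": 0.1},
    {"you": 0.8, "do": 0.2},
]

# Hypothetical bigram language-model probabilities; "<s>" marks sentence start.
bigram = {
    ("<s>", "how"): 0.5, ("<s>", "hungry"): 0.1,
    ("how", "are"): 0.7, ("hungry", "are"): 0.2,
    ("are", "you"): 0.8,
}
FLOOR = 1e-4  # fallback probability for unseen bigrams

def decode(emissions, bigram):
    """Viterbi search for the best word sequence under classifier x language model."""
    paths = {"<s>": 0.0}  # log-probability of the best path ending in each word
    back = []             # one backpointer table per position
    for dist in emissions:
        new_paths, pointers = {}, {}
        for word, p_emit in dist.items():
            best_prev, best_score = None, -math.inf
            for prev, score in paths.items():
                p_lm = bigram.get((prev, word), FLOOR)
                s = score + math.log(p_emit) + math.log(p_lm)
                if s > best_score:
                    best_prev, best_score = prev, s
            new_paths[word], pointers[word] = best_score, best_prev
        paths, back = new_paths, back + [pointers]
    # Trace back from the best final word to recover the full sentence.
    word = max(paths, key=paths.get)
    sentence = [word]
    for pointers in reversed(back[1:]):
        word = pointers[word]
        sentence.append(word)
    return list(reversed(sentence))

print(" ".join(decode(emissions, bigram)))  # → how are you
```

Picking each word greedily would yield “hungry are you”; the bigram prior makes “how are you” the higher-scoring sequence overall.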

“Proving that you can decode speech from the electrical signals in the speech motor area of the brain is groundbreaking,” said Fried-Oken, whose own research involves trying to detect signals using nonimplanted electrodes in a cap placed on the head.

After a recent session observed by The New York Times, Pancho smiled, wearing a black fedora over a white knitted hat to cover the port, and tilted his head slightly with the limited movement he has. In gravelly bursts, he demonstrated a sentence composed of words from the study: “No, I’m not thirsty.”

In several weeks of interviews for this article, he communicated by email, using a head-controlled mouse to painstakingly type key by key, the method he usually relies on.

The brain implant’s recognition of his spoken words is “a life-changing experience,” he said.

“I just want, I do not know, to get anything good because I have always been told by doctors that I have 0 chances to get better,” Pancho wrote during a video chat from the Northern California nursing home where he lives.

He later emailed: “Not to be able to communicate with anyone, to have a normal conversation and express yourself in any way, it’s devastating, very hard to live with.”

During research sessions with the electrodes, he wrote: “It’s a lot like getting another chance to talk again.”

Pancho was a healthy field worker in California’s vineyards until a car accident after a football game one summer Sunday, he said. After surgery for a severe stomach injury, he was discharged from the hospital walking, talking and thinking he was on the road to recovery.

But the next morning, “I threw up and was not able to hold myself up,” he wrote. Doctors said he had experienced a brainstem stroke, apparently caused by a blood clot after surgery.

A week later, he woke up from a coma in a small, dark room. “I tried to move, but I could not lift a finger, and I tried to speak, but I could not spit a word out,” he wrote. “So I started crying, but since I could not make any sound, the only thing I made was some ugly gesture.”

It was scary. “I wish I never came back from the coma I was in,” he wrote.

The new approach, called a speech neuroprosthesis, is part of a wave of innovation aimed at helping tens of thousands of people who lack the ability to speak but whose brains contain neural pathways for speech, said Dr. Leigh Hochberg, a neurologist with Massachusetts General Hospital, Brown University and the Department of Veterans Affairs, who was not involved in the study but co-wrote an editorial about it.

They can include people with brain damage or conditions such as amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease) or cerebral palsy, in which patients lack sufficient muscle control to speak.

“The urgency cannot be overstated,” said Hochberg, who leads a project called BrainGate that implants smaller electrodes to read signals from individual neurons; it recently decoded a paralyzed patient’s attempt at handwriting.

“It’s now only a matter of years,” he said, “before there will be a clinically useful system that allows for the restoration of communication.”

For years, Pancho communicated by spelling out words on a computer using a pointer attached to a baseball cap, a laborious method that allowed him to type about five correct words per minute.

“I had to bend my head forward and down and poke one key letter at a time to type,” he emailed.

Last year, researchers gave him another device that involved a head-controlled mouse, but it’s still not as fast as the brain electrodes in the research sessions.

Via the electrodes, Pancho communicated 15 to 18 words per minute. That was the maximum rate the study allowed, because the computer waited between prompts. Chang says faster decoding is possible, although it is unclear whether it will approach the pace of typical conversation: about 150 words per minute. Speed is a key reason the project focuses on speaking, tapping directly into the brain’s word-production system rather than the hand movements involved in typing or handwriting.

“It’s the most natural way for people to communicate,” he said.

Pancho’s lively personality has helped researchers navigate challenges, but also occasionally makes speech recognition uneven.

“Sometimes I can’t control my emotions and laugh a lot and not do too well with the experiment,” he emailed.

Chang recalled times when, after Pancho successfully produced a phrase, “you could see him visibly shaking, and it looked like he was kind of giggling.” When that happened, or when he yawned or got distracted during the repetitive tasks, “it did not work so well, because he was not really focused on getting those words out. So we have some things to work on, because we obviously want it to work all the time.”

The algorithm sometimes confused words with similar phonetic sounds, identifying “to go” as “bring,” “do” as “you” and words beginning with “f” – “faith,” “family,” “feel” – as a “v” word, “very.”

Longer sentences needed more help from the language prediction system. Without it, “How do you like my music?” was decoded as “How do you like bad bring?” and “Hi how are you?” became “Hungry, how are you?”

But in sessions after the pandemic interrupted the research for several months, accuracy improved, Chang said, both because the algorithm learned from Pancho’s efforts and because “there are certain things that change in his brain” that help it “light up and show us the signals we need to get those words out.”

Prior to his stroke, Pancho had only attended school up to sixth grade in his native Mexico. With remarkable determination, he has since graduated from high school, taken college classes, received a web-developer certificate and started studying French.

“I think the car wreck made me a better person, and smarter too,” he emailed.

With his limited wrist movement, Pancho can maneuver an electric wheelchair, pressing the joystick with a stuffed sock tied to his hand with elastic bands. In stores, he hovers close to an item until the cashier deciphers what he wants, like a cup of coffee.

“They place it in my wheelchair and I bring it back to my home so I can get help drinking it,” he said. “The people here at the facility are surprised; they always ask me, ‘HOW DID YOU BUY IT, AND HOW DID YOU TELL THEM WHAT YOU WANT!?’”

He also works with other researchers using the electrodes to help him manipulate a robotic arm.

His twice-weekly speech sessions can be difficult and exhausting, but he wrote that he looks forward to “waking up and getting out of bed every day, waiting for my UCSF people to arrive.”

The speech study is the culmination of over a decade of research in which Chang’s team mapped brain activity for all vowel and consonant sounds and tapped into the brains of healthy people to produce computer-generated speech.

Researchers stress that the electrodes do not read Pancho’s mind, but detect brain signals corresponding to every word he tries to say.

“He thinks the word,” Fried-Oken said. “It’s not random thoughts that the computer picks up.”

Chang said that “in the future, we may be able to decode what people are thinking,” raising “some really important questions about the ethics of this type of technology.” But this work, he said, “is really only about restoring the voice of the individual.”

In more recent tasks, Pancho mouths words silently and spells out less common words using the military alphabet: “delta” for “d,” “foxtrot” for “f.”

“He really is a pioneer,” Moses said.

The team also wants to design implants with greater sensitivity and to make the system wireless and fully implantable, to avoid infection, Chang said.

As more patients participate, researchers can find individual brain variations, Fried-Oken said, adding that if patients are tired or ill, the intensity or timing of their brain signals may change.

“I just wanted to be able to do something for myself somehow, even a little bit,” Pancho said, “but now I know I’m not just doing it for myself.”

© 2021 The New York Times Company
