Thanks to carefully trained brain-machine interfaces and, above all, AI, two laboratories have managed to give speech back to two women who had been deprived of it.
Losing speech after an illness or brain injury often leaves people isolated. So that they can once again express their thoughts, emotions and needs, scientists have long been working on brain implants combined with powerful AI. Two recent studies have given a voice back to two women: one is paralyzed following a stroke, while the other suffers from a progressive neurodegenerative disease.
In the United States, thanks to the system developed by researchers at UC San Francisco (UCSF) and UC Berkeley, the stroke patient was able to express herself through a virtual avatar. AI algorithms transform her brain signals into speech and facial expressions. To do this, the scientists implanted a thin rectangle of 253 electrodes on the surface of her brain, over areas critical for speech: the same areas that, before the stroke, drove the muscles of her lips, tongue, jaw and larynx (voice box). These electrodes are connected by cable to computers. The patient then worked with the researchers to train the AI algorithms to recognize her unique brain signals.
For weeks, she repeated sentences drawn from a 1,024-word vocabulary. The AI's task was not to recognize whole words, but to reconstruct them from a set of 39 phonemes. A speech synthesizer was also built from old audio recordings of the patient's voice. The system can decode a large vocabulary and turn it into text at a speed of 78 words per minute, with an error rate of 25%. That is almost half the speed of natural conversation, but it is already huge progress for this woman, who can now communicate with her husband. The researchers are now working on a wireless version of this brain-machine interface.
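Neither study's actual code appears in this article, but the underlying principle (classify short windows of neural activity into phonemes, then collapse repeats) can be sketched in a few lines of Python. Everything below, from the linear classifier to the random features, is a hypothetical stand-in for the trained deep networks the teams actually used.

```python
import numpy as np

# Standard 39-phoneme ARPAbet set, matching the phoneme count in the article.
PHONEMES = ["AA", "AE", "AH", "AO", "AW", "AY", "B", "CH", "D", "DH",
            "EH", "ER", "EY", "F", "G", "HH", "IH", "IY", "JH", "K",
            "L", "M", "N", "NG", "OW", "OY", "P", "R", "S", "SH",
            "T", "TH", "UH", "UW", "V", "W", "Y", "Z", "ZH"]

rng = np.random.default_rng(0)
# Hypothetical stand-in for learned weights: one row per phoneme,
# one column per electrode channel (253 in the UCSF implant).
W = rng.normal(size=(39, 253))

def decode_window(features):
    """Classify one window of 253-channel neural features as a phoneme."""
    logits = W @ features
    return PHONEMES[int(np.argmax(logits))]

def collapse(stream):
    """Merge consecutive duplicates, so 'K K AE AE T' becomes 'K AE T'."""
    out = []
    for p in stream:
        if not out or out[-1] != p:
            out.append(p)
    return out

# With random features and weights the output is meaningless; a real system
# feeds recorded neural activity through a trained network instead.
windows = [rng.normal(size=253) for _ in range(6)]
print(collapse([decode_window(w) for w in windows]))
```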
Predicting words from phonemes
In the other case, the Stanford Medicine laboratory managed to transcribe into text the brain activity of a woman suffering from a severe degenerative neurological disease. This 68-year-old patient can still formulate the motor commands that generate phonemes. To exploit them, Stanford Medicine researchers implanted two tiny sensor arrays on the surface of her brain, in two distinct regions involved in speech production. Each array carries 64 electrodes, which penetrate the cerebral cortex to a depth of 3.5 millimeters.
Here again, the AI was trained to distinguish the nuances of brain activity associated with the formulation of 39 phonemes. After 25 sessions in which the patient repeated 260 to 480 sentences, the system was able to reconstruct the words associated with those phonemes. The error rate was just 9.1% on a vocabulary of about fifty words, rising to 23.8% on a 125,000-word vocabulary. The conversion speed in this case was 62 words per minute.
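The jump from 9.1% to 23.8% reflects a simple fact: the bigger the vocabulary, the more words share near-identical phoneme sequences. Here is a minimal sketch, assuming a decoder that matches a noisy phoneme sequence against a pronunciation lexicon by edit distance; this is an illustration of the principle, not Stanford's actual method.

```python
# Toy lexicon: word -> pronunciation as a phoneme tuple. A real system
# would hold up to 125,000 entries, which is where the ambiguity comes from.
LEXICON = {
    "hello":  ("HH", "AH", "L", "OW"),
    "help":   ("HH", "EH", "L", "P"),
    "yellow": ("Y", "EH", "L", "OW"),
}

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (pa != pb))  # substitution
    return dp[-1]

def predict_word(decoded):
    """Pick the vocabulary word whose pronunciation best fits the decoding."""
    return min(LEXICON, key=lambda w: edit_distance(decoded, LEXICON[w]))

# One mis-decoded phoneme makes all three words tie at distance 1: with
# fifty words such collisions are rare, with 125,000 they are everywhere.
noisy = ("HH", "EH", "L", "OW")
print(predict_word(noisy))  # prints "hello", the first of the tied candidates
```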
In both cases, the approaches, although promising, remain for now confined to the laboratory, but this is already enormous progress!