This story is part of a series on the current progression in Regenerative Medicine. This piece discusses advances in brain-machine interfaces.
In 1999, I defined regenerative medicine as the collection of interventions that restore normal function to tissues and organs damaged by disease, injured by trauma, or worn by time. I include a full spectrum of chemical, gene, and protein-based medicines, cell-based therapies, and biomechanical interventions that achieve that goal.
In the mid-20th century, A.E. van Vogt explored the possibility of telepathy in his novel Slan, in which mutations in the human germline produce a derivative human species capable of telepathic communication. Thanks to new technology, such advances may be on the brink of possibility.
Telepathy is the translation of thought into transmissible signals, without sound, that can be received and understood by a computer or by other people. Emerging neural decoding technologies may soon allow highly accurate translation of thought into written or spoken language. In a recent study published in Nature Machine Intelligence, Dr. Xupeng Chen and colleagues at New York University decoded neural signals directly into spoken words. This technology could revolutionize brain-machine communication in the coming years.
In previous months, I have described several brain-machine interface technologies that integrate speech translation. However, most have been either limited in their efficacy or unwieldy in their form factor. While these innovations are noteworthy, they remain far from practical use in the modern world.
Enter Chen and colleagues’ experiments, in which electrodes were implanted directly on the surface of the brain to record electrical activity, a method known as electrocorticography. It provides among the highest-resolution recordings we can gather, especially compared to noninvasive methods.
Chen and colleagues used a cohort of epilepsy patients already fitted with the requisite implants. They noted that their speech decoder faced two main challenges. First, deep-learning AI models for speech decoding require a large dataset, which did not exist at the time of their study. Second, individual speech production varies significantly in rate, tone, pitch, and other factors. Reconciling these issues into a generalized speech decoder would be challenging.
Their system included two main components: a neural decoder and a speech synthesizer. To train the synthesizer, the researchers fed spoken language into a speech encoder, which analyzed it for the factors discussed above, such as tone and pitch. They then fed that information into the synthesizer, building a database of speech parameters.
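The study's actual encoder is far more sophisticated, but the core idea of reducing raw audio to a compact speech parameter, such as pitch, can be illustrated with a toy example. The sketch below is not from the paper; it simply estimates the fundamental frequency of an audio frame using autocorrelation, one classical way a parameter like pitch can be measured.

```python
import numpy as np

def estimate_pitch(frame, sample_rate):
    """Estimate the fundamental frequency (Hz) of one audio frame
    via autocorrelation: a periodic signal correlates strongly with
    itself when shifted by exactly one period."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    min_lag = sample_rate // 500          # ignore pitches above 500 Hz
    peak_lag = min_lag + np.argmax(corr[min_lag:])
    return sample_rate / peak_lag

sample_rate = 16_000
t = np.arange(sample_rate // 10) / sample_rate   # a 100 ms frame
frame = np.sin(2 * np.pi * 220 * t)              # synthetic 220 Hz tone

pitch = estimate_pitch(frame, sample_rate)
print(f"estimated pitch: {pitch:.0f} Hz")        # roughly 220 Hz
```

In a full pipeline, parameters like this would be extracted frame by frame and stored, giving the synthesizer a compact recipe from which to reconstruct speech.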
The decoder was trained in a similar fashion: it was fed short snippets of neural data to build a database from which the AI could infer longer neural signals in the future.
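Slicing a long neural recording into short, overlapping training snippets is a standard preprocessing step for models like this. The following sketch (my illustration, not the authors' code) shows the idea on a hypothetical recording with 64 electrode channels:

```python
import numpy as np

def make_snippets(recording, window, stride):
    """Slice a long recording (time x channels) into short,
    overlapping snippets suitable for training a decoder."""
    starts = range(0, len(recording) - window + 1, stride)
    return np.stack([recording[s:s + window] for s in starts])

# Hypothetical recording: 1,000 time steps from 64 electrode channels.
recording = np.random.randn(1000, 64)
snippets = make_snippets(recording, window=100, stride=50)
print(snippets.shape)  # (19, 100, 64): 19 overlapping 100-step snippets
```

Overlapping windows multiply the effective number of training examples, which matters when, as the authors note, large speech-decoding datasets do not yet exist.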
They also equipped their decoding model with three deep learning architectures: ResNet, Swin Transformer, and LSTM. Additionally, each architecture comes in two forms: causal and non-causal. Causal models use only past and current neural signals to generate speech, whereas non-causal models use past, present, and future signals. Future signals come in the form of auditory and speech feedback that is unavailable in a real-time application, meaning that while non-causal models may be more accurate than causal ones, they are less relevant for real-time speech decoding.
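The causal/non-causal distinction can be made concrete with a minimal example. The sketch below (illustrative only, not the study's models) applies a simple three-sample moving average to a signal two ways: the non-causal version averages past, present, and future samples, while the causal version pads the past so that each output depends only on samples up to the current time step.

```python
import numpy as np

signal = np.arange(6, dtype=float)   # stand-in for a neural feature trace

# Non-causal: output at time t averages signal[t-1], signal[t], signal[t+1].
# signal[t+1] is "the future" -- unavailable in a real-time application.
non_causal = np.convolve(signal, np.ones(3) / 3, mode="same")

# Causal: pad only the past, so output at time t sees nothing beyond t.
padded = np.concatenate([np.zeros(2), signal])
causal = np.convolve(padded, np.ones(3) / 3, mode="valid")

print(non_causal)  # first output already used signal[1], a future sample
print(causal)      # each output depends only on signal[:t+1]
```

The same principle scales up: a causal network masks or shifts its receptive field so it never reads ahead, trading some accuracy for real-time usability.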
After training the decoding system and early testing on the epilepsy cohort, the researchers found that the causal versions of ResNet and Swin Transformer were the most accurate architectures for the decoder and focused on these for further analyses.
Using their speech decoder across a range of subjects, the researchers made three observations, two of which came as a surprise and a third that confirmed a previous assumption. One surprise was that differences between left- and right-hemisphere decoding were minimal. While early expectations held that the speech-dominant left hemisphere would decode more effectively from person to person, the data did not support this assumption. This could be helpful when the person being decoded has brain damage or an impediment isolated to one side of the brain.
In a second surprise, the researchers found that the density of the electrodes was less significant for decoding accuracy than once thought. While it was suggested that higher-density implants would yield more accurate neural signals and, therefore, more accurate speech translations, differences between high- and low-density implants were minimal. This is exciting, as low-density electrode implants could be much cheaper and could make the decoding system much more accessible if it were to reach a commercial market.
Thirdly, the researchers examined which cortical regions were most relevant for speech decoding. As suspected, the sensorimotor cortex was confirmed to be the most heavily involved brain region, especially its ventral portion. However, the researchers found that left- and right-hemisphere activation in these regions was similar, highlighting the potential of right-hemisphere neural prosthetics, previously thought less optimal.
While a highly accurate speech decoder is a noteworthy creation, the future implications of such research are much more exciting.
Most relevant to this study in particular is that the researchers have made their neural decoding pipeline publicly available. This means their blueprint and foundation can be used in future developments to streamline advances in the field. We encourage all in this field to pursue similarly open approaches moving forward.
More speculatively, this technology may pave the way for the eventual wireless, implant-free translation of thought into speech or action. Science fiction has long pondered telepathy, as in van Vogt's Slan. This type of technology, while once seemingly science fiction, could be another step on the path to that future. I anticipate further advances in brain-machine interfaces in the near term.
To read more of this series, please visit www.williamhaseltine.com