Artificial intelligence (AI) has been making waves in the tech industry for years, and now it has made a breakthrough in the field of neuroscience. A team of computational neuroscientists has developed a technique that uses AI to translate brain scans into words and sentences, potentially helping individuals with brain injuries or paralysis regain the ability to communicate.
The study was led by Alexander Huth, a computational neuroscientist at the University of Texas at Austin. Huth and his team used functional magnetic resonance imaging (fMRI), which tracks changes in blood flow within the brain as a proxy for neural activity. They scanned the brains of three participants while each listened to roughly 16 hours of storytelling podcasts.
With that data, the researchers produced a set of maps for each subject specifying how that person’s brain responds when they hear a certain word, phrase, or meaning. Using those maps and the language model GPT, they trained the system to predict how a given individual’s brain would respond to language.
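The paper’s exact pipeline isn’t spelled out here, but per-subject maps of this kind are commonly built as a regularized linear regression from language-model features to voxel responses. The sketch below is a minimal illustration of that idea; the array shapes, feature dimensions, and random data are all hypothetical stand-ins, not the study’s actual values:

```python
import numpy as np
from sklearn.linear_model import Ridge

# All shapes and data here are hypothetical stand-ins.
# X: language-model features for the words heard at each fMRI time point
# Y: the measured voxel responses at those same time points
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 768))    # stand-in for real GPT features
Y = rng.standard_normal((2000, 5000))   # stand-in for real fMRI recordings

# Fit one regularized linear map per subject: given what the person
# heard, predict how each voxel in their brain will respond.
encoding_model = Ridge(alpha=10.0)
encoding_model.fit(X, Y)

# The fitted weights act as the per-subject "map" from language to brain;
# predicting the response to new text is then a single matrix product.
predicted_response = encoding_model.predict(X[:1])   # shape (1, 5000)
```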
Initially, the system struggled to turn brain scans into language. But after incorporating GPT, it was able to produce words, phrases, and sentences that accurately matched what the person was hearing. Rather than reading words out of the scan directly, the decoder uses GPT to propose candidate word sequences and keeps the ones whose predicted brain responses best match the measured activity. The technology was particularly good at getting the gist of the story, even if it didn’t always get every word right.
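That search is easier to see in code. The toy sketch below illustrates the general approach just described, propose candidates, predict the brain response each would evoke, keep the best matches, with every component (the vocabulary, the embedding, the encoding model) replaced by a hypothetical stand-in rather than the study’s actual implementation:

```python
import numpy as np

# Every component below is a toy stand-in: a real system would use GPT
# to propose continuations and a fitted per-subject encoding model.
VOCAB = ["the", "dog", "ran", "home", "fast"]
rng = np.random.default_rng(1)
WORD_VECS = {w: rng.standard_normal(16) for w in VOCAB}

def embed(words):
    # Average word vectors as a crude stand-in for language-model features.
    return np.mean([WORD_VECS[w] for w in words], axis=0, keepdims=True)

def propose_next_words(words):
    # A real decoder would ask a language model for likely next words.
    return VOCAB

class ToyEncodingModel:
    def __init__(self):
        self.weights = rng.standard_normal((16, 100))  # features -> voxels
    def predict(self, features):
        return features @ self.weights

def decode_scan(actual_response, encoding_model, beam_width=3, length=5):
    """Beam search: keep the candidate word sequences whose *predicted*
    brain responses best match the measured scan."""
    beams = [(["the"], 0.0)]  # (word sequence, match score)
    for _ in range(length):
        candidates = []
        for words, _ in beams:
            for nxt in propose_next_words(words):
                extended = words + [nxt]
                predicted = encoding_model.predict(embed(extended))
                # Score a candidate by how well its predicted response
                # correlates with what the scanner actually measured.
                score = float(np.corrcoef(predicted.ravel(),
                                          actual_response.ravel())[0, 1])
                candidates.append((extended, score))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # the best-guess transcript

model = ToyEncodingModel()
scan = rng.standard_normal(100)  # stand-in for one measured response
print(" ".join(decode_scan(scan, model)))
```

In a setup like this, the language model supplies linguistic plausibility while the encoding model supplies the brain evidence, which helps explain why the output tends to capture the gist of a story even when individual words are wrong.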
The system could someday aid individuals who have lost the ability to communicate because of brain injury, stroke, or locked-in syndrome, a condition in which individuals are conscious but almost entirely unable to move or speak. However, that will require not only advancing the technology with more training data, but also making it more accessible.
Being based on fMRI makes the system expensive and cumbersome to use, but Huth says the team aims to make it work with easier, more portable imaging techniques such as electroencephalography (EEG), which measures brain activity via electrodes attached to the scalp.
Although the system is still in its early stages and far from perfect, it has the potential to transform the lives of individuals who have lost the ability to communicate. It also raises ethical concerns about privacy and the possibility of mind reading.
“Our thought when we actually had this working was, ‘Oh my God, this is kind of terrifying,’” Huth recalls. To begin to address these concerns, the authors tested whether a decoder trained on one individual would work on another; it didn’t. Consent and cooperation also proved critical: when individuals resisted by performing a task such as counting instead of paying attention to the podcast, the system could not decode any meaning from their brain activity.
Privacy is still a big ethical concern for this type of neurotechnology, says Nita Farahany, a bioethicist at Duke University. Researchers should examine the implications of their work and develop safeguards against misuse early on. “We need everybody to participate in ensuring that that happens ethically,” she says. “[The technology] could be really transformational for people who need the ability to be able to communicate again, but the implications for the rest of us are profound.”
The potential uses of this technology reach well beyond individuals with brain injuries, extending into fields such as law enforcement and marketing. However, the ethical implications will need to be thoroughly examined before the technology can be widely used.
The views expressed in this article are the author’s own and do not necessarily reflect Coverpage’s editorial stance.