New Technology Deciphers Thoughts Using fMRI and AI Language Models


Combining the ability of fMRI to monitor neural activity with the predictive power of AI language models has resulted in a thought decoder.

Functional magnetic resonance imaging (fMRI) has transformed cognitive neuroscience, but it has a fundamental limitation: neuroscientists cannot look at a brain scan and tell what someone is seeing, hearing, or thinking in the scanner. Now, according to a new study published in Nature Neuroscience, researchers are one step closer to decoding internal experiences into words using fMRI and artificial intelligence language models. This technology could benefit people who cannot outwardly communicate, such as those who have suffered strokes or are living with amyotrophic lateral sclerosis.

The AI language system: an early relative of the model behind ChatGPT

Combining fMRI’s ability to monitor neural activity with the predictive power of AI language models has resulted in a decoder that can reproduce, with a high level of accuracy, the stories a person listened to or imagined telling in the scanner. The decoder is still in its infancy, however: it requires extensive training for each user, and it doesn’t produce an exact transcript of the words heard or imagined.
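To make the approach concrete, here is a minimal toy sketch of the candidate-scoring idea the study describes: the decoder proposes word sequences, predicts the brain activity each sequence should evoke, and keeps the sequences whose predictions best match the measured scan. Everything here (the tiny vocabulary, the embedding-based "encoding model," the beam search) is illustrative and far simpler than the study's actual per-user models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in the real system, a GPT-style language model proposes
# candidate words, and a per-subject encoding model, fit on hours of fMRI
# recordings, predicts the brain response each candidate should evoke.
VOCAB = ["the", "dog", "ran", "home", "cat", "sat", "down", "fast"]
EMBED = {w: rng.normal(size=16) for w in VOCAB}

def predict_response(words):
    """Toy encoding model: one feature vector per word stands in for the
    predicted scan time course of a candidate phrase."""
    return np.stack([EMBED[w] for w in words])

def decode(scan, n_words=4, beam_width=3):
    """Beam search: keep the candidate phrases whose predicted responses
    best match the measured scan."""
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for words, _ in beams:
            for w in VOCAB:  # a real decoder scores LM-proposed continuations
                seq = words + [w]
                fit = -np.sum((predict_response(seq) - scan[: len(seq)]) ** 2)
                candidates.append((seq, fit))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return " ".join(beams[0][0])

# Simulate a noisy scan evoked by a "true" phrase, then try to recover it.
true_phrase = ["the", "dog", "ran", "home"]
scan = predict_response(true_phrase) + rng.normal(scale=0.1, size=(4, 16))
print(decode(scan))  # with low noise, recovers "the dog ran home"
```

This framing also explains the resistance result below: when the measured scan matches none of the candidates' predictions, as when a participant silently tells a different story, the best-scoring output is effectively noise.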

The team also tested what happens when someone tries to resist or sabotage the decoding. Study participants who attempted to trick the system by silently telling a different story produced gibberish results.


There are still obstacles, but researchers are cautiously optimistic

Researchers emphasize the need for proactive policies that protect the privacy of one’s internal mental processes. On the accuracy front, the decoder struggled with grammatical features such as pronouns and with proper nouns such as names and places.

The biggest roadblock is fMRI itself, which doesn’t directly measure the brain’s rapid firing of neurons but instead tracks the slow changes in blood flow that supply those neurons with oxygen. Despite these limitations, the ability to translate imagined speech into words is critical for designing brain-computer interfaces for people unable to communicate with language.
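To see why this matters, the snippet below (a rough illustration using the standard double-gamma approximation of the haemodynamic response, not anything from the study) convolves a few brief neural events with that response: events arriving one second apart smear into a single slow wave that peaks seconds later, which is part of why the decoder recovers the gist of ideas rather than a word-for-word transcript.

```python
import numpy as np
from math import gamma

def hrf(t, peak=6.0, under=16.0, ratio=1 / 6):
    """Standard double-gamma haemodynamic response: a rise peaking about
    5 s after a neural event, followed by a shallow undershoot."""
    g = lambda x, a: x ** (a - 1) * np.exp(-x) / gamma(a)
    return g(t, peak) - ratio * g(t, under)

t = np.arange(0, 30, 0.5)       # 30 s of signal sampled every 0.5 s
neural = np.zeros_like(t)
neural[[2, 4, 6]] = 1.0         # three brief neural events at 1, 2, and 3 s

bold = np.convolve(neural, hrf(t))[: len(t)]  # blood-flow signal fMRI sees
print(f"events at 1-3 s; single BOLD peak near {t[np.argmax(bold)]:.1f} s")
```

Three distinct events a second apart arrive at the scanner as one sluggish bump, so the decoder must infer words from a signal that has already blurred them together.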

The technology is many years away from being used as a brain-computer interface in everyday life: the scanning hardware isn’t portable, and the AI models must be trained separately on each user’s brain. Still, researchers hope that commonalities across people’s brains will be uncovered in the future to make the technology more accessible.


About Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain, clearly, what it is they do.
