New AI technology translates complex thoughts to text

AI technology recently developed by scientists at the University of Texas at Austin can translate a person’s thoughts into written text, raising questions about the future of thought privacy.

The technology, called a semantic decoder, is non-invasive. It uses fMRI scans to measure activity across regions of the brain and, rather than decoding a person’s thoughts word for word, reconstructs their semantic representation, the “gist” of what the person is thinking.
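To make that idea concrete, here is a minimal sketch, in Python, of the general “gist matching” approach such decoders take: propose candidate phrases, predict the brain response each would evoke, and keep whichever best matches the recording. Every name, dimension, and model below is an illustrative assumption for this sketch, not the UT Austin system.

```python
# Minimal sketch of gist matching: score candidate phrases by how well
# their predicted brain response matches a recorded response.
# All models and numbers here are toy assumptions for illustration.
import zlib
import numpy as np

EMBED_DIM = 128  # hypothetical semantic-feature dimension

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding standing in for a language model."""
    vec = np.zeros(EMBED_DIM)
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % EMBED_DIM] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def predict_response(text: str) -> np.ndarray:
    """Toy encoding model: the fMRI response `text` would evoke.
    A real encoding model is fit to hours of a participant's own
    scans; here the 'response' is just the semantic features."""
    return embed(text)

# Simulated recording: the response to the phrase the participant
# actually heard, plus measurement noise.
rng = np.random.default_rng(0)
heard = "i have not started learning to drive yet"
recorded = predict_response(heard) + rng.normal(scale=0.05, size=EMBED_DIM)

# Decoding step: rank candidates by how closely their predicted
# response matches the recording, and keep the best.
candidates = [
    "she has not started to learn to drive yet",
    "the weather was cold and rainy today",
    "he went to the store for coffee",
]

def score(text: str) -> float:
    pred = predict_response(text)
    return float(pred @ recorded /
                 (np.linalg.norm(pred) * np.linalg.norm(recorded) + 1e-9))

for text in sorted(candidates, key=score, reverse=True):
    print(f"{score(text):.3f}  {text}")
```

In this toy version the candidate that shares the most meaning-bearing words with the heard phrase scores highest; a real decoder would use a language model to generate candidates and a learned per-participant encoding model to score them.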

In the study documenting their findings, the University of Texas researchers had participants listen to podcasts while the AI system translated their brain activity, then compared the decoded text with what was actually heard. One participant who heard the words “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Another who heard “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” had their thoughts decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said Alexander Huth, an assistant professor of neuroscience and computer science at UT Austin, in a statement. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

In a separate experiment, participants watched silent videos while being scanned, and the program was able to describe what was happening on screen from their brain activity alone.

The university says the AI technology has many potential uses, such as helping stroke victims who can no longer speak communicate their thoughts. But it also carries far-reaching implications for how the system might be used by other actors, such as law enforcement, to forcibly extract information from citizens.

While the technology requires the participant’s cooperation to work, that does not necessarily prevent government entities from coercing that cooperation in order to extract private thoughts.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said University of Texas computer science doctoral student Jerry Tang, though he did not explain how. “We want to make sure people only use these types of technologies when they want to and that it helps them.”