Has mindreading technology arrived? Or is the WEF fearmongering?
Last year, in Davos, Professor Nita Farahany told an audience at the World Economic Forum (WEF) that,
We can’t decode speech and we may never decode full thoughts from the brain, using simple wearable devices.
This year, in California, a team at UCSF (University of California, San Francisco) is using artificial intelligence to decode words that are consciously thought and turn them into speech, using,
... a device that translates brain activity into intended words.
Next step: Wireless technology that interprets brain signals representing thought and transmits them... to where?
According to Farahany, that next step has already been taken. But has it? Little proof has been offered to back Farahany's claims, and no database has been revealed that matches specific brain signals to specific words, raising the possibility that the WEF is fearmongering and making law-abiding citizens worry that the government knows their thoughts.
Here we present more details about Farahany's claims, leaving the reader to determine their reliability:
Big Brother is reading your mind
Farahany is a professor of law and philosophy at Duke University, whose focus is on the ethical, legal, and social implications of emerging technologies. The title of her book, “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” foresees challenges to freedom of thought in the very near future.
In her speech at Davos, she freely acknowledges both the promise and the perils of mind-reading technology but insists that she doesn’t want it banned, as she sees great potential in it. Her speech was introduced by Nicholas Thompson, CEO of The Atlantic, who described the technology in glowing terms:
It’s going to make you see and understand a wonderful future where we can use brainwaves to fight crime, be more productive, and find love...
Farahany's video opens with an animation of a futuristic office where employees wear earbuds that transmit their brainwave data to the office computer system. The information is fully accessible to the boss, who rewards one employee for her “brain metrics” with a performance bonus.
Things swiftly take a turn, however, when those brain metrics are utilized by law enforcement:
A somber cloud has fallen over the office. Along with emails, text messages, and GPS location data, the government has subpoenaed employees’ brain data from the past year. They have compelling evidence that one of your coworkers has committed massive wire fraud, and now they are looking for synchronized brain activity between your coworker and the people he has been working with.
While you know you’re innocent of any crime, secretly you’ve been working with him on a new start-up venture. Shaking, you remove your earbuds.
How can we swipe "escape"?
The animation ends there, and Farahany begins her talk by stressing that this technology not only exists but is already in use. She adds that many people are primed to accept it as part of life, given that so many of us use Apple Watches and other wearable devices that monitor some of our internal processes.
During the Trump-Harris debate, some suspected that the earrings worn by Harris were in fact transmitters, helping her with her responses. Earring transmitters already exist. So do other wearables such as headbands, hats, earbuds, and even what look like tattoos behind the ears, all of which can not only transmit but also decode brain signals.
"Swiping" with your mind to create a seamless interaction of technology is, Farahany says, part of,
... an exciting and promising future. But also a potentially scary one.
We can’t literally decode complex thought yet, but there’s already a lot we can decode...
It's for your own safety...
Her first example of the “promise” inherent in the new technologies is “a safer workplace.” Monitoring brain metrics of truckers, for instance, could potentially avert accidents such as those that occur due to driver fatigue (which would apply to pilots and train drivers too).
With a simple hat containing embedded electrode sensors that score the wearer's alertness from 1 to 5, the employer gains important information in real time.
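Neither Farahany nor the vendors of such hats spell out how a 1-to-5 score would actually be computed. As a rough illustration only, the sketch below assumes one proxy sometimes used in EEG fatigue research, the ratio of slow (theta) to fast (beta) band power, and maps it onto a five-point scale. The band edges, thresholds, and sampling rate are illustrative assumptions, not any vendor's method.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average power of `signal` within the [lo, hi] Hz band (illustrative)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def alertness_score(eeg_window, fs=256):
    """Map a single-channel EEG window to a 1-5 alertness score.

    Assumption: a higher theta/beta power ratio indicates drowsiness, a proxy
    used in some EEG fatigue studies. The cut-points below are arbitrary
    placeholders, not a validated scale.
    """
    theta = band_power(eeg_window, fs, 4.0, 8.0)
    beta = band_power(eeg_window, fs, 13.0, 30.0)
    ratio = theta / max(beta, 1e-12)
    thresholds = [0.5, 1.0, 2.0, 4.0]              # placeholder cut-points
    return 5 - sum(ratio > t for t in thresholds)  # 5 = alert, 1 = drowsy

# Example: score one second of random data standing in for an EEG channel
window = np.random.randn(256)
print(alertness_score(window, fs=256))
```

Even a toy version makes the privacy issue concrete: the score is computed from raw brain signals, so whoever receives the stream could, in principle, compute far more than a fatigue number.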
It’s noteworthy that Farahany states that it’s the employer, not the employee, who gains access to the brain metrics. The unstated case for such technology is that society must be protected from reckless truck drivers who keep going for hours on end and cause fatal accidents.
It doesn’t seem to have occurred to her that drivers don’t usually keep going for hours on end for the fun of it, but rather because their salary depends on getting the goods to their destination on time, and driving for that many hours straight may be the only way to achieve it if their bosses impose unrealistic targets.
It will help employers to be even more considerate of their staff...
Farahany also describes another scenario in which open access to staff brain metrics could create a safer environment for all: in a warehouse, workers’ brain sensors reveal that they are becoming exhausted, and robots deployed by their sympathetic employers adjust conditions to give them enough time and rest to keep going.
Again, it doesn’t seem to have occurred to her that employers could use the same brain metric information to weed out exhausted employees and replace them with more robust workers.
That said, she admits to the “dystopian” potential in brain monitoring, pointing out that train drivers in China are already required to wear special hats that transmit their brain signals to their employers, who can monitor their fatigue levels.
This may be something we want to embrace as a society.
Done well, neurotechnology has extraordinary promise. Done poorly, it could become the most oppressive technology we’ve ever introduced.
It will reduce workplace stress...
Farahany also cites examples of brainwave-monitoring technology already in use by major employers. Amazon has been experimenting with such devices, testing them on warehouse employees. So have Tesco and Walmart, prompting employee protests and undermining morale, given that the aim is to detect time-wasting on the job and unscheduled breaks.
During the “pandemic,” she adds, many companies that switched to virtual offices used some form of monitoring to keep tabs on their employees. She singles out Microsoft for its study of employees using Zoom for meetings, which revealed that virtual meetings are more stressful than those conducted in person, partly because of the different backgrounds on people’s screens.
Streamlining the backgrounds was an easy and cost-free measure to reduce stress levels. One imagines that it wasn’t the only measure that Microsoft implemented as a result of brainwave monitoring.
And it will make you more productive
Next, as she brings up an image of futuristic-looking earbuds on the screen, Farahany describes how,
... the newest way to monitor attention is through a device like this: earbuds launching later this year [i.e., late 2023] which can pick up brainwave activity and tell whether a person is paying attention or their mind is wandering...
It turns out you can not only tell whether someone is paying attention — you can also discriminate between the kinds of things they’re paying attention to. When you combine this with other types of surveillance, the power becomes quite precise.
She also describes a new technology developed by a laboratory at MIT which “gives a person a buzz, literally, when their mind starts to wander.”
Brain implants that grant the power of speech
All these developments support Farahany’s assertion that “more and more of what’s in the brain will become transparent.” Things are moving so fast, she added, that related legislation “can’t keep up.” Last year, she stated that “we can’t decode speech,” and yet by now that is already happening, as reported in an article on JAMA Network (the Journal of the American Medical Association).
The article is based on an interview with Edward Chang, MD, co-director of the Center for Neural Engineering and Prostheses at UCSF. He and his team claim to have succeeded in developing technology that enables people with aphasia (who have lost the power of speech, usually due to stroke) to “speak.”
Where we began was really about trying to understand the basic principles by which electrical activity in neurons controls the muscles in the vocal tract to give rise to all of the different sounds that we make when we speak. What I’m referring to is how the lips, the jaw, and the tongue shape the breath in order to create sounds like consonants, vowels, and even the intonation of the voice when we’re speaking.
... it became very, very clear that this would have potential medical applications for people who have lost the ability to communicate because of paralysis.
Chang’s team has used a variety of techniques to decode brain signals, including research conducted during brain surgery:
I specialize in brain surgeries where we have patients awake and we do language mapping. Part of my lab is literally in the operating room, where we’re constantly learning and understanding how the brain is giving rise to words and testing that and, most importantly, protecting that during surgeries when we have a mission to either remove the spot causing seizures or remove a brain tumor.
That’s what we call brain mapping, and it’s a really core part of my clinical practice, which is synergistic with our broader goals.
The next step was to digitize the decoding process, using electrodes placed on the brain:
Once we figured out that there was this clear correlation between the brain activity patterns to specific sounds of speech, we applied for approval to do a study and we asked the FDA if we could use a new kind of device, essentially an electrode array that lays on the brain surface.
The array electrodes are connected to a port that has about 253 channels that stream the data from the brain activity to a computer that analyzes the brain signals and uses AI to translate that brain activity into specific intended words or speech sounds or even avatar facial movements.
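Chang's description implies a streaming pipeline: multichannel neural signals arrive continuously, are cut into short windows, and are passed to a trained model that emits words, speech sounds, or avatar commands. The sketch below only illustrates that data flow; the channel count follows the quote, but the sampling rate, window length, toy feature, and the `decode_window` stand-in are hypothetical, not the UCSF system.

```python
import numpy as np

N_CHANNELS = 253        # per the interview; the remaining numbers are assumptions
SAMPLE_RATE_HZ = 1000   # assumed acquisition rate
WINDOW_MS = 80          # assumed analysis window

def decode_window(features):
    """Hypothetical stand-in for the trained AI model: features -> token or None."""
    return None  # a real system would return a predicted word or speech sound here

def stream_decoder(neural_stream):
    """Consume an iterator of per-sample channel vectors and yield decoded tokens."""
    samples_per_window = SAMPLE_RATE_HZ * WINDOW_MS // 1000
    buffer = []
    for sample in neural_stream:            # sample: array of shape (N_CHANNELS,)
        buffer.append(sample)
        if len(buffer) == samples_per_window:
            window = np.stack(buffer)        # (samples_per_window, N_CHANNELS)
            features = window.mean(axis=0)   # toy feature: mean activity per channel
            token = decode_window(features)
            if token is not None:
                yield token
            buffer.clear()

# Example with synthetic noise standing in for brain recordings
fake_stream = (np.random.randn(N_CHANNELS) for _ in range(800))
for word in stream_decoder(fake_stream):
    print(word)
```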
From 256 squiggly lines to 70 words per minute
Central to this process has been machine learning, AI that is capable of processing stupendous amounts of data and finding and interpreting patterns, linking specific brainwaves to words:
The secret to this has really been ... machine learning. Ninety-nine percent of our time on this has nothing to do with the surgery or the device itself: it has to do with understanding the data.
When you look at the screen, it looks like 256 squiggly lines, which are the brainwaves ... [We] use recurrent neural networks to take that sequence of data points that looks like squiggly lines and translate that into words.
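Chang names recurrent neural networks as the tool that turns the “squiggly lines” into words. A minimal sketch of that idea is below, written in PyTorch; the architecture, layer sizes, and vocabulary are illustrative assumptions made for the example, not the team's actual model, which the interview does not describe in detail.

```python
import torch
import torch.nn as nn

class BrainToTextRNN(nn.Module):
    """Toy recurrent decoder: a sequence of 256-channel frames -> per-frame token logits.

    Illustrative only: the GRU layers, hidden size, and vocabulary are made up;
    the UCSF model's architecture and training setup are not public in this interview.
    """
    def __init__(self, n_channels=256, hidden=512, vocab_size=1024):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.to_logits = nn.Linear(hidden, vocab_size)

    def forward(self, x):                 # x: (batch, time, n_channels)
        h, _ = self.rnn(x)                # h: (batch, time, hidden)
        return self.to_logits(h)          # (batch, time, vocab_size)

model = BrainToTextRNN()
frames = torch.randn(1, 200, 256)         # 200 time steps of 256 "squiggly lines"
logits = model(frames)
predicted_tokens = logits.argmax(dim=-1)  # greedy per-frame guess at word/sound IDs
print(predicted_tokens.shape)             # torch.Size([1, 200])
```

The point of the sketch is simply that the model sees only numbers over time; everything hinges on training data that pairs those numbers with the words a participant was trying to say.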
Chang stresses that the interpretation is based on brainwaves created by participants actually reading words on a screen, not just thinking random thoughts:
The basic task works by asking our participants to read text on a screen — not just thinking about what they’re seeing, but actually trying to say the words. They have to be volitionally trying to say it.
He has worked with several aphasic patients, two of whom had been unable to speak for over 15 years:
Our last participant was able to get up to about 70 words per minute with this device. Normal speech is about 150 words per minute, and we still have a lot of room for improvement. But it was fast and the vocabulary was above a thousand words. We modeled it to 30,000 words, where it continued to have good performance.
From 15 years as a mute quadriplegic to 2 weeks that restore speech
Like Farahany, Chang highlights the incredible leaps the technology has taken during just the past few years.
This field is moving so fast that it’s hard for me to predict. I couldn’t even predict that we were going to have the performance levels that we had last year. And the algorithms that we’re starting to use are just becoming more and more powerful.
We didn’t even have these tools 5 years ago. And now, we’re getting very, very close ... [to] what we’d want for something that’s clinically available. In the beginning, it took a couple of months. With our latest participant, it took basically 2 weeks to train the algorithms up.
And also like Farahany, Chang stresses the incredible potential inherent in the technology, which certainly promises a very different type of life for those who regain the ability to express themselves:
I think our big focus is really about full expression ... to restore people to who they are...
For our last participant in the study, we had this very unique opportunity where we had this 1 hour of video from her wedding, which was 20 years ago, and we were able to train the AI on her pre-injury voice. So it wasn’t just like any out-of-the-box canned voice that you have in your computer. This was actually her voice that was trained to do this model, and that’s possible now with AI. With the avatar, she was able to choose the color of the hair, the eyes.
We’re really talking about a whole new level of personalization and expression with this ... Even the intonation of the voice is something that we want to get right.
Wireless mindreading
Farahany was asked at the WEF whether brain metrics could be measured without the use of a wearable device and answered flatly: “No.”
Now, a year later, Chang and his team are working on upgrading the abilities of the technology they have pioneered, with the intention of making it completely wireless:
We are working on the electrical engineering and the hardware engineering to make this a fully embedded wireless system so that you don’t have to be connected to a wire on a port in the scalp. It’ll all be transmitted wirelessly to a receiver and powered that way as well.
Asked whether the technology could be used to decode thoughts that are not “mentally verbalized,” Chang drew a distinction based on the type of brain signals involved:
I think what we’ve shown is that when someone has that voluntary intent to speak, we can decode that. But like I said earlier, some of the things that we tried where people were just thinking about stuff doesn’t work as well.
One reason for this is that “verbalized” thoughts seem to be processed in a different part of the brain than other types of thoughts:
The electrode sensors on the brain aren’t necessarily tapping where the random thoughts are happening in the brain. They’re really recording from the part of the brain that controls the vocal tract muscles.
But Chang, like Farahany, predicts that in the future, things will likely go a lot further, which will introduce complex privacy issues:
So the way that we’ve designed our work currently is about what is truly volitional, what is meant to be expressed by the individual. But at the same time, it’s also telling us that it may be potentially possible to go beyond that in the future, into things that are more private, that are more inner speech, for example, inner thoughts perhaps even.
Who can be trusted with this?
Addressing her audience, Farahany asks, “Are you ready for this?” She presents a future — a very imminent future — in which “more and more of what’s in the brain will become transparent.”
“This can be something that unlocks potential for humanity,” she stresses, but adds that,
I believe that there is a pathway forward with such technology but it’s putting it in the hands of employees, enabling them to use it as a choice, not using it as a measure of brain metrics to fire or hire or discriminate against them.
While she admits that persuading employers to respect freedom of thought and the privacy of brain metrics will be a challenge, she seems to believe that encouraging widespread respect for “a fundamental right to what it means to be human” could be enough to protect people:
I believe we have to start by recognizing a right to cognitive liberty, a right to self-determination over our brains and mental experiences.
Neither Chang nor Farahany addresses the inherent limitations of “mindreading” technology, even though the animation Farahany screened raised very real problems. For instance, how will the innocent coworker of the suspected employee prove that, although her brain metrics synch with those of the presumed criminal, she wasn’t actually plotting with him?
Chang, too, is less than forthcoming about some key aspects of the technology he is developing. When his interviewer asks how he can verify whether the output genuinely reflects what his aphasic subjects intended to say, he evades the question.
Without doubt, a person who has suffered neurological injury and lost the power of speech will jump at the chance of regaining it, even if the technology is imperfect, and even if it opens up the risk of not only his words but also his thoughts becoming transparent to others.
However, it remains very unclear just how the “dystopian possibility” Farahany describes can be averted, and how we can protect ourselves against a tool that, in her words,
... really could become the most oppressive technology we have ever unleashed.