Consider a selection of stereotypical movie scenes:
1. A witness to a crime walks hesitantly along a line of blank-faced characters, trying to identify the culprit.
2. A defendant pleads with the jury from the dock: “I didn’t do it – believe me!”
3. A victim screams: “All right – I’ll talk!” as some vile instrument of torture is applied to their flesh.
Now imagine the witness, defendant or victim lying in a brain scanner. Nearby, an image forms on a monitor. It’s fuzzy at first, then resolves into a clear picture – maybe a face, an event or a string of words. The scanner has read the person’s mind and presented its contents.
That scanner scene has been a sci-fi staple for decades. But recently, ‘mind reading’ by brain imaging has taken some big steps into the real world.
German and US researchers recently reconstructed speech spoken by a patient undergoing brain surgery. The words in the recording were translated from a readout of the electrical patterns generated in the patient’s brain.
This brain-to-text study is the latest to demonstrate that neurotelepathy – knowing what a person is experiencing by interpreting their brain activity – is a reality, even if it is still relatively crude. Last year, a group of Yale researchers produced digital reconstructions of faces that were being viewed by people in an fMRI scanner. Again, the source of the images was the pattern of activity detected in the viewers’ brains. The published results suggest the reconstructed faces are as recognisable as, or more so than, traditional photofits.
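For the technically curious, the broad recipe behind reconstructions of this kind can be sketched in a few lines of code. The toy example below is not the Yale team’s actual pipeline – it runs on made-up numbers, and every name and dimension in it is illustrative. It shows one common approach: learn a mapping from brain activity to a compact ‘face space’ (here, PCA components of face images), then invert that mapping for a new scan.

```python
# Hedged sketch of the general face-reconstruction recipe, on synthetic
# stand-in data. Every dimension and name here is illustrative; this is
# not the published Yale pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_faces, n_pixels, n_voxels, n_components = 300, 64 * 64, 500, 20

# Training faces (flattened images) reduced to a compact 'eigenface' code.
faces = rng.normal(0, 1, (n_faces, n_pixels))
pca = PCA(n_components=n_components).fit(faces)
face_codes = pca.transform(faces)

# Pretend voxel activity is a noisy linear readout of the face code.
readout = rng.normal(0, 1, (n_components, n_voxels))
brain = face_codes @ readout + rng.normal(0, 0.5, (n_faces, n_voxels))

# Learn the brain -> face-code mapping, then reconstruct from a new scan.
decoder = Ridge(alpha=1.0).fit(brain, face_codes)
new_scan = brain[0:1]  # stand-in for a fresh scan of a viewer
reconstructed = pca.inverse_transform(decoder.predict(new_scan))
print(reconstructed.shape)  # (1, 4096): a 64x64 face image, flattened
```

In real studies the training set is faces the volunteer actually viewed in the scanner, and the quality of the reconstruction depends on how much of the face code the voxel patterns genuinely carry.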
Prof Marvin Chun, who ran the study in his lab at Yale, says it has finally given him an answer to the question so often asked by strangers when they learn he is a psychologist: “They want to know if I can read their mind,” he says. “Now I have an answer. Yes. If I can get them in a scanner, I can.”
So far, there is a limit to what can be read. Alan Cowen, the Yale PhD student who designed the study, stresses that the volunteers willingly conveyed the information that was extracted. “We can only read active parts of the brain,” he explains. “So you couldn’t read passive memories – you would have to get the person to imagine the memory to read it. It’s a matter of time, and eventually, maybe 200 years from now, we’ll have some way of reading inactive parts of the brain. But that’s a much harder problem, as it involves measuring very fine details of brain structure that we don’t even really understand.”
Private thoughts
This doesn’t settle the privacy question, however, because you don’t necessarily have control over which parts of your brain are active. In an even more Hollywood-like study from 2013, a Japanese group managed to recreate dreams. Brain activity was recorded from sleeping volunteers, then translated into a video of what they were likely experiencing during rapid eye movement (REM) sleep. The resulting films were more detailed than the dreamers’ own recollections of their experiences.
Neurotelepathy is possible because the location of brain functions is pretty consistent across individuals. Almost anyone who looks at a face will show activation in an area on the underside of the brain, just behind the ear. Looking at inanimate objects stirs activity in a different area. Thinking sad thoughts activates different areas from happy thoughts. Saying ‘aaaaah’ involves different neurones from saying ‘teeeee’.
Of course, there are differences between individuals. If you and I hear the word ‘moon’, our brains will not respond identically. For you, the word may jog images of astronauts, while in me it might trigger the notion of cheese. But the activity correlated with hearing ‘oooo’ and imagining a silver disc will be common to us both. Build a big enough database of different brains responding to the same things, and you can arrive at a ‘signature’ for each stimulus.
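That ‘signature’ idea is, at heart, a pattern-classification problem. Here is a minimal sketch in Python – with synthetic numbers standing in for real scanner data, and all names and sizes invented for illustration – in which a classifier is trained on noisy activity patterns evoked by two stimulus categories, then asked to decode a fresh scan.

```python
# Minimal sketch of stimulus-'signature' decoding, using synthetic data
# in place of real fMRI recordings. All numbers and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 200          # activity readings per scan (real scans have far more)
n_scans_per_class = 50  # scans collected while viewing each stimulus type

# Pretend each stimulus category evokes a characteristic spatial pattern
# (its 'signature'), buried in noisy voxel measurements.
signature_face = rng.normal(0, 1, n_voxels)
signature_place = rng.normal(0, 1, n_voxels)

faces = signature_face + rng.normal(0, 2, (n_scans_per_class, n_voxels))
places = signature_place + rng.normal(0, 2, (n_scans_per_class, n_voxels))

X = np.vstack([faces, places])
y = np.array([0] * n_scans_per_class + [1] * n_scans_per_class)  # 0=face, 1=place

# A linear classifier's learned weights amount to a 'signature' that
# separates the two stimulus categories.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 'Read the mind' of a new scan: a noisy pattern evoked by a face.
new_scan = signature_face + rng.normal(0, 2, n_voxels)
print("Decoded:", "face" if clf.predict([new_scan])[0] == 0 else "place")
```

Coarse category decoding of exactly this kind – faces versus places – is where the field began, as the next study shows.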
One of the first studies to show that this method works was carried out at MIT in 2000. A group led by Prof Nancy Kanwisher showed images to volunteers while they were being scanned, then examined the readouts.
“Just by eyeballing the data, I correctly determined in over 80 per cent of the trials whether the subject was imagining faces or places,” says Kanwisher. “I worried for a long time before we published these data that people might think we could use an MRI to read their minds. Would they not realise the results obtained in my experiment were for a specific, constrained situation? That we used faces and places because we know which highly specific parts of the brain process those two categories? That we selected only cooperative subjects who were good mental imagers? And so on. I thought, ‘Surely, no one would try to use fMRI to figure out what somebody else was thinking?’”
But of course, people would.
“One day, I believe we’ll be able to send full, rich thoughts to each other directly using technology,” announced Facebook CEO Mark Zuckerberg during a Q&A session. “You’ll just be able to think of something and your friends will immediately be able to experience it too.”
There is a massive gulf in practicality between today’s experiments and Zuckerberg’s vision. The brain-to-text experiment, for example, involved placing electrodes directly on the brains of patients during surgery. Meanwhile, the Yale face reconstruction study depended on a huge IT development project and hours of tedious fMRI scanning for the volunteers.
Even the most gung-ho neuroscientists hedge their bets about the future of neurotelepathy. Prof Jack Gallant at the University of California, Berkeley believes that thought-conveying helmets will eventually exist, but not for a long time.
“The most optimistic estimates are that you can recover one-millionth of the information that’s available in the brain at any given point in time,” Gallant says. “It’s probably smaller than that. Where we are today is just measuring a pale shadow of what you could potentially measure, if you had a better measurement technology.”
Health help
Even though movie-style thought transference is currently impossible, experimental neurotelepathy is slowly creeping into use. A ‘painometer’ is being developed that makes a person’s suffering visible to others. The consciousness level of patients undergoing surgery has been monitored to ensure they don’t start to feel the surgeon’s knife. Locked-in syndrome patients have been able to communicate simple thoughts such as ‘yes’ or ‘no’ just by thinking them. These applications have succeeded because the ‘signatures’ of the experiences they convey are less complex than those associated with face perception or speech. But the principle of reading information from a brain is already established.
Mind-reading devices that benefit the sick are ethically unchallenging, but the idea of technology that can look into your head and see things you would rather keep to yourself is a different matter.
So far, the only brain reader to have made it out of the lab is the lie detector based on EEG or fMRI, which has been around for a decade. In India, it has even helped convict a man of murder, though UK and US courts generally refuse to admit it as evidence. The companies that sell it claim it can tell whether a person is lying with 90 per cent accuracy, but those results come from closely controlled experiments; it is far less effective in the real world.
Legal aid
Sooner or later, neurotelepathy will almost certainly be good enough for law enforcement and intelligence agencies. Many find the prospect scary, but the harm currently being done by our inability to see inside people’s heads might outweigh the cost of breaching mental privacy. Conscious eyewitness recall is terrible, and mistaken recognition is responsible for more convictions of the innocent than all other factors combined. Most people can detect lying at little better than chance. And if information must be extracted, surely brain scanning is more humane than torture? Like all technology, its value will depend entirely on how it is used.
This article has been edited for the web. The original version of this article appears in the September 2015 issue of BBC Focus magazine