Super-cooperators

Clear and direct telepathic communication is unlikely to be developed. But brain-to-brain links still hold great promise

by Gary Lupyan & Andy Clark

The Mind Reader (1933) starring Warren William. Public domain photo

In a letter he wrote in 1884, Mark Twain lamented that ‘Telephones, telegraphs and words are too slow for this age; we must get something that is faster.’ We should (in the future) communicate, he said, ‘by thought only, and say in a couple of minutes what couldn’t be inflated into words in an hour and a-half.’

Fast-forward to 2020, and Elon Musk suggests in an interview that by using his ‘neural net’ technology – a lace-like mesh implanted in the brain – we ‘would, in principle [be] able to communicate very quickly, and with far more precision, ideas and language.’ When asked by his interviewer, Joe Rogan: ‘How many years, before you don’t have to talk?’ Musk responds: ‘If the development continues to accelerate, then maybe, like, five years – five to 10 years.’

Despite the very real progress the previous century brought to our understanding of both language and the brain, we are no closer to telepathy than we were in Twain’s time. The reason, we will argue, is that the telepathy we’ve been promised – the sort envisaged by Twain and Musk, and popularised in countless movies and TV shows – rests on a faulty premise.

‘Good old-fashioned telepathy’ (GOFT) involves a direct transfer of thoughts from one mind to another. It has captivated people for a few reasons. First, it bypasses the limitations and vicissitudes of language. With GOFT, we no longer need to struggle to put each concept into words or to decode someone’s language. This bypassing of language is a central feature of GOFT; it is what licenses science-fiction writers to imagine humans and aliens communicating telepathically despite not sharing a language, culture or biology.

Second, GOFT promises more precise and genuine communication. The ambiguities of language are legion. We all have experiences of saying one thing, only to be understood as saying something else (and those are just the miscommunications we were alerted to!). Because language is so flexible, it is also easy to lie and contradict oneself. These apparent shortcomings have, for centuries, inspired inventions of artificial languages that try to remove ambiguity and duplicity. A direct thought-to-thought transfer would seem the ultimate solution.

Finally, GOFT promises faster communication. Many of us have the intuition that we can think faster than we can speak or write, and that having to rely on language to communicate is an impediment. It is no coincidence that one of the aims of Neuralink, Musk’s neural-interfaces/telepathy start-up, is to allow humans to communicate at the speed of thought.

At the root of GOFT, however, is a problem. For it to work, our thoughts have to be aligned, to have a common format. Alice’s thoughts beamed into Bob’s brain need to be understandable to Bob. But would they be? To appreciate what real alignment actually entails, consider machine-to-machine communication that takes place when Bob sends an email to Alice. For this seemingly simple act to work, Bob and Alice’s computers have to encode letters in the same way (otherwise an ‘a’ typed by Bob would render as something different for Alice). The protocols used by Bob’s and Alice’s machines for transmitting the information (eg, SMTP, POP) also have to be matched. If that email has an attached photo, additional alignment must exist to ensure that the receiving machine can decode the image format (eg, JPG) used by the sender. It is these formats (known collectively as encodings and protocols) that allow machines to ‘understand’ one another. These formats are the products of deliberate engineering and they required universal buy-in. Just as postal systems around the world had to agree to honour each other’s stamps, companies and governments had to agree to use common encodings such as Unicode and protocols such as TCP/IP and SMTP.
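
By way of a concrete illustration, here is a small Python sketch, entirely ours and not part of any email standard, of what goes wrong when two machines do not share an encoding: the very same bytes, read under a different convention, come out garbled.

```python
# A minimal sketch of why shared encodings matter: the same bytes,
# decoded under two different conventions, yield two different texts.

message = "Café au lait for two"

# Bob's machine encodes the text as UTF-8 bytes before sending.
raw_bytes = message.encode("utf-8")

# If Alice's machine also assumes UTF-8, the message arrives intact.
print(raw_bytes.decode("utf-8"))    # Café au lait for two

# If Alice's machine instead assumes Latin-1, the accented character
# is misread and the text comes out mangled.
print(raw_bytes.decode("latin-1"))  # CafÃ© au lait for two
```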

But is there any reason to think that our thoughts are aligned in this way? At present, we have no reason to imagine that the neural activity constituting Bob’s thought – for example, I’m in the mood for some truffle risotto – would make any sense to anyone other than Bob (indeed, we are not even certain if Bob’s mental state could be interpreted by Bob himself in a year’s time). How then does Bob communicate his risotto desires to Alice? The obvious solution is to use a natural language like English. To be useful, a shared language has to be learned. But, once learned, it lets us use a common set of symbols (English words) to token particular thoughts in the minds of other English speakers.

It is tempting to assume that the reason why language works as well as it does is that our thoughts are already aligned, and that language is just a way of communicating them: our thoughts are ‘packaged’ into words and then ‘unpacked’ by a receiver. But this is an illusion. It is telling that, even with natural language, conceptual alignment is hard work, and that it drops off when we stop actively using language.

Natural languages thus accomplish a version of what machine protocols and encodings do – they provide a common protocol that (to some extent) bridges the varied formats of our thoughts. Language, on this view, does not depend on prior conceptual alignment; it helps create it.

Would it be possible to create alignment between our thoughts? Is there some way to transform Bob’s mental state into a form that is compatible with Alice’s thoughts or, better yet, with everyone’s? Let’s consider three possible solutions.

The first is to transform our thoughts into a natural language like English. Rather than beaming raw thoughts from one mind to another, we beam words instead. This could work. But of course everyone involved would need to already share a language like English, turning telepathy into a fancy form of texting.

The second is to computationally transform raw mental states into some common format – a universally understandable ‘language of thought’. As of right now, there is no reason to think that such a transformation is possible. It is conceivable to us that such a system could be used to transmit general states – eg, distinguishing Yes! vs Meh… – and perhaps mental images. But we don’t see how this method would work to transmit arbitrary thoughts – a main promise of GOFT.
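
To make the idea of transmitting only ‘general states’ concrete, here is a purely illustrative sketch (our toy example, not a real decoding pipeline from the literature): a decoder that maps a single crude feature of a simulated brain signal, its average power, onto one of two coarse labels.

```python
import numpy as np

# A toy 'coarse state' decoder. It maps one crude feature of a simulated
# brain signal (mean power) onto one of two labels. Entirely hypothetical:
# real neural decoding is noisier and far richer than this.

rng = np.random.default_rng(0)

def simulated_signal(state: str) -> np.ndarray:
    """Fake one second of signal; 'yes' states carry a bit more power."""
    base = rng.normal(0.0, 1.0, 256)
    return base * (1.5 if state == "yes" else 1.0)

def decode(signal: np.ndarray, threshold: float = 1.5) -> str:
    """Map mean signal power to a coarse label."""
    power = float(np.mean(signal ** 2))
    return "Yes!" if power > threshold else "Meh..."

for true_state in ["yes", "meh", "yes"]:
    print(true_state, "->", decode(simulated_signal(true_state)))
```

Even granting that something like this could work for coarse distinctions, nothing in it scales up to the open-ended, compositional thoughts that GOFT promises to transmit.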

The third is to map specific thoughts to specific meanings in a predetermined way, creating a kind of ‘telepathese’. As it happens, modern attempts at telepathic communication (of which there are now a few) take just this form. Let’s take a look at two.

In a study in 2014, a team of researchers led by the computer scientist Rajesh Rao paired people to jointly play a game, trying to fire a virtual cannon to defend a city from enemy rockets. In each pair, one person (the ‘sender’) could see a screen showing the position of the target but could not fire the cannon. The other person, the ‘receiver’, could not see the screen, but could press the ‘fire’ button. The two players were linked with a brain-to-brain interface created by connecting the sender to an electroencephalograph (EEG) – a device for measuring small voltage fluctuations evoked by brain activity using electrodes placed on the scalp. These voltages were then used to trigger magnetic pulses in a transcranial magnetic stimulation (TMS) machine positioned near the receiver’s scalp. These magnetic pulses, when delivered to the part of the scalp overlying a specific part of the motor cortex, produced muscle contractions that, in this case, caused the receiver to press the ‘fire’ button.
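
Stripped of the hardware, the control loop in such a setup is surprisingly simple. Here is a hedged, schematic sketch in Python; the EEGSource and TMSCoil classes are stand-ins we have invented for the actual acquisition and stimulation equipment used in the study.

```python
import time

# A schematic sender-to-receiver loop for a Rao-style link.
# EEGSource and TMSCoil are placeholder classes, not real device drivers.

class EEGSource:
    """Stand-in for the sender's EEG amplifier."""
    def read_window(self) -> list:
        # A real system would return a short window of scalp voltages.
        return [0.0] * 64

class TMSCoil:
    """Stand-in for the coil placed over the receiver's motor cortex."""
    def pulse(self) -> None:
        print("TMS pulse delivered: receiver's hand presses 'fire'")

def imagined_movement_detected(window, threshold=2.0) -> bool:
    """Crude detector: flag windows whose average power exceeds a threshold.
    The actual study classified changes in sensorimotor rhythms."""
    power = sum(v * v for v in window) / len(window)
    return power > threshold

def run_link(eeg: EEGSource, coil: TMSCoil, duration_s: float = 5.0) -> None:
    """Poll the sender's EEG and fire the receiver's coil on detected intent."""
    end = time.time() + duration_s
    while time.time() < end:
        if imagined_movement_detected(eeg.read_window()):
            coil.pulse()
        time.sleep(0.25)

run_link(EEGSource(), TMSCoil())
```

Notice that nothing in this loop carries a ‘thought’: it simply converts one measurable bodily signal into one prearranged bodily effect.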

Let’s put aside the question of whether this is genuine communication or a somewhat macabre remote-control. One could imagine a more finessed version in which the magnetic pulse only suggests rather than causes the firing action. But however much we finesse it, the information being exchanged is highly specific, and meaningful only in this particular context, and only after the sender and receiver have been briefed (using a natural language) on how the game works. The message being sent through the EEG signal is not a thought or idea. Rather, it is, quite literally, the motor command that would ordinarily drive the sender’s hand muscles to contract.

Is there a way to extend this type of brain-to-brain interface so that it is less tied to a specific game? In a study published the same year, the psychologist Carles Grau and colleagues also coupled a ‘sender’ and a ‘receiver’ using an EEG/TMS rig. Senders were instructed to imagine either moving their hands or their feet. The two resulting EEG patterns can be distinguished, and were used either to trigger a TMS pulse to the receiver’s visual cortex strong enough to produce a phosphene (a perceived flash of light), or to deliver a pulse that produced no phosphene. So what we have is a setup where a sender can think a thought (eg, imagining her hands or feet moving), which causes a receiver to perceive or not perceive a phosphene. This method can, in principle, be used to communicate arbitrary information. For example, one can use Morse code: ‘hello’ becomes .... . .-.. .-.. --- (where hand imagery is a dot, and foot imagery is a dash). This is, of course, slow and error-prone, but the real problem is that this too is not GOFT. Although we are now closer to the signals being ‘thoughts’, their meanings need to be prearranged, either falling back on English words (such as brain-to-brain texting using Morse code) or requiring senders and receivers to learn a new protocol, such as associating a particular pattern of on/off signals with a particular object. Here again, we already have such a protocol that we learn in infancy: language.

From telepathic communication to telepathic coordination

We have so far taken a dim view of the possibility of telepathy that assumes our thoughts are aligned. There are some ways of aligning our thoughts, perhaps by training people to use highly specific protocols of the sort used by Grau and colleagues. But by requiring alignment in advance, many of the key benefits of telepathy threaten to be lost. Instead of being able to communicate with people using our thoughts alone, we must first be trained on, essentially, how to be more alike. Instead of gaining a new window on alien ways of thinking and reasoning, this form of telepathy would work only if almost everything was already the same (hence well-aligned).
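
To make vivid just how much advance agreement such a protocol demands, here is a hedged sketch (ours, not software from Grau’s study) of the Morse-style scheme described above. Everything meaningful lives in lookup tables that sender and receiver must already share.

```python
# The Grau-style channel treated as a protocol: each letter is spelled out
# in Morse code, each Morse symbol is sent as a prearranged mental image
# (hand vs foot), and the receiver experiences a phosphene or no phosphene.
# The 'telepathy' lives entirely in these shared tables.

MORSE = {"h": "....", "e": ".", "l": ".-..", "o": "---"}

SYMBOL_TO_IMAGERY = {".": "hand", "-": "foot"}       # sender's side
IMAGERY_TO_PHOSPHENE = {"hand": True, "foot": False}
PHOSPHENE_TO_SYMBOL = {True: ".", False: "-"}        # receiver's side

def send(word: str) -> list:
    """Turn a word into the sequence of phosphene on/off events it produces."""
    events = []
    for letter in word:
        for symbol in MORSE[letter]:
            events.append(IMAGERY_TO_PHOSPHENE[SYMBOL_TO_IMAGERY[symbol]])
    return events

def receive(events: list, letter_lengths: list) -> str:
    """Decode phosphene events back into letters, given agreed letter lengths."""
    symbols = "".join(PHOSPHENE_TO_SYMBOL[e] for e in events)
    reverse_morse = {code: letter for letter, code in MORSE.items()}
    word, i = "", 0
    for n in letter_lengths:
        word += reverse_morse[symbols[i:i + n]]
        i += n
    return word

events = send("hello")
print(receive(events, [len(MORSE[c]) for c in "hello"]))   # hello
```

Note that even the boundaries between letters have to be agreed in advance (real Morse marks them with timing gaps): yet another piece of alignment that must be in place before a single ‘thought’ can cross the channel.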

But perhaps there is hope for telepathy – or something like it – yet. For there is another way of thinking about telepathy that suggests intriguingly different avenues for empirical research and experiment. To see what we have in mind, it helps to take a step back and ask what language is for in the first place. One possibility – the one that seems most in line with our reflections on alignment – is that it is a means of sharing thoughts and information between individuals. But information-sharing is beneficial only insofar as it leads to different actions. This opens up a different way of thinking about language and about the prospects for (a kind of new-fangled) telepathy.

Instead of viewing communication between people as a transfer of information, we can think about it as a series of actions we perform on one another (and often on ourselves) to bring about effects. The goal of language, thus understood, is not (or is not always) alignment of mental representations, but simply the informed coordination of action. On this picture, successful uses of language need not demand conceptual alignment. This view of language as a lever for coordination, a tool for practical action, can be found in research by Andy Clark (2006), Mark Dingemanse (2017), Christopher Gauker (2002) and Michael Reddy (1979).

By way of analogy, consider the notion of ‘inter-operability’ but applied to gross physical abilities. Two people of very different heights and weights can cooperate to move a piece of furniture around some tight corners together. They might even signal to each other along the way. For this to work, the signals need to bring about the right kinds of bodily effect, perhaps pushing at one end or raising the item in the air. But, beyond that, there is no need for conceptual (let alone phenomenal) alignment at all, apart from having a shared goal. Practical alignment is all that matters.

If we view language as a lever for practical coordination, the prospects for (something a bit like) telepathy start to look different. Instead of viewing telepathy as a potential means of communicating our inner thoughts and experiences, transferring them from one mind to another, we can think of telepathy in terms of new channels of causal influence: channels that might one day be exploited to coordinate joint actions. Existing brain-to-brain interfaces could play this role even if they are congenitally unable (due to the lack of sufficient conceptual alignment) to act as a kind of direct transmitter of the content of one person’s mental representations to another.

With this in mind, imagine now an alternative version of the sender-receiver setups used in Rao’s and Grau’s studies. Instead of instructing people to induce a particular mental state to communicate a predetermined meaning, there is simply a two-way brain-to-brain channel opened up between two or more individuals at a young age. The linked people then carry out various joint projects: they work on school assignments, move couches, fall in love. Might their brains learn to make use of the new channel to help them achieve their goals? This seems (to us, at least) to edge into more plausible territory. Something similar seems to occur when two people, or even a human and a pet, learn to pick up on body language as a clue to what the other is thinking or intending to do. There, too, a different channel – in this case, vision – with a different target (small bodily motions) conveys an extra layer of usable information – and one not easily replicated by other means.

Any new, initially purposeless, brain-to-brain channel could be variously configured, conveying traces recorded from different neural areas, or averaged across many such areas. It would be a matter of trial and error to discover what kinds of configuration work best, and for what purposes. But the goal of these new bridges would not be to bypass either person’s intentions (as in designs like Rao’s), so much as to enhance the basis upon which they each form and implement their intentions.

To our knowledge, this kind of experiment has never been performed on humans or any other animals. Something like it was imagined, though, by the neuro-philosopher Paul Churchland. In his book A Neurocomputational Perspective (1989), Churchland imagined a hockey team that trained and played with direct wireless brain-to-brain links in place. Such a team might benefit from the very rapid transfer of signals carrying information of many kinds. Perhaps, Churchland speculated, the players would learn ways of understanding each other that were far superior to those made possible by normal linguistic communication. This is because he regarded public language as a limited and impoverished means of communication – one whose job might be done much better by some form of direct brain-to-brain link. Our view, by contrast, is that the power both of public language and of any future brain-to-brain bridging lies in their ability to act as levers for joint action, while papering over differences in underlying representational spaces.

Importantly, there is reason to think that human brains possess the kinds of flexibility and plasticity required to make good use of new kinds of channel, and/or of channels carrying new kinds of information. A simple example is the NorthSense – a small silicon device that is attached to the chest and delivers a short vibration when the user is turned towards magnetic North. Users report quite rapidly starting to just ‘know’, moment by moment, their orientation relative to important distant places such as their home or their children’s school-gates. In this way, a constant drip-feed of new directional information is rapidly assimilated into the cognitive ecology of the wearer.
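
The decision such a device makes on each compass reading is almost trivially simple, which is part of the point: the interesting work is done by the brain that learns to live with the signal. Here is a minimal sketch of that decision (ours, not the device’s actual firmware).

```python
# A minimal sketch of a NorthSense-style rule: vibrate whenever the wearer's
# compass heading falls within a small tolerance of magnetic north.

def facing_north(heading_degrees: float, tolerance: float = 10.0) -> bool:
    """True if the heading is within `tolerance` degrees of magnetic north."""
    deviation = abs((heading_degrees + 180) % 360 - 180)
    return deviation <= tolerance

for heading in [3.0, 45.0, 182.0, 355.0]:
    action = "vibrate" if facing_north(heading) else "stay silent"
    print(f"heading {heading:5.1f} degrees -> {action}")
```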

Or consider sensory substitution technologies. A blind person’s cane delivers a stream of information that can be used to aid object-identification and localisation. But for a higher-bandwidth experience, a head-mounted camera linked to an electrode grid placed on the tongue can deliver patterns of electrical stimulation that bear information about the distance and shape of out-of-reach objects: information that can be used to drive object-recognition and apt action. There are also commercially available systems that deliver visual information using patterns of sound rather than touch – for example, devices such as EyeMusic. In all these cases, subjects attempt to perform various actions and, as they do so, the resulting video feeds are translated into touch, electrical stimulations, or sound. With time and practice, it is possible to learn the signature patterns characteristic of different encountered objects, distinguishing plants from statues, crosses from circles, and so on.
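
As a rough illustration of how the visual-to-auditory version can work, here is a simplified sketch in the spirit of such devices (not EyeMusic’s actual algorithm, and with made-up parameters): sweep across an image column by column, and let each bright pixel contribute a tone whose pitch encodes its height.

```python
import numpy as np

# A stripped-down visual-to-auditory sketch: scan a tiny black-and-white
# image left to right, turning each bright pixel into a tone. Time encodes
# horizontal position; pitch encodes vertical position. Real systems are
# far more sophisticated; this only shows the principle.

SAMPLE_RATE = 8000        # audio samples per second (illustrative value)
COLUMN_DURATION = 0.2     # seconds of sound per image column

def image_to_audio(image: np.ndarray) -> np.ndarray:
    """Convert a 2D array (1 = bright pixel, 0 = dark) into an audio signal."""
    n_rows, n_cols = image.shape
    t = np.linspace(0.0, COLUMN_DURATION,
                    int(SAMPLE_RATE * COLUMN_DURATION), endpoint=False)
    columns = []
    for col in range(n_cols):                          # time sweeps left to right
        chunk = np.zeros_like(t)
        for row in range(n_rows):
            if image[row, col]:
                freq = 400 + 100 * (n_rows - 1 - row)  # pixels nearer the top sound higher
                chunk += np.sin(2 * np.pi * freq * t)
        columns.append(chunk)
    return np.concatenate(columns)

# A small cross shape: its sound signature differs audibly from, say, a circle's.
cross = np.array([[0, 1, 1, 0],
                  [1, 1, 1, 1],
                  [1, 1, 1, 1],
                  [0, 1, 1, 0]])
audio = image_to_audio(cross)
print(audio.shape)   # (6400,): 0.8 seconds of sound encoding the shape
```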

Such technologies remain limited in their scope and require extensive training to master. But they are an important proof of principle, nonetheless. Human brains are plastic organs able to make use of information-bearing signals of many kinds. Our standard human repertoire of sensing may be simply the starter-pack for our eventual modes of contact, both with other people and with the wider world.

Viewed in this way, it may be productive to think about telepathy as less like learning a new language and more like learning a new motor skill, such as juggling, or perhaps something more sophisticated, like learning to dance or ride a trials bike. In such cases, the right kind of practice lets us do something radically new, expanding our usual repertoire in ways whose best uses might be discovered much later.

We have argued that the prospects for good old-fashioned telepathy are poor. GOFT requires our thoughts to have a common format, such that the thought of one person is understandable to another. The chances that such a format exists are remote. And trying to establish it by using natural language largely defeats the purpose of telepathy, turning it into little more than fancy texting.

But despite our pessimism regarding the direct transmission of thoughts or experiences, the prospect of adding new direct brain-to-brain channels is an exciting one. By providing multiple new channels of this kind, our plastic brains may be ‘let loose’ to discover new and potent ways to coordinate practical actions. Our current accomplishments in art, science and culture required the efficient coordination made possible by natural language. New brain-to-brain channels have the potential to augment those existing capabilities, turning us into super-cooperators, and transforming life and society in ways we cannot yet imagine.