The space between our heads

Brain-to-brain interfaces promise to bypass language. But do we really want access to one another’s unmediated thoughts?

by Mark Dingemanse

Photo by Richard Kalvar/Magnum Photos

In a nondescript building in Seattle, a man sits strapped to a chair with his right hand resting on a touchpad. Pressed against his skull is a large magnetic coil that can induce an electrical current in the brain, a technique known as transcranial magnetic stimulation. The coil is positioned in such a way that a pulse will result in a hand movement. A mile away in another building, another man looks at a screen while 64 electrodes in a shower cap record his brain activity using electro-encephalography. Rough activation patterns are fed back to the computer so that he, by concentrating, can move a dot a small distance on the screen. As he focuses, a simple signal derived from the brain activity is transmitted to the first building, where another computer tells the magnetic coil to deliver its pulse. The first man’s hand jolts upward, then falls down on the touchpad, where the input is registered as a move in a video game. Then a cannon is fired and a city is saved – by two bodies acting as one.
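
For the technically minded, the information flow just described can be sketched in a few lines of code. This is a minimal illustration under stated assumptions – a single thresholded EEG estimate reduced to one bit and relayed over an ordinary network socket – and the names, threshold and address are invented for the sketch, not taken from the Washington setup.

```python
# Minimal sketch of the pipeline described above (hypothetical names throughout):
# a rough EEG activation estimate is collapsed into a single yes/no decision,
# sent across town, and mapped directly onto a stimulation command.
import socket

FIRE_THRESHOLD = 0.8                           # assumed cut-off for sustained concentration
RECEIVER_ADDR = ("tms-lab.example.org", 9000)  # placeholder address of the receiver's computer


def decode_intent(eeg_activation: float) -> bool:
    """Reduce the sender's brain signal to one bit: fire the cannon or not."""
    return eeg_activation > FIRE_THRESHOLD


def transmit(fire: bool) -> None:
    """Send that single bit to the building a mile away."""
    with socket.create_connection(RECEIVER_ADDR) as conn:
        conn.sendall(b"1" if fire else b"0")


def on_receive(payload: bytes, trigger_tms_pulse) -> None:
    """On the receiving side the bit becomes a pulse: no consent step,
    no interpretation, just involuntary actuation of the hand."""
    if payload == b"1":
        trigger_tms_pulse()  # hypothetical callback into the TMS hardware
```

Note how little of the sender's mental life survives the trip: everything is squeezed into a single bit, and the receiver has no say in what happens when it arrives.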

As gameplay goes, the result might seem modest, but it has far-reaching implications for human interaction – at least if we believe the team of scientists at the University of Washington led by the computer scientist Rajesh Rao who ran this experiment. This is one of the first prototypes of brain-to-brain interfaces in humans. From the sender’s motionless concentration to the receiver’s involuntary twitch, they form a single distributed system, connected by wires instead of words. ‘Can information that is available in the brain be transferred directly in the form of the neural code, bypassing language altogether?’ the scientists wondered in writing up the results. A Barcelona team reached a similar result with people as far apart as India and France. With a gush of anticipation, they exclaim: ‘There is now the possibility of a new era in which brains will dialogue in a more direct way.’

The popular media has been quick to jump on the bandwagon as the prototypes make global headlines. Big Think declared brain-to-brain interfaces ‘the next great leap in human communication’. The tech entrepreneur Elon Musk speculated about how a neural prosthetic to be made by one of his own companies might ‘solve the data rate issue’ of human communication. The idea is that, given high-bandwidth physical connectivity, language will simply become obsolete. Will we finally be able to escape the tyranny of words and enjoy the instant sharing of ideas?

Let’s face it: we’ve all had second thoughts about language. Hardly a day goes by when we don’t stumble over words, stagger into misunderstandings, or struggle with a double negative. It’s a frightfully cumbersome way to express ourselves. If language is such a slippery medium, perhaps it is time to replace it with something more dependable. Why not cut out the middleman and connect brains directly? The idea is not new. As the American physicist and Nobel laureate Murray Gell-Mann mused in The Quark and the Jaguar (1994): ‘Thoughts and feelings would be completely shared, with none of the selectivity or deception that language permits.’

It is useful to examine this view of language carefully, for it is quite alluring. Rao and his team complain about how hard it can be to verbalise feelings or forms of knowledge even if they are introspectively available. On Twitter, Musk has described words as ‘a very lossy compression of thought’. How frustrating to have such a rich mental life and be stuck with such poor resources for expressing it! But no matter how much we can sympathise with this view, it misses a few crucial insights about language. First, words are tools. They can be misplaced or misused like any tool, but they are often useful for what they’ve been designed to do: help us say just what we want to say, and no more. When we choose our words carefully, it is because we know that there is a difference between private worlds and public words. There had better be, since social life depends on it.

Second, and more subtly, this view sees language as merely a channel for information: just as the speaking tube has made way for the telephone, so language can be done away with if we connect brains directly. This overlooks that language is also an infrastructure for social action. Think of everyday conversations, in which we riff on a theme, recruit others to do stuff, relate to those around us. We don’t just spout information indiscriminately; we apportion our words in conversational turns and build on each other’s contributions. Language in everyday use is less like a channel and more like a tango: a fluid interplay of moves in which people can act as one, yet also retain their individuality. In social interaction there is room, by design, for consent and dissent.

The contrast with current concepts of brain-to-brain interfaces couldn’t be greater. A transcranial magnetic pulse leaves no room for doubt, but none for deliberation either. Its effect is as immediate as it is involuntary. We can admire the sheer efficiency of this form of interaction, but we also have to admit that something is lost. A sense of agency and autonomy; and along with that, perhaps even a sense of self. Nor does this problem go away merely by upgrading bandwidth, as is Musk’s ambition for Neuralink, his implantable brain-computer interface. The very possibility of social (as opposed to merely symbiotic) life depends on there being some separation of private worlds, along with powers to interact on our own terms. In other words, we need something like language in order to be human.

When we directly connect one individual’s mental life to that of another, individual agency might slip through our fingers. Biology offers plenty of examples. Take the fascinating slime mould Physarum polycephalum, which is essentially a bag of cytoplasm holding millions of individual nuclei, the result of a mass merger of amoebae. Moving and sensing in unison, the slime mould can crawl towards light, find food in mazes, and even mimic the design of urban metro networks. The price for this perfect symbiosis is a complete loss of autonomy for individual elements. The real challenge for brain-to-brain interfaces is not to achieve some interlinking of brains. It is to harness technology in a way that doesn’t reduce people to the level of amoebae fused to a slime mould.

If I ask you to help me move a large sofa, I effectively recruit you as an ‘instrument’ for carrying out a joint action. If you agree, I can give you directions – move this way; no, up here; perfect – and together we accomplish something neither of us could do alone. Now think about what this means for agency. For the duration of our project, you agree to give up a bit of your personal agency, and together we become what the British philosopher Margaret Gilbert has called a plural subject: a larger social unit of agency. Whereas joint agency in the brain-to-brain prototypes (or indeed the slime mould) comes about by brute-force physical means, here it is negotiated using language.

One amazing thing about language is the sheer fluidity with which it allows us to manage such everyday episodes of joining forces and parting ways. It is literally the most versatile brain-to-brain interface we have: a nimble, negotiable system that enables people with separate bodies to achieve joint agency without giving up behavioural flexibility and social accountability. So before we throw out language because of its supposedly low data rate, let’s look a bit more closely at the ways in which it helps us calibrate minds, coordinate bodies and distribute agency. There are two features of language that make it especially useful in human interaction: selection and negotiation.

Selection is the power we have to select what to keep private and what to make public by putting it into words. Sharing and withholding information are among the most important ways in which we manage our social relationships. Nobody in their right mind wants to blurt out every fleeting thought and feeling. Society as we know it largely depends on the fact that some things are better left unsaid. So language gives us control over what we share with whom, and whether we share something in the first place. Take a simple question such as ‘How are you?’ The words you select in response to it have more to do with social relations than with information: this is how you distinguish between the delivery driver and your best friend. This is the power of selection in action.

If we weren’t judiciously wielding the power of selection throughout the day, awkwardness would accumulate and social life would be sent into gridlock. To avoid this, we have, as a society, tacitly agreed to limit self-disclosure. In the words of the American sociologist Harvey Sacks, ‘everyone has to lie’. This is not deception of the kind that Gell-Mann deplored; instead, it is a necessary form of economy with the truth. Those who don’t heed this – the blabbermouths and oversharers – tend to pay for it socially (and sometimes also economically, if they are, say, a CEO on Twitter). On balance, selectivity is a small price to pay for the possibility of a normal social life.

Selection is quite powerful in another way. Language has a fractal quality: things can be described in myriad compatible yet nonequivalent ways. Think back to helping me move the sofa at my place. If we know each other, ‘my place’ is all you need to know, and you would be puzzled if I transmitted GPS coordinates, even if those might provide a more objective and explicit localisation. My selecting one formulation over others neatly accomplishes two things at once: providing relevant information and indexing our social relation. Describing this as lossy compression misses the point. It is more like distributed computation: the power of language is that we can be frugal with words when possible and explicit when necessary.

The second key power that language gives us is negotiation, in the sense of working together towards mutual understanding. We take turns at talking and this provides every participant in a conversation with systematic opportunities for consent and dissent. We can use our turns at talk for the normal business of conversation, such as answering a question, telling a story or recruiting assistance and collaboration. We can also use our turns to do metacommunicative work, for instance to signal that we are on the same page or to ask for clarification. Often these metacommunicative signals – such as ‘m-hm’ or ‘huh?’ – are rather minimal and unobtrusive: they seem to be optimally adapted to the task of streamlining conversations.

But isn’t asking for clarification exactly the kind of annoyance that brain-to-brain interfaces could help us get rid of? Especially since it happens so often: our best current estimate puts it at roughly once every 1.4 minutes in informal conversations around the world. Wouldn’t it be much more efficient if we immediately understood each other’s wants and needs, instead of getting into knots about what we mean? Here we reach a fork in the road. If you find yourself wanting to eradicate all those pesky misunderstandings once and for all, your view of meaning is primarily individualist. You feel that you’ve made up your mind and simply want the other to get it. If only you could just beam the message across without ambiguities.

There is, though, another way of thinking about meaning in interaction. It is less single-minded and more dialogical. It holds that, often, we figure out our wants and needs in interaction with others – that what we ‘really’ mean becomes clear to us only when we work it out together. There is a fitting Zulu saying here: Umuntu ngumuntu ngabantu, or ‘A person is a person through other people.’ This captures a subtle balance between individual and community. We are never sole individuals; nor are we mere cells in a slime mould. We are social beings first and foremost.

Work on conversational clarification shows that low-level perceptual problems – of the kind that might be targeted by enhanced communication channels – are relatively rare. Often, we ask for clarification not so much because we didn’t hear or understand, but to make up our mind, buy ourselves some time, or give the other a chance to reformulate. Even if things don’t work out, we can agree to disagree, preventing a loss of face for us both. The metacommunicative signals that litter our conversations are not symptoms of a superseded technology. They are scaffolds: signs that help us think and talk, interactional tools that help us come to terms. All this means that communication is never a one-shot affair, and that there are always opportunities to hold each other to account, reconsider our positions, and renegotiate our commitments.

People have separate bodies. While brain-to-brain interfaces could somewhat dilute this separateness, language has long bridged it. Never merely individuals, we are always part of a dazzling range of social units, some fleeting and diffuse (like the unit of ‘readers of this essay’) and others stronger and more durable (like close friends and family). Language is the main tool by which we navigate this mosaic of social relations, constantly switching frames between ‘me’ and the many different senses of ‘us’. Seen in this light, selection and negotiation are not bugs, but features. Thanks to them, we can manage what we share with whom, and we can join forces in larger social units without indefinitely relinquishing individual agency.

We don’t need experimental prototypes to see what happens when the powers of selection and negotiation are diminished or taken away. In George Orwell’s novel Nineteen Eighty-Four (1949), language is stripped down to eliminate ambiguity: a laudable goal on the face of it, but one that has the uncanny side-effect of dramatically reducing opportunities for dissent. Likewise, the hallmark of religious indoctrination is that it leaves no wiggle room in interpreting the words of holy books or great leaders. Sometimes precision gained is freedom lost.

Science-fiction scenarios contemplating brain-to-brain coupling press home some of the ethical issues in a very clear way. An early episode of Star Trek introduced the mind meld, in which Vulcans coercively share thoughts and experiences through mere physical contact. Later seasons introduced the Borg collective, an ever-expanding hive mind that grows through forced assimilation. In William Gibson’s novel Neuromancer (1984), the lure of cyberspace creates a sprawling market for experimental neurosurgery, while jacking into the matrix risks exposing one’s mental life to malicious hackers. Surprisingly, most current brain-to-brain interface prototypes follow the script: senders don’t get to choose which aspects of their brain activity are transmitted, and receivers have no freedom to deliberate over incoming signals. This is more like physical manipulation than free communication. It is interaction stripped of any possibilities for selection and negotiation.

What unites violations of mental and bodily integrity is that crucial aspects of individual agency are taken away. It’s no coincidence that we describe them as ‘dehumanising’ and ‘inhumane’. Language is what makes us human. It is not merely a conduit for information; it is also our way of organising social agency. We might try exchanging it for a high-bandwidth physical connection to optimise the flow of some types of information, but we would do so at the tremendous cost of throwing away the very infrastructure that makes human sociality possible.

But there is hope. The agency-distributing powers of interactive language are largely independent of its modality: they are certainly not limited to the spoken, face-to-face version of language that happens to be its most prevalent form today. Writing, for one, shows that at least some aspects of language can be reduced to a visual code, though it has long come with a loss of immediacy and interactivity that is only now being remedied. A better example is the remarkable diversity of sign languages used by deaf communities across the globe: impressive proof that the full richness and complexity of interactive language can be realised without a single sound.

Meanwhile, the Washington scientists are already building more refined brain-to-brain interfaces. A recent prototype called BrainNet introduces a simple form of negotiation, essentially by building in an additional loop that allows subjects called ‘Senders’ to see the effect of a Receiver’s choice and, if necessary, re-send a signal. This is directly equivalent to what conversation analysts call ‘third position repair’: a redoing produced by a Sender when a Receiver’s second move reveals a misunderstanding of the Sender’s first move. (The similarity points to rich opportunities for interdisciplinary collaboration in the area of human interaction.)

Adding a feedback opportunity like that seems a simple design choice, but it radically changes the nature of the system. By opening up the Receiver’s actions to semi-public scrutiny, it enables a rudimentary form of negotiation and so points towards exactly the kind of back-and-forth that makes natural languages so flexible and error-robust. As input-output systems become more versatile and allow higher data rates, we can expect similar advances on the selection front, allowing us to select a wider range of signals than just binary choices or cursor movements. Of course, this wider range of choices inevitably also implies more degrees of freedom in interpretation, more room for ambiguity, and a greater need for quick ways to calibrate understandings. Which brings us full circle to something like language: a sophisticated intermediary that combines the powers of selection and negotiation. Soon enough we would rediscover the uses of ambiguity, and the joy of finishing each other’s sentences.
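
A rough sketch of that repair loop, in code, might look like the following. It is an illustration only, assuming a one-bit signal and invented helper functions; none of it is drawn from the BrainNet implementation itself.

```python
# A toy model of the feedback loop described above: the Sender transmits a
# one-bit choice, observes what the Receiver actually does, and re-sends when
# the observed action reveals a misunderstanding, a crude analogue of third
# position repair. All functions here are hypothetical stand-ins.
import random


def repair_loop(intended: bool, send, observe_receiver, max_rounds: int = 3) -> bool:
    """Keep redoing the Sender's move until the Receiver's action matches it."""
    for _ in range(max_rounds):
        send(intended)                # first position: the Sender's move
        acted = observe_receiver()    # second position: the Receiver's response
        if acted == intended:
            return True               # mutual understanding reached
        # third position: the mismatch is now visible, so the Sender redoes the move
    return False


# Stand-in channel for illustration: transmission flips the bit 30% of the time.
_channel = {"last": None}

def noisy_send(bit: bool) -> None:
    _channel["last"] = bit if random.random() > 0.3 else (not bit)

def observe() -> bool:
    return _channel["last"]

print(repair_loop(True, noisy_send, observe))
```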

The conclusion is as paradoxical as it is optimistic. When we refine brain-to-brain interfaces to increase their potential for collaboration, we find language – the very thing we were trying to bypass – slipping through the back door. It is likely, then, that some language-like system for communication and coordination will emerge even on the substrate of brain-to-brain interfaces. Whatever the precise form or modality, truly humane brain-to-brain interfaces will need to give us two things: selection (control over the relation between private worlds and public words) and negotiation (systematic opportunities for calibrating mutual understanding).

Douglas Adams’s The Hitchhiker’s Guide to the Galaxy (1979) records the case of the Belcerebon people of Kakrafoon, on whom a galactic tribunal inflicted ‘that most cruel of all social diseases, telepathy’. It was a punishment with unforeseen consequences:

in order to prevent themselves broadcasting every slightest thought that crosses their minds … they now have to talk very loudly and continuously about the weather, their little aches and pains, the match this afternoon, and what a noisy place Kakrafoon has suddenly become.

In seeking to bypass language, current conceptions of brain-to-brain interfaces seem to be on their way to replicating the fate of the Belcerebon people. In contrast, the human condition is enabled by a flexible communication system that saves us from the uninhibited sharing of private processes while still helping us to collaborate in ways unmatched elsewhere in the animal kingdom. Language is a filter between the private and the public, and an infrastructure for negotiating consent and dissent. As research into brain-to-brain interfaces matures, let’s make sure to incorporate the powers of selection and negotiation, so as to extend human agency in meaningful ways.

Portions of this essay are revised from the author’s chapter in Distributed Agency (2017), edited by N J Enfield and Paul Kockelman.