I type, therefore I am

More human beings can write and type their every thought than ever before. Something to celebrate or deplore?

by Tom Chatfield

Photo by Reuters/Chris Helgren

At some point in the past two million years, give or take half a million, the genus of great apes that would become modern humans crossed a unique threshold. Across unknowable reaches of time, they developed a communication system able to describe not only the world, but the inner lives of its speakers. They ascended — or fell, depending on your preferred metaphor — into language.

The vast bulk of that story is silence. Indeed, darkness and silence are the defining norms of human history. The earliest known writing probably emerged in southern Mesopotamia around 5,000 years ago but, for most of recorded history, reading and writing remained among the most elite human activities: the province of monarchs, priests and nobles who reserved for themselves the privilege of lasting words.

Mass literacy is a phenomenon of the past few centuries, and one that has reached the majority of the world’s adult population only within the past 75 years. In 1950, UNESCO estimated that 44 per cent of the people in the world aged 15 and over were illiterate; by 2012, that proportion had fallen to just 16 per cent, despite the trebling of the global population between those dates. Yet, while the full effects of this revolution continue to unfold, we find ourselves in the throes of another whose statistics are accelerating faster still.

In the past few decades, more than six billion mobile phones and two billion internet-connected computers have come into the world. As a result of this, for the first time ever we live not only in an era of mass literacy, but also — thanks to the act of typing onto screens — in one of mass participation in written culture.

As a medium, electronic screens possess infinite capacities and instant interconnections, turning words into a new kind of active agent in the world. The 21st century is a truly hypertextual arena (hyper from ancient Greek meaning ‘over, beyond, overmuch, above measure’). Digital words are interconnected by active links, as they never have been and never could be on the physical page. They are, however, also above measure in their supply, their distribution, and in the stories that they tell.

Just look at the ways in which most of us, every day, use computers, mobile phones, websites, email and social networks. Vast volumes of mixed media surround us, from music to games and videos. Yet almost all of our online actions still begin and end with writing: text messages, status updates, typed search queries, comments and responses, screens packed with verbal exchanges and, underpinning it all, countless billions of words.

This sheer quantity is in itself something new. All future histories of modern language will be written from a position of explicit and overwhelming information — a story not of darkness and silence but of data, and of the verbal outpourings of billions of lives. Where once words were written by the literate few on behalf of the many, now every phone and computer user is an author of some kind. And — separated from human voices — the tasks to which typed language, or visual language, is being put are steadily multiplying.

Consider the story of one of the information age’s minor icons, the emoticon. In 1982, at Carnegie Mellon University, a group of researchers were using an online bulletin board to discuss the hypothetical fate of a drop of mercury left on the floor of an elevator if its cable snapped. The scenario prompted a humorous response from one participant — ‘WARNING! Because of a recent physics experiment, the leftmost elevator has been contaminated with mercury. There is also some slight fire damage’ — followed by a note from someone else that, to a casual reader who hadn’t been following the thread, this comment might seem alarming (‘yelling fire in a crowded theatre is bad news… so are jokes on day-old comments’).

Participants thus began to suggest symbols that could be added to a post intended as a joke, ranging from per cent signs to ampersands and hashtags. The clear winner came from the computer scientist Scott Fahlman, who proposed a smiley face drawn with three punctuation marks to denote a joke :-). Fahlman also typed a matching sad face :-( to suggest seriousness, accompanied by the prophetic note that ‘it is probably more economical to mark things that are NOT jokes, given current trends’.

Within months, dozens of smiley variants were creeping across the early internet: a kind of proto-virality that has led some to label emoticons the ‘first online meme’. What Fahlman and his colleagues had also enshrined was a central fact of online communication: in an interactive medium, consequences rebound and multiply in unforeseen ways, while miscommunication will often become the rule rather than the exception.

Three decades later, we’re faced with the logical conclusion of this trend: an appeal at the High Court in London last year against the conviction of a man for sending a ‘message of menacing character’ on Twitter. In January 2010, Paul Chambers, 28, had tweeted his frustration at the closure of an airport near Doncaster due to snow: ‘Crap! Robin Hood Airport is closed. You’ve got a week and a bit to get your shit together, otherwise I’m blowing the airport sky high!!’

Chambers had said he never thought anyone would take his ‘silly joke’ seriously. And in his judgment on the ‘Twitter joke trial’, the Lord Chief Justice said that — despite the omission of a smiley emoticon — the tweet in question did not constitute a credible threat: ‘although it purports to address “you”, meaning those responsible for the airport, it was not sent to anyone at the airport or anyone responsible for airport security… the language and punctuation are inconsistent with the writer intending it to be or to be taken as a serious warning’.

The verdict was widely hailed as a ‘victory for common sense’ by supporters of the charged man, among them the comedians Stephen Fry and Al Murray. As the judge also noted, Twitter itself represents ‘no more and no less than conversation without speech’: an interaction as spontaneous and layered with contingent meanings as face-to-face communication, but possessing the permanence of writing and the reach of broadcasting.

It’s an observation that speaks to a central contemporary fact. Our screens are in increasingly direct competition with spoken words themselves — and with traditional conceptions of our relationship with language. Who would have thought, 30 years ago, that a text message of 160 characters or fewer, sent between mobile phones, would become one of the defining communications technologies of the early 21st century; or that one of its natural successors would be a tweet some 20 characters shorter?

Yet this bare textual minimum has proved to be the perfect match to an age of information suffusion: a manageable space that conceals as much as it reveals. Small wonder that the average American teenager now sends and receives around 3,000 text messages a month — or that, as the MIT professor Sherry Turkle reports in her book Alone Together (2011), crafting the perfect kind of flirtatious message is so serious a skill that some teens will outsource it to the most eloquent of their peers.

It’s not just texting, of course. In Asia, so-called ‘chat apps’ are re-enacting many millions of times each day the kind of exchanges that began on bulletin boards in the 1980s, complete not only with animated emoticons but with integrated access to games, online marketplaces, and even video calls. Phone calls, though, are a degree of self-exposure too much for most everyday communications. According to the article ‘On the Death of the Phone Call’ by Clive Thompson, published in Wired magazine in 2010, ‘the average number of mobile phone calls we make is dropping every year… And our calls are getting shorter: in 2005 they averaged three minutes in length; now they’re almost half that.’ Safe behind our screens, we let type do our talking for us — and leave others to conjure our lives by reading between the lines.

Yet written communication doesn’t necessarily mean safer communication. All interactions, be they spoken or written, are to some degree performative: a negotiation of roles and references. Onscreen words are a special species of self-presentation — a form of storytelling in which the very idea of ‘us’ is a fiction crafted letter by letter. Such are our linguistic gifts that a few sentences can conjure the story of a life: a status update, an email, a few text messages. Almost without our noticing, we weave worlds from these snapshots, until an illusion of unbroken narrative emerges from a handful of paragraphs.

Behind this illusion lurks another layer of belief: that we can control these second selves. Yet, ironically, control is one of the first things our eloquence sacrifices. As authors and politicians have long known, the afterlife of our words belongs to the world — and what it chooses to make of them has little to do with our own assumptions.

In many ways, mass articulacy is a crisis of originality. Something always implicit has become ever more starkly explicit: that words and ideas do not belong only to us, but play out within larger currents of human feeling. There is no such thing as a private language. We speak in order to be heard, we write in order to be read. But words also speak through us and, sometimes, are as much a dissolution as an assertion of our identity.

In his essay ‘Writing: or, the Pattern Between People’ (1932), W H Auden touched on the paradoxical relationship between the flow of written words and their ability to satisfy those using them:

Since the underlying reason for writing is to bridge the gulf between one person and another, as the sense of loneliness increases, more and more books are written by more and more people, most of them with little or no talent. Forests are cut down, rivers of ink absorbed, but the lust to write is still unsatisfied.

Onscreen, today’s torrents of pixels exceed anything Auden could have imagined. Yet the hyper-verbal loneliness he evoked feels peculiarly contemporary. Increasingly, we interweave our actions and our rolling digital accounts of ourselves: curators and narrators of our life stories, with a matching move from internal to external monologue. It’s a realm of elaborate shows in which status is hugely significant — and one in which articulacy itself risks turning into a game, with attention and impact (retweets, likes) held up as the supreme virtues of self-expression.

Consider the particular phenomenon known as binary or ‘reversible language’ that now proliferates online. It might sound obscure, but the pairings it entails are central to most modern metrics of measured attention, influence and interconnection: to ‘like’ and ‘unlike’; to ‘favourite’ and ‘unfavourite’; to ‘follow’ and ‘unfollow’; to ‘friend’ and ‘unfriend’; or simply to ‘click’ or ‘unclick’ the onscreen boxes enabling all of the above.

Like the systems of organisation underpinning it, such language promises a clean and quantifiable recasting of self-expression and relationships. At every stage, both you and your audience have precise access to a measure of reception: the number of likes a link has received, the number of followers endorsing a tweeter, the items ticked or unticked to populate your profile with a galaxy of preferences.

What’s on offer is a kind of perpetual present, in which everything can always be exactly the way you want it to be (provided you feel one of two ways). Everything can be undone instantly and effortlessly, then done again at will, while the machinery itself can be shut down, logged off or ignored. Like the author oscillating between Ctrl-Y (redo) and Ctrl-Z (undo) on a keyboard, a hundred indecisions, visions and revisions are permitted — if desired — and all will remain unseen. There is no need, ever, for any conversation to end.

Even the most ephemeral online act leaves its mark. Data only accumulates. Little that is online is ever forgotten or erased, while the business of search and social recommendation funnels our words into a perpetual popularity contest. Every act of selection and interconnection is another reinforcement. If you can’t find something online, it’s often because you lack the right words. And there’s a deliciously circular logic to all this, whereby what’s ‘right’ means only what displays the best search results — just as what you yourself are ‘like’ is defined by the boxes you’ve ticked. It’s a grand game with the most glittering prizes of all at stake: connection, recognition, self-expression, discovery. The internet’s countless servers and services are the perfect riposte to history: an eternally unfinished collaboration, pooling the words of many millions; a final refuge from darkness.

There’s much to celebrate in this profligate democracy, and its overthrow of articulate monopolies. The self-dramatising ingenuity behind even three letters such as ‘LOL’ is a testament to our capacity for making the most constricted verbal arenas our own, while to watch events unfold through the fractal lens of social media is a unique contemporary privilege. Ours is the first epoch of the articulate crowd, the smart mob: of words and deeds fused into ceaseless feedback.

Yet language is a bewitchment that can overturn itself — and can, like all our creations, convince us there is nothing beyond it. In an era when the gulf between words and world has never been easier to overlook, it’s essential to keep alive a sense of ourselves as distinct from the cascade of self-expression; to push back against the torrents of articulacy flowing past and through us.

For the philosopher John Gray, writing in The Silence of Animals (2013), the struggle with words and meanings is sometimes simply a distraction:

Philosophers will say that humans can never be silent because the mind is made of words. For these half-witted logicians, silence is no more than a word. To overcome language by means of language is obviously impossible. Turning within, you will find only words and images that are parts of yourself. But if you turn outside yourself — to the birds and animals and the quickly changing places where they live — you may hear something beyond words.

Gray’s dismissal of ‘half-witted logicians’ might be a sober tonic, yet it’s something I find extraordinarily hopeful — an exit from the despairing circularity that expects our creations either to damn or to save us. If we cannot speak ourselves into being, we cannot speak ourselves out of being either. We are, in another fine philosophical phrase, condemned to be free. And this freedom is not contingent on eloquence, no matter how desperately we might wish that words alone could negotiate the world on our behalf.