Today, depending on your favoured futurist prophet, a kind of digital Elysium awaits us all. Over millennia, we have managed to unshackle ourselves from the burdens of time and space — from heat, cold, hunger, thirst, physical distance, mechanical effort — along a trajectory seemingly aimed at abstraction. Humanity’s collective consciousness is to be uploaded into the super-Matrix of the near future — or augmented into cyborg immortality, or out-evolved by self-aware machine minds. Whatever happens, the very meat of our physical being is to be left behind.
Except, of course, so far we remain thoroughly embodied. Flesh and blood. There is just us, slumped in our chairs, at our desks, inside our cars, stroking our smartphones and tablets. Peel back the layers of illusion, and what remains is not a brain in a jar — however much we might fear or hunger for this — but a brain within a body, as remorselessly obedient to that body’s urges and limitations as any paleolithic hunter-gatherer.
It’s a point that has been emphasised by much recent research into thought and behaviour. To quote from Thinking, Fast and Slow (2011) by Nobel laureate Daniel Kahneman, ‘cognition is embodied; you think with your body, not only with your brain’. Yet when it comes to culture’s cutting edge, there remains an overwhelming tendency to treat embodiment not as a central condition of being human that our tools ought to serve, but rather as an inconvenience to be eliminated.
One of my favourite accounts of our genius for unreality is a passage from the David Foster Wallace essay ‘E Unibus Pluram: Television and US Fiction’ (1990), in which he describes, with escalating incredulity, the layers of illusion involved in watching television.
First comes the artifice of performance. ‘Illusion (1) is that we’re voyeurs here at all,’ he writes, ‘the “voyees” behind the screen’s glass are only pretending ignorance. They know perfectly well we’re out there.’ Then there’s the capturing of these performances, ‘the second layer of glass, the lenses and monitors via which technicians and arrangers apply ingenuity to hurl the visible images at us’. And then there are the nestled layers of artificiality involved in scripting, devising and selling the scenarios to be filmed, which aren’t ‘people in real situations that do or even could go on without consciousness of Audience’.
After this comes the actual screen that we’re looking at: not what it appears to show, but its physical reality in ‘analog waves and ionised streams and rear-screen chemical reactions throwing off phosphenes in grids of dots not much more lifelike than Seurat’s own impressionist “statements” on perceptual illusion’.
But even this is only the warm-up. Because — ‘Good lord,’ he exclaims in climax — ‘the dots are coming out of our furniture, all we’re really spying on is our furniture; and our very own chairs and lamps and bookspines sit visible but unseen at our gaze’s frame…’
There’s a certain awe at our capacity for self-deception, here — if ‘deception’ is the right word for the chosen, crafted unrealities in play. But Foster Wallace’s ‘good lord’ is also a cry of awakening into uncomfortable truth.
It reminds me of the scene in the film The Matrix (1999) in which Neo has to decide between taking the blue pill that will preserve his illusions, and the red pill that will reveal what his world actually looks like. He swallows the red pill, gulps a glass of water, and is led into another room. Nothing happens, until he reaches out to touch a mirror. Its surface shivers, sticks to his hand, then begins to flow over his skin like liquid cement, rising along his arm and down his throat. Choking, he screams — and wakes up somewhere else, naked, bald, gasping for air inside a cocoon filled with fluid.
It’s the perfect contemporary depiction of an atavistic fear: that the world around us is a lie. However, The Matrix is also a suitably ambivalent fable for modern times — because its lies aren’t supernatural tricks, but the apotheosis of human ingenuity. And the problem isn’t so much illusion itself as who’s in charge. The baddies here are the evil machines. But so long as we’re the ones running the show, it’s sunglasses, guns, and anti-gravity kung fu all the way, which is an infinitely more enticing destiny than unenhanced actuality.
What the red pill promises isn’t actually the real world at all. It’s the Matrix as it ought to be, knowingly bent to serve our desires: a dream of omnipotence through disembodiment.
In a 2012 essay, under the delightful title ‘Arsebestos’, the American science fiction author Neal Stephenson rails against one particular aspect of contemporary contempt for the body: laziness. ‘Ergonomic swivel chairs,’ the essay argues, ‘are the next asbestos’. That is, our sedentary screen-staring habits are as great a lurking hazard for the 21st century as asbestos was for the 20th. The point, for Stephenson, is simple — ‘the reaper comes first for those who sit’ — as is the path that brought us here. ‘First, we all bought in to the idea that a normal job involved sitting in a chair, and then we found ourselves imprisoned by our own furniture…’
Once again, furniture is the foe. Equipped with increasingly smart digital systems, we now perform an entirely on-screen, virtual version of many hundreds of daily acts that used to take us out of our chairs and around the house, office or neighbourhood:
It used to be that reading the mail required walking to the mailbox, slicing open envelopes, and other small but real physical exertions. Now we do it by twitching our fingers. Similar remarks could be made about talking on the phone (now replaced by Skype), filing or throwing away documents (now a matter of dragging icons around or, if that’s too strenuous, using command-key combinations), watching television (YouTube), and meeting with co-workers (videoconferencing).
Stephenson, who today does most of his work strolling at a steady pace on a treadmill desk, is making a point about the act of sitting itself: that too much of it is harmful and that, in an age of ever-more-nimble computing, it’s absurd for us to sit around all day staring at screens. Leaving aside the irony of an author known for his pioneering depictions of virtual worlds acting as a light-exercise guru, it’s sensible advice. For me, though, this is also a point about how we conceive of the relationship between ourselves and our tools.
We think, feel and work better when we’re at least a little mobile; we have better blood chemistry and concentration; we’re more creative and energetic, not to mention less prone to all manner of malaise. Why, then, is sedentary ease quite so attractive — even addictive? The answer lies in the vast, interlocked systems and assumptions of which our furniture is but the visible tip.
At the start of the 1990s, screens — whether televisions or computers, deployed for work or leisure — were bulky, static objects. For those on the move and lucky enough to employ a weight-lifting personal assistant, a Macintosh ‘portable’ cost $6,500 and weighed 7.2 kilos (close to 16 lbs). For everyone else, computing was a crude, solitary domain, inaccessible to anyone other than aficionados.
Today, just two decades on from Foster Wallace’s ‘E Unibus Pluram’, we inhabit an age of extraordinary intimacy with screen-based technologies. As well as our home and office computers, and the 40-inch-plus glories of our living room screens, few of us are now without the tactile, constant presence of at least one smart device in our pocket or bag.
These are tools that can feel more like extensions of ourselves than separate devices: the first thing we touch when we wake up in the morning, the last thing we touch before going to bed at night. Yet what they offer is a curious kind of intimacy — and the ‘us’ to which all this is addressed doesn’t often look or feel much like a living, breathing human being.
Instead, we are metaphorically dismembered by our tools: regarded by the sites and services we visit as ‘eyeballs’, as tapping and touching fingertips on keyboards and screens, as attention spans to be harnessed and data-rich profiles to be harvested. So far as most screens are concerned, we exist only in order to be transfixed by their gaze.
It’s as if we’ve mistaken a particular, contingent set of historical circumstances — that screens used to be extremely heavy, and the only way to use them was to sit down for an extended period of time — for a truth about human nature. Most of us work at desks in offices that wouldn’t look too strange to 18th-century clerks, and spend our leisure gazing at vast wall-mounted monitors while cradling second screens in the palms of our hands.
And it would be amusing if it weren’t so insidious: in public places, at work in a room full of colleagues, in our homes, our favourite activity remains hanging out with furniture.
There are, of course, those who seem to be trying to set us free from the shackles that make us, or encourage us to be, so indolent. Take one of the most futuristic pieces of kit to hit the headlines in recent years: ‘Google Glass’, which contains a camera, microphone, internet connection, head-up display and touchpad — all housed within a miraculously sleek pair of spectacles. The launch event last year was a frenzy of hyper-kinetic bodily endeavour, with skydivers, abseilers and stunt BMX riders streaming the evidence of this awesomeness live from their own faces.
The very idea of the screen, here, has shifted from something you look at to something you look through — a digital veil overlaid on the world like a kind of auxiliary consciousness. This is the cyborg dream at its most imminently available: Google Glass (essentially digital eyewear) might be on sale by the end of this year. Could it mark an escape from the tyranny of furniture into a future of strolling productivity?
Yet it’s also a hyper-reality that isn’t half as human-centric as it might at first appear. Consider Google’s cheery demo video of what wearable computing might be able to do for me. Accompanied by an aspirational soft-rock soundtrack, I stretch my arms, yawn, and browse a plethora of icons corresponding to online services in the middle of my field of vision. I make myself some coffee, check the time and my diary via another few icons, then float the weather forecast into view while looking out of the window. Via another pop-up, a friend asks if I fancy meeting up; I dictate a reply and head out. Handily, as I approach the subway, my glasses tell me it isn’t working and plot out a walking route instead, complete with real-time map and sequential directions.
And so on. There’s a great deal of emphasis on how my information-poor perceptions might be enhanced by integration with the internet — and how all manner of errors and inefficiencies will be ironed out along the way. Yet there’s little sense of how my ability to think my own thoughts, explore my own feelings or enjoy my own space will be similarly served, enhanced or encouraged. What’s on offer is, effectively, a smartphone strapped to my face.
This is all very well if my aim is to become a more effective operator of technological systems. However, if computing itself isn’t the primary objective — if I’m more interested in fomenting ideas and memories than in broadcasting a video of my daily exploits — the notion of wearable computing suddenly starts to seem, in this incarnation at least, not so much an escape from the desk and the sofa as an intensification of all that they represent.
In fact, there’s a surprising amount of common ground between the visions of progress represented by ergonomic office chairs and by Google Glass. In each case, the focus is not on people as such, but ‘people’ as incarnated within certain kinds of digital system: data points within a vast grid whose every need can be anticipated and answered by more precisely targeted information.
Distance, difference, fleshy frailties: all these are to be erased, while actuality itself is useful only as grist to the mill of content-generation and sharing (video, photos, audio, status updates!). Similarly, rather than you — your whole, embodied being — what the world really cares about is ‘you’ as represented by your avatar, profile, inbox, image, account, uploads, shares, likes, dislikes, group memberships, search history, purchases, orders and subscriptions.
This is the deal. No matter where you are, whom you’re with, or what you’re doing, it only counts if the system itself is counting.
I was born in 1980, meaning I missed out on many of the opportunities afforded to subsequent generations of shy, tech-savvy teens. Compelled to rely on a parental landline and face-to-face awkwardness for communication with the opposite sex, my first attempt at asking someone out for a date ended sufficiently badly for me to spend the years 1994 to 1996, inclusive, in a near-monastic state. I would have given a great deal for the opportunity to type my way into others’ affections, or simply to browse the social world from a safe distance.
What I longed for was something that I could understand. Other people were messy, strange creatures, who played games (with rules that they didn’t bother to explain). This is one reason why social media have proved stupendously successful: they provide an enviable and historically unprecedented sense of control over friendships, relationships, interests, and ambitions. It’s all there to be browsed and selected, to be liked and commented upon.
The defining illusion of television is escape — the belief that burning hour after hour in front of the TV screen offers a refuge from the mundane world, even while it ever-more-deeply embeds us in the embrace of our sofas. But the defining illusion of interactive screens is agency. Suffused with feedback, an entire universe of data at our fingertips, we’re inclined to confuse knowledge with control, and information with comprehension. And, like my hypothetical teenage self, we’re grateful to be given the chance.
In a sense, it all comes back to what Foster Wallace labelled ‘Audience’, with a capital ‘A’: the transforming force of others’ simulated presence, and our presence simulated right back at them. Online, we are simultaneously author and audience, not to mention our own full-time publicist and agent. And we are lavishly talented at playing these roles. We are — don’t get me wrong — extremely lucky to be blessed by this apotheosis of human imagining and ingenuity.
Yet it’s also a heavy burden to heft — and all the more so for the infinite, weightless capacities of the medium within which we do so. If there’s only one lesson we should take from Kahneman et al, it is that every human illusion from consciousness up takes effort to maintain — and too much performance in one area can leave the rest of us stretched thin.
Consider the grand performance of incarnating ourselves online. It takes place courtesy of screens, wires, radio waves, incandescent dots and colours, together with the apparatus of content creation itself, from keyboards and cameras to website templates. Yet for it all to hang together, we must privilege these illusions over the merely real world surrounding us: the rooms, shelves, sofas, streets and people who uniquely share our time and place. We play, we pretend — quite brilliantly — and in return we are gifted mastery, barely sensing the embrace of other assumptions.
Perhaps that’s why the American technology journalist and author Paul Miller decided to live ‘off-line’ for a year. In the essay ‘Project Glass and the Epic History of Wearable Computers’ for The Verge, he argued that ‘much of what passes for innovation these days is enclosed inside a very small space: a better way to check-in, or upload a photo, or manage your friend list’. This is the narrow zone within which every vision of progress is a further step towards data-led disembodiment: more content, more connection, faster and more ubiquitous computing, brimming the screens in our pockets and the overlays in front of our eyes. It’s an intoxicating offering. But it’s also a steady constriction of what it means to be us.
Is there another way? I would argue that there is, and that much of it lies apart from the maelstrom of ‘Audience’. If it means anything, intimacy is surely about what we are not willing to share; those things closest to us, both literally and metaphorically, through which we uniquely define ourselves.
Indeed, there are forms of enhancement that are about thickening our presence in a particular place at a particular moment in time, not turning our back on reality, and that help us to give a certain quality of time or attention to those around us, and ourselves. Similarly, there are ways of wearing our own tools more lightly and of using them to turn us more passionately towards reality — not to mention the intractable physicality of these self-same tools, which are neither massless nor placeless, no matter how many claims they may make to the contrary.
Ultimately, there is a symmetry between treating ourselves as disembodied and seeing our machines as a weightless other world. In each case, chains of true cause and effect are replaced by a kind of magical thinking, and the gifts of human illusion cross over into delusion.
‘Any sufficiently advanced technology,’ Arthur C Clarke wrote in 1973, ‘is indistinguishable from magic.’ It’s one of science fiction’s most famous maxims — and I’ve always hated it. Assuming that there’s no such thing as ‘real’ magic, and that what we mean when we talk about magic is someone being fooled by someone else, what is he actually saying: that, past a certain point, all we can do is gawp and applaud at the end of the show?
This won’t do. All the magic, after all, belongs not to these tools, but to us: in the stories we tell, the illusions we share. It’s ours, and we can withhold it if we see fit — refuse to clap, peek behind the curtain, tell the performers that we know there’s a trapdoor somewhere onstage. You don’t have to believe in magic to love it.
Quite the reverse, in fact. Just like belonging to any ‘Audience’, it isn’t proper fun unless everyone has tacitly agreed the rules. If only one side knows what’s going on, it’s no longer entertainment: it’s a con trick, and a price is being extracted.
This is our future. We’re playing better, brighter games than ever — and bringing them ever closer to the place where we hold ourselves. It’s terrific, and I’m thrilled to be on board for the ride. More than ever, though, we cannot afford to believe in magic, or to overlook the effortful divide between us as we actually are and ‘us’ as we appear on screen. Because the screen is only the beginning — and it will be a sad thing indeed if our best model for humanity’s self-invention remains a chunk of furniture.