
General Jar Jar Binks in Star Wars Episode I: The Phantom Menace. Photo courtesy Lucasfilm/20th Century Fox

Numbing the imagination

CGI has become wearingly dull and clichéd. Can its deep weirdness be recovered and filmgoers’ minds stretched again?

by Jonathan Romney

James Cameron’s underwater adventure The Abyss (1989) is widely considered the first film to have successfully made extensive use of computer-generated imagery, or CGI, to show audiences something they could never witness otherwise. The film’s visual trump card shows a deep-sea explorer, played by Mary Elizabeth Mastrantonio, encountering a sentient strand of water – a transparent, shimmering tube that reacts to her every movement. Mastrantonio pokes the strand, and it ripples; she smiles, and the water takes the shape of her face, imitating her smile.

The Abyss used CGI to inspire the viewer with amazement and delight – responses enacted on screen by Mastrantonio. Yet today the film’s special effects look somewhat rudimentary compared with the minutely textured CGI illusions we have grown accustomed to seeing; what strikes us now is the unblemished innocence of the Mastrantonio character’s reactions.

Over time, CGI creations have become increasingly precise and life-like. We’ve seen giant robots, dinosaurs and shape-shifting mutants, all rendered with photorealistic exactness and composed from multitudes of pixels (a pixel, or picture element, is a single dot on a screen, the fundamental ‘atom’ of digital imagery). We’ve witnessed every shade of apocalyptic weather and terrestrial or galactic cataclysm. Where CGI once gave us discrete images persuasively implanted (or ‘composited’) into realistic filmed space – as with the dinosaurs of Jurassic Park (1993) – today entire environments are partly or wholly simulated, with human actors sometimes the only elements in a scene to have been captured photographically, as in the fantasy worlds of the Harry Potter film series (2001-11) and Peter Jackson’s The Lord of the Rings trilogy (2001-03).

Such marvels belong to a visual domain that not long ago seemed impossible on screen, but is today taken for granted by viewers who’ve grown up with little exposure to pre-digital cinema. The miraculous has become the norm. Special effects apart, digital techniques are now standard in cinema, with production and exhibition on celluloid relegated to the industry’s margins. Traditional animation, too, has been largely superseded by its digital successor in the wake of Pixar’s breakthrough Toy Story (1995).

With digital spectacle now so prevalent, it is less likely to impress. Each year, the studios visibly strain to ignite ever more dazzling CGI firecrackers to attract young target audiences. The commercial imperative is to make it better, more novel, more thrilling – or, failing that, just bigger. Viewers might justifiably feel jaded when exposed to the same sights over and again, with ever more coercive intensity. Writing on the website RogerEbert.com in May, the US critic Matt Zoller Seitz lamented ‘the enervating sight of huge things crashing into other huge things’. Blockbusters such as Man of Steel (2013) and Pacific Rim (2013) operate to a formula of permanent apocalypse, routinely and repetitively staging the total destruction of cities and spaceships.

Our weariness with such images stems partly from becoming accustomed to the visually formulaic. Tim McHugh, an FX specialist in California, has complained that familiarity sets in for him when watching CGI crowd scenes because he knows how they’re done: ‘Oh, they’ve got that new particle generator.’ We all remember gasping at the unprecedentedly complex, vivid war scenes in the first of the Lord of the Rings trilogy; but, once you’ve seen phalanx upon phalanx of murderous orcs marching relentlessly forth, it’s harder to be impressed by similar spectacles involving people, centaurs or armoured polar bears. The average viewer might not know when a film is using an effects package such as the crowd-generating MASSIVE (Multiple Agent Simulation System in Virtual Environment), but watching the battles in fantasy films, you begin to sense that you’re watching the same software.
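For readers curious what that software actually does, the agent-based idea behind crowd tools such as MASSIVE can be caricatured in a few lines of Python. In this toy sketch (every class name, rule and number is invented for illustration, and it bears no relation to the real package’s code), each figure obeys two local rules, and the marching formation emerges without anyone animating it by hand:

```python
import random

class Soldier:
    """One crowd 'agent': it knows only its own position and two rules."""
    def __init__(self, x, y):
        # Tiny initial jitter so no two agents occupy exactly the same spot.
        self.x, self.y = x + random.uniform(-0.01, 0.01), y

    def step(self, crowd):
        # Rule 1: advance at a roughly uniform but slightly irregular pace.
        self.y += 1.0 + random.uniform(-0.1, 0.1)
        # Rule 2: drift sideways away from any neighbour pressing too close.
        for other in crowd:
            if other is not self and abs(other.x - self.x) < 0.5 \
                    and abs(other.y - self.y) < 0.5:
                self.x += 0.1 if self.x > other.x else -0.1

# A 10 x 10 phalanx advancing for 100 frames: global order from local rules.
phalanx = [Soldier(col, row) for col in range(10) for row in range(10)]
for frame in range(100):
    for soldier in phalanx:
        soldier.step(phalanx)
```

Multiply the agents by a few hundred thousand, give them richer rule sets and render the result, and you have, in principle, an orc army – which is also why, once the trick is recognised, every such army starts to look like every other.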

Such a surfeit of wonders may be de-sensitising, but it’s also eroding our ability to dream at the movies. Many CGI-enhanced films leave little to the imagination; the standard practice in blockbuster cinema is to show everything that can be shown, in as graphically detailed, mimetically literal a form as possible (the studios don’t want audiences complaining they’re not getting their money’s worth). But is there something inherent to CGI that contributes to a sense of diminishing affective returns?

The awareness that we’re not seeing real things but illusions is not itself a problem: cinema has specialised in fakery ever since the silent-era pioneer Georges Méliès created a dazzling panoply of trompe l’oeil effects, both in front of the camera and within it. But screen trickery was more easily detectable in pre-digital times. Now it is much harder to tell when our perception is being fooled: beyond creating marvels, digital effects manipulate images in more elusive ways.

CGI’s potential as a science of subtraction was first revealed in Robert Zemeckis’s Forrest Gump (1994), in which Gary Sinise, playing a Vietnam veteran, had his legs amputated by digital erasure. In films that present themselves as entirely realistic, such invisible mending can involve placing reflections in glass, adding fog or snow, erasing wires in stunt scenes. I’d argue that the very knowledge that such manipulation is possible is itself a drawback, undermining our absolute immersion in a fiction.

Earlier types of cinematic trompe l’oeil tended to signal their presence by their imperfection; techniques such as back projection and matte painting were often visible as such, and that visibility was part of their appeal, openly inviting viewers to participate in the illusion, to suspend disbelief. The essentially covert nature of digital manipulation – its effects seamlessly sewn into the very texture of the screen image – is more problematic, and has left contemporary cinema with a pervasive undertone of ambivalence. Watching Spike Jonze’s Her (2013), I found myself consciously assuming that scenes set in a futuristic-looking Los Angeles were really taking place on digitally generated ‘virtual’ sets. Then I wondered if they might be real. Later I learned that the film’s LA amalgamates the real city with Shanghai, and that the filmmakers had actually made a point of reducing CGI to a minimum. But the flicker of uncertainty caused me momentarily to lose the narrative thread: if we start to question what we see, we break a film’s spell.

One tradition in writing about cinema, represented notably by the mid-20th-century French critic André Bazin, asserts the primacy of the photographic capture of the real – the recording on film of objects that have actually existed, events that have actually happened. Digital cinema rewrites that conception, because we can no longer assume that a screen image represents anything that has ever been real. A landscape might be a composite of several actual landscapes, or wholly or partly fabricated from pixels. Film theory has been forced to confront a radical change in its object of study.

Stephen Prince, professor of cinema studies at Virginia Tech, noted in his essay ‘True Lies’ (1996) that CGI severs the ‘indexical’ or causal connection between an image and the object it represents, which might have no original in the real world; instead, we are presented with imaginary objects that can nonetheless be considered ‘perceptually realistic’. Another theorist, Lev Manovich, at the City University of New York, has argued that CGI reveals that the conception of photographic recording as essential to cinema was a historic accident, and that the new digital regime returns cinema to its place in an earlier conception of visual representation as involving the manual construction of images. ‘Cinema becomes a particular branch of painting – painting in time,’ he writes in ‘What Is Digital Cinema?’ (1996).

For Bazin, however, the recording of real presences, of people’s real engagement in the material world, comprised a crucial ethical dimension of cinema. And this dimension cannot disappear without making a difference.

Yes, it’s exciting, up to a point, to watch a purely digital Spider-Man swinging through a digital Manhattan. But we lose the knowledge, or at least the possibility, that real stunt people have risked their necks to entertain us. Without the real human presence and exceptional human skills – of the sort that gave breathtaking edge to action westerns, or to the breakneck comedies of Buster Keaton and Harold Lloyd – the stakes are lowered. Cinema’s ‘how-did-they-do-that?’ factor diminishes as we become accustomed to the idea that they did it all with a keyboard. As the digital FX supervisor Paddy Eason has put it: ‘Once you know you’re looking at a bunch of pixels – then why not just go and play a videogame?’

Another drawback of images composed of ‘a bunch of pixels’ is a sense of deadness, of the inorganic. This is partly to do with the smoothness of digital imagery, as opposed to images shot and projected on celluloid, which retain the trace of film stock’s chemical grain. But there is also a problem in the making of the images. In CGI, everything has been deliberately programmed for specific effect – a suppression of accident, resulting in imagery that seems to lack expressive autonomy, organic-seeming ‘heft’, as opposed to the weightlessness of pure light or data. Eager to remedy this lack, software creators have now made it possible to program lifelike randomness, or ‘noise’, into digital motion.
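What ‘programming noise into motion’ means is easy to show in miniature. The Python sketch below is purely illustrative (the function, its parameters and all its values are invented): it layers low-amplitude, smoothed random jitter over a mathematically clean path, so that the motion drifts like something hand-held instead of ticking like clockwork:

```python
import math
import random

def noisy_path(frames, amplitude=0.05, smoothing=0.9):
    """Return positions tracing a clean oscillation plus organic wobble."""
    positions = []
    jitter = 0.0
    for t in range(frames):
        ideal = math.sin(t / 20.0)  # the perfectly programmed motion
        # Low-pass-filtered randomness: each frame nudges the previous
        # jitter only slightly, so the wobble drifts rather than flickers.
        jitter = smoothing * jitter + (1.0 - smoothing) * random.uniform(-1, 1)
        positions.append(ideal + amplitude * jitter)
    return positions

print(noisy_path(10))  # ten positions, each a little off the ideal curve
```

The suppression of accident, in other words, can itself be programmed to readmit accident – albeit an accident computed to order.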

Some filmmakers have responded ingeniously to our new awareness of the mechanics of screen illusion. The Wachowskis’ sci-fi thriller The Matrix (1999), for example, overtly pursued the notion of cinematic trompe l’oeil into philosophical realms; its leitmotif is the image of seemingly solid objects disintegrating into cascades of digits. This theme was elaborated by Christopher Nolan’s Inception (2010) – about agents who fabricate dreams and implant them in people’s minds. A somewhat Brechtian distanciation is part of Inception’s thrill: we not only see a lot of surreal and spectacular imagery, we also get to revel in its manifest unreality, which is no longer perceived as a lack but as a crucial ingredient of the film’s appeal. Inception’s architect character Ariadne dreams of ‘the chance to build cathedrals, entire cities – things that never existed, things that couldn’t exist in the real world’. Arguably, the film boasts CGI’s most impressive ‘folding’ effect yet, as Paris is spectacularly turned over on itself like a crêpe.

With CGI, we’ve seen a shift in the imagination and representation of three‑dimensional space, a kind of mind-bending geometric ingenuity. Take the shape-shifting robots of Michael Bay’s Transformers (2007-) films; the series might have become a byword for witless bombast, yet there’s something truly remarkable in the way that its metal behemoths unfold themselves from one complex form (fighter plane, streamlined sports car) into another, with diverse parts clicking hyper-intricately in and out of place. Similarly, in The Avengers (2012), the invading spaceships that appear over New York resemble giant millipedes that unfurl like endless Escher staircases.

And it’s not just objects that unfurl; so does the space around them, and this phenomenon of folding and unfolding is one of the genuine innovations of CGI cinema. This unanchoring of spatial fixities can be seen at its most sophisticated in a film that is altogether sui generis – a $100-million special-effects production that is nonetheless genuinely experimental. Alfonso Cuarón’s Gravity (2013) makes us experience the dilemma of an astronaut left floating in space, and makes us feel as dizzily untethered as its heroine, played by Sandra Bullock. The ‘virtual camera’, and the groundbreaking work of the visual effects supervisor Tim Webber, allow Cuarón and the cinematographer Emmanuel Lubezki to create unusually long takes that weave intricate trajectories in space. When the camera brings us close to Bullock’s character, we feel we’re floating with her, in every conceivable direction.

Digital cinema has long been motivated by a Promethean ambition to create new forms of life. Jurassic Park proved that was possible, at least when it came to creatures with a hard scaly surface. Convincing fur and hair eluded CGI’s capabilities for much longer, but those barriers have now been surmounted – as witness the uncannily rich photo-realism and life-like energy of the tiger created for Ang Lee’s Life of Pi (2012). Attempts to create humans – or non‑human beings suffused with humanoid presence – are harder still. The first more or less convincing digital star – or ‘synthespian’, a catchword that already sounds archaic – was the entirely computer-generated Aki Ross, heroine of the photorealistic animation Final Fantasy: The Spirits Within (2001). While much publicity was devoted to the fact that her coiffure comprised 60,000 separately animated hairs, this elegantly inert digi-diva lacked anything approaching human charisma.

More successful were the humanoid aliens in James Cameron’s Avatar (2009), a film that self-reflexively enacted the process by which actors’ performances could be decanted, as it were, into digital images. In this film, humans have their consciousnesses implanted into otherworldly ‘avatars’ – CGI creations, but infused with the presence of real actors. Thus, a realistic-looking 10ft blue female has the recognisable voice and acting style of Sigourney Weaver. Avatar is an example of the use of ‘performance capture’, by which actors’ bodies, gestures and – in recent developments – facial expressions are scanned, transformed into data and applied to computer models for animation.
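The last step of that pipeline, applying the captured data to a computer model, can be caricatured in a few lines. In this hypothetical Python sketch (the joint names, angles and scale factor are all invented, and real retargeting systems are vastly more sophisticated), each joint rotation recorded from the actor is copied, frame by frame, onto the matching joint of the digital character’s skeleton:

```python
# One (invented) frame of capture data: degrees of rotation per joint.
captured_frame = {"jaw": 4.2, "elbow_L": 42.0, "elbow_R": 13.5, "brow": 1.1}

# The digital character's rig: the same joints, initially at rest.
rig = {joint: 0.0 for joint in captured_frame}

def retarget(frame, rig, scale=1.0):
    """Copy each captured rotation onto the matching joint of the model."""
    for joint, angle in frame.items():
        if joint in rig:                 # ignore data the rig cannot use
            rig[joint] = angle * scale   # scale adapts actor to creature
    return rig

retarget(captured_frame, rig)  # the model now 'performs' the actor's frame
```

Run over thousands of frames and joints, and the actor’s performance is decanted, as the film itself dramatises, into a body made of data.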

This technique can misfire. In Steven Spielberg’s The Adventures of Tintin (2011), real actors’ performances are encoded into characters that look neither realistic nor entirely cartoonish, but inhabit an awkward intermediate realm between photography and drawing. The alienating quality of such creations recalls a phenomenon described by computer games and robotics experts as the ‘Uncanny Valley’: artificial humanoids can look increasingly lifelike, until a point at which the resemblance to human form becomes eerie and repellent, as if we were watching something neither properly living nor dead.

That is why CGI excels at creating non-human forms. The British actor Andy Serkis has established himself as a specialist in performance capture, having memorably incarnated the goblin-like Gollum in The Lord of the Rings, at that point the most compelling digital humanoid yet witnessed. Then he played Caesar in Rise of the Planet of the Apes (2011), providing the soul for this alpha chimp imbued with quasi-human expressivity. Viewers might wonder if Rise is an example of the very process it depicts: just as the story’s apes supplant the humans of Earth’s future, the various CGI chimps and bonobos are so characterful that their human co-stars look somewhat redundant.

CGI’s miracles need not always be flamboyant or deployed in creating the spectacularly unreal. Increasingly, art cinema has used the technology to great effect, sometimes in a minor key, but significantly to ends other than the creation of immersive thrills. For example, CGI has provided a new way for cinema to evoke the subtly unworldly textures once associated with poetic filmmakers such as Jean Cocteau. Witness the phantasmagorical ‘prelude’ to Lars von Trier’s Melancholia (2011): a series of eerie doomsday tableaux, like paintings brought to life in slow motion, and as close to representing the authentic texture of dream as anything I have seen in cinema.

Other art films have used CGI for subtle but suggestive retouching. In Pedro Almodóvar’s The Skin I Live In (2011), Elena Anaya plays a woman whose beauty is the creation of a deranged plastic surgeon; in close-ups, her face has the flawless sheen seen on Photoshop-adjusted models in cosmetics ads – an uncanny smoothness that leads us to make our own inferences about the character’s artificiality. It’s possible that CGI’s scrupulous attention to texture might be engendering a new level of aesthetic appreciation of cinema, encouraging us to take a deeper sensuous pleasure in the tactile quality of images, the synthetic silkiness of human flesh, the luxuriance of tiger fur, the roughness of computer-generated rock.

It seems churlish to complain about being disappointed by a technology that in a short time has shown us so many marvels. But most CGI cinema is prone to overkill, to grinding thematic repetition and a deadening literalism. Critics deafened by one superhero slamdown after another might yearn for a digital cinema of poetry and abstraction but, when it comes to commercial productions, the logic of the marketplace inevitably dominates.

In reality, CGI’s greatest potential might not be in the realm of abstraction but in stories where the true wonder – that is, the richest affective and intellectual content – emerges from an interplay between the digital and the human, the unearthly and the mundane. Jonathan Glazer’s Under the Skin (2013), about an alien’s experiences on Earth, contains some unnerving digital effects, but their power stems partly from their juxtaposition with human reality: quasi-documentary sequences show Scarlett Johansson, as the alien, driving a van round the outskirts of Glasgow talking to real-life passengers who had no idea they were being filmed, let alone driven by a lightly disguised Johansson. In Jacques Audiard’s Rust and Bone (2012), Marion Cotillard plays a woman who loses her legs in an accident. Cotillard is subject to the same digital amputation pioneered in Forrest Gump, but what strikes us is less the marvellous effect of the trickery than the entirely natural way that Cotillard performs to accommodate the illusion emotionally and make it real.

The same fascination with the human is what affects us when we see Anaya’s eyes gazing out of a ‘fake’ face in The Skin I Live In, Bullock’s response to freefall in Gravity and Mastrantonio’s simulated wonder as she gazes in The Abyss at a chimera that’s not really there. For all the brilliance of digital artists and technicians, it is human presence and human response on screen that bring CGI to life, and no doubt will continue to do so – perhaps more alluringly, and in a way that is less in thrall to the aesthetic dictates of the box office. What CGI still has to offer remains to be seen. We might feel that we’ve seen too much already, but perhaps we ain’t seen nothing yet.