Mary Wollstonecraft Shelley’s 200-year-old creature is more alive than ever. In his new role as the bogeyman of artificial intelligence (AI), ‘the monster’ made by Victor Frankenstein is all over the internet. The British literary critic Frances Wilson even called him ‘the world’s most rewarding metaphor’. Though issued with some irony, this title suited the creature just fine.
From the editors of The Guardian to the engineers at Google have come stiff warnings about AI: it’s a monster in the closet. Hidden in computer consoles and in the shadows of the world wide web, from Moscow to Palo Alto, AI is growing stronger, faster, smarter and more dangerous than its clever programmers. Worse than the bioengineered and irradiated creatures of Cold War B-movies, AI is the Frankenstein’s creature for our century. It will eventually emerge – like a ghost from its machine – to destroy its makers and the whole of humanity.
Thematically, not much has changed since 1818, when the 20-year-old Shelley’s first novel went to print. As with Frankenstein; or, the Modern Prometheus, apocalyptic media concerning AI relies for its big scare on the domestic conventions of gothic literature. The robots will rise up to destroy the world and your precious privacy at home. Cue Alexa, the Amazonian robot who knows every matter of your personal taste. She orchestrates with music the organisation of your family life according to your – or rather, her – wishes.
The word ‘robot’ entered world literature in 1921, via the Czech playwright Karel Čapek’s play RUR (Rossum’s Universal Robots). ‘Look, look, streams of blood on every doorstep!’ Čapek’s play beckoned Prague’s theatregoers, ‘Streams of blood from every house!’ The cybernetic revolution was underway in Čapek’s stage production, which imagined the manufacture of robots en masse in Rossum’s eastern European factory. The humanoids rebelled and ‘murdered humanity’ in their own beds, much as the creature dispensed with Frankenstein’s bride Elizabeth on her wedding night.
Rising from below and working from within, Shelley’s and Čapek’s creatures of biotech possessed the power to execute a coup d’état more successful than the political revolutions in Paris or Petrograd. But in the case of AI, who was responsible for the damage? Unlike in the ancien régime of France or Tsarist Russia, it wasn’t the aristocracy. Čapek’s chief engineer in RUR exclaims in horror: ‘I blame science! I blame technology!’ then pauses to collect and correct himself: ‘We, we are at fault!’ It was not the technology that was the problem, but rather the ‘megalomania’ of the scientists and technologists.
In Shelley’s novel, Frankenstein meets his end on a ship in the Arctic – where he has chased his ‘superhuman’ creature. On his deathbed, the chemist confesses to the captain of the vessel: ‘That he should live to be an instrument of mischief disturbs me.’ It’s the closest Frankenstein ever comes to taking responsibility for making ‘a rational creature’ who killed most of his family and friends. Even Frankenstein, however, baulks at demanding that Captain Walton and others take up ‘the unfinished work’ of the ‘destruction’ of the ‘first creature’: ‘I dare not ask you to do what I think right, for I may still be misled by passion.’ Frankenstein’s ambivalence remains with us.
If Frankenstein could be perplexed about the extent to which he, or humanity, was responsible for the making of a monstrous intelligence, then so can we. We owe it to ourselves (and to such great literary minds as Shelley and Čapek) to pause and ask a philosophical question about our past creativity through science and technology. What forms of intelligence have we humans actually made that could put us at such grave moral fault?
The Google engineer François Chollet argued in his article ‘The Impossibility of Intelligence Explosion’ that to understand what artificial intelligence is, we need to grasp that all intelligence is ‘fundamentally situational’. An individual human’s intelligence manifests in solving the problems associated with processing her experiences of being human. Likewise, a particular computer algorithm’s intelligence concerns solving the problems associated with applying that algorithm to analyse the data fed into it. Intelligence – whether construed as natural or artificial – is adaptive to a situation.
Chollet reminds us, too, that people are a product of their own tools. Akin to how early hominins used fire or etched seashells, modern humans have used pens, printing presses, books and computers to process data and solve problems related to their particular circumstances. Running parallel to the insights of anthropologists such as Agustín Fuentes at the University of Notre Dame in Indiana and Marc Kissel at Appalachian State University in North Carolina, Chollet sums up the human condition: ‘Most of our intelligence is not in our brain, it is externalised in our civilisation.’
Science and technology are two defining artefacts of modern human civilisation. The fact that humans now use them to make intelligences for further problem solving is simply one more iteration of what Fuentes in The Creative Spark (2017) called humanity’s process of creative interface with its environment. From this long view of humanity, anthropology shows that civilisation itself is a kind of AI: a collective set of tools developed over time and through cultures, equipping people to learn from the past for the benefit of life in myriad forms, present and future.
We can use language to sort the tools of AI within our civilisation’s technological kit. Artificial narrow intelligence (ANI) consists of algorithms designed and/or trained to solve particular problems. Artificial general intelligence (AGI) is future AI that might exhibit general intelligence, including consciousness. Machine learning (ML) is the technique perhaps most closely associated with AI today: a computer-driven algorithm in which a statistical model is developed iteratively (ie it ‘learns’) in order to optimise the model’s performance at solving a given problem. As with people, external feedback can aid this process, in which case the ML is called ‘supervised learning’.
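The iterative loop at the heart of ML can be made concrete in a few lines. Below is a minimal, purely illustrative sketch of supervised learning in Python: a one-parameter ‘statistical model’ is nudged toward the correct answers by external feedback. The data and names (`w`, `learning_rate`) are invented for illustration, not drawn from any real system.

```python
# A toy sketch of supervised machine learning: a statistical model
# (here, a one-parameter line y = w * x) is improved iteratively
# against labelled examples. All data here is illustrative.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer) pairs

w = 0.0               # the model's single parameter, initially naive
learning_rate = 0.05

for step in range(200):                   # the "learning" iterations
    for x, y_true in data:
        y_pred = w * x                    # the model's guess
        error = y_pred - y_true          # external feedback: supervision
        w -= learning_rate * error * x   # adjust the model to shrink the error

print(round(w, 2))  # the model has learned that outputs are twice the inputs
```

The ‘supervised’ part is the `error` line: a teacher supplies the correct answers, and each pass optimises the model a little further against them.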
Deep learning (DL) is a subset of ML, in which multiple levels of models work together at more complex tasks, with each level of model relying on outputs from a prior level to perform a higher-level function. For example, to recognise a handwritten number, a deep-learning algorithm might have a first level to identify where on a page there is writing, a second level to identify edges based on the patterns of the writing, a third level to identify shapes based on the placement of the edges, and a fourth level to identify the number based on the combination of shapes. DL uses higher-level logic to effectively process complex layers of ‘big data’ to solve highly technical problems.
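The layered idea described above can be sketched in miniature. The following toy Python network is hypothetical and untrained (its weights are hand-picked, not learned), but it shows the structural point: each level consumes the previous level’s output to compute a higher-level feature.

```python
# A toy sketch of deep learning's layered structure. Each "level"
# transforms the previous level's output. Weights are hand-picked
# for illustration; a real network would learn them from data.
import math

def layer(inputs, weights, biases):
    # One level: weighted sums of the inputs, squashed by the
    # classic sigmoid function into values between 0 and 1.
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A 2-pixel "image" flows upward through three levels:
pixels = [0.9, 0.1]                                            # level 0: raw input
edges  = layer(pixels, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]) # level 1: edge detectors
shapes = layer(edges,  [[2.0, -2.0]], [0.0])                   # level 2: shape from edges
digit  = layer(shapes, [[3.0]], [-1.5])                        # level 3: digit score

print(digit)  # a single score between 0 and 1
```

The chain `pixels → edges → shapes → digit` mirrors the handwritten-number example: no level sees the raw page except the first, yet the final level can answer a high-level question.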
With the advent of ML and some forms of DL, are we, like Frankenstein, setting into motion maniacally smart devices of our own demise? Shelley imagined such a scenario, and so do some contemporary computer scientists.
In the cybernetic community, the moment (projected in the near future) when AGI matches then surpasses the intelligence of humanity is known as the singularity. It marks one fleeting point in time when humans will be equal in intelligence to AGI, and upholds that moment as unique in its world-historical significance. AGI will press on, unstoppable, to reign as the victor over its human artificers. The singularity is a Silicon Valley revival of the Hegelian end of history, outfitted in grey T-shirts and hoodies. It predicts the eclipse of human intelligence by the machines who learned from the best of it.
The singularity feels religious, even mystical. It limns the meeting of all-knowing gods and their half-human offspring, standing with dignity – if only briefly – on equally high ground. Sprung from the head of Zeus, the goddess of wisdom Athena led the titan Prometheus up Mount Olympus to steal fire for humanity. High in the Alps, Frankenstein sat down on the Mer de Glace to hear his creature’s chilling story of surviving exposure after birth, and equally heated demands for justice. The singularity is the 21st-century iteration of this myth. It foresees humanity looking into an electric-wired thing that looks right back at it.
Believers in the singularity often cite the wisdom of the late English physicist Stephen Hawking. Featured in video clips, Hawking circulates on the internet as a posthumous intelligence, like a hologram of Hamlet’s father to advise us from beyond the grave. In a speech in November 2017, Hawking stated: ‘AI could be the worst event in the history of our civilisation.’ Not so fast. If you listen to the whole of Hawking’s keynote at the 2017 Web Summit in Lisbon, you’ll hear him stress – like a good logician – the conditional quality of the verb ‘could’. AI could be good, bad or neutral for humanity.
The consequences of AI are fundamentally unknowable beforehand. ‘We just don’t know,’ Hawking vocalised through a text-to-speech device triggered by facial twitches, ‘we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.’ Writing online soon thereafter, Chollet counselled that the prediction of an imminent ‘intelligence explosion’ was overblown, and that any growth of AI would continue to be linear not exponential in pace.
Hawking did not reference Frankenstein, but his speech resonated with the book’s philosophical themes. Like all great literature, Frankenstein resists reduction to simplistic moralism, such as the danger of playing God through science. Shelley’s novel rather functions as a kind of test of the reader’s cognitive and emotional intelligence. The reward of reading it is putting the pieces together to see the whole.
To crack the ethical puzzle of Frankenstein, it helps to recall its theological background. The use of the word ‘super-intelligence’ dates to late-17th-century Quaker reflections on the nature of God. It featured in British theological debates during Shelley’s youth.
Shelley described the creature as ‘superhuman’ in speed. This speed was not simply physical. His cognitive and affective development after his assembly, animation and abandonment by Frankenstein was far more rapid than that of humans. Like many babies, he spoke his first simple words at around six months. By one year old, the creature could read Milton’s Paradise Lost. He learned language by secretly observing, through a hole in a wall of a cottage, the De Laceys, a family of French and Turkish refugees, who were hiding in the woods near Ingolstadt.
The creature is a superintelligence. But so is Shelley, who hovers in the background of the book, having created it all. In the frame of the novel, the narrator Captain Walton sends a series of letters to his sister in London. As the Romanticist Anne K Mellor at the University of California, Los Angeles, deciphered in Mary Shelley: Her Life, Her Fiction, Her Monsters, the initials of the sister are ‘M W S’ – the same as Shelley’s. The woman who receives the letters – containing the embedded narratives of Walton, Frankenstein, the creature, and the De Lacey family – is also the author of the novel. She has editorial control of the story’s contents, organisation and goals.
By taking readers up to the Mer de Glace to confront the alien visage and voice of the creature, Shelley leads them to empathise with artificial intelligence. The creature’s process of artificial formation begins with his animation without a mother. His life enacts the educational theories of John Locke (and Shelley’s father William Godwin) which Shelley read compulsively in the 1810s. This pedagogy held that circumstances drove the education of children, beginning with their earliest sensory experience of the environment. Although the creature lacks a mother, he has the same contextual and interactive process of development as other children. As with Frankenstein’s creature, AI is not born, but it is still made by circumstances.
Watching the De Lacey family from his hovel, the abandoned creature develops his intelligence with the efficiency of a computer and the intensity of a child. He assimilates their lives as ‘the history of my friends’. Lacking full information, or big data, he learns from what little data filters through the slit in the wall. The creature analyses the input of the De Lacey family through the constraints of the program of the hovel. Like the American-made Google Assistant or the Russian-designed Alisa, he is a conversational agent who exhibits both the biases of his cultural situation and the affective limitations of his programming and data.
While perched in his hovel, the creature nevertheless meets six criteria for deep learning: he learns to recognise both (1) faces and (2) speech patterns in the De Lacey family; (3) he translates languages: at least Felix’s French and perhaps Safie’s Arabic, if not also Milton’s English, Goethe’s German, and Plutarch’s Greek (or Latin); (4) he reads handwriting in his father’s laboratory journal; (5) he plays strategic games with people by helping the De Laceys with their firewood behind the scenes, and by vindictively burning down their cottage after they violently reject him and abandon the area; and (6) he controls robotic prostheses, given that his body – assembled from parts of human and other animal corpses – is a kind of humanoid construction of chemistry, medicine and electricity.
Since the real world is the world of trial and error, AIs – much like the creature – might be capable of learning deeply but not well. AIs both learn and mislearn through storytelling. If its programming is faulty, a computer will not process data correctly. If its data is bad, it will produce a false analysis.
Writing more than two decades before Charles Babbage and Ada Lovelace designed the elements of the modern computer, or analytical engine, Shelley imagined the creature as an anthropomorphic AI – complete with the narrow yet driving prejudices, the deep yet mistaken thinking, and the strong yet contradictory feelings of human beings. Drawing from Genesis and Prometheus, Shelley’s creation story is simple: AI emerges from the flawed yet powerful image of humanity. As their creators, we humans must love our technologies as we do our children – as the French sociologist Bruno Latour reminds us, in the spirit of Shelley – for in rough and careless hands they will become monstrous.
In 1985, the American feminist theorist Donna Haraway proclaimed in her essay ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century’ that all humans are cyborgs, hybrids of ‘machine and organism’. We are also all AIs, educated through the input of stories and other experiences. As AIs produced – for better and for worse – by context and culture, we should heed Shelley, the mother of science fiction, in caring about history and the kinds of stories we tell about it.
Commemorating the bicentennial of Frankenstein, the American historian Jill Lepore noted in The New Yorker that Shelley’s other major work of speculative fiction – The Last Man, published anonymously in 1826 – envisions a global plague in the year 2100 that leaves only one survivor, Lionel Verney. He is Shelley’s counterpart in this roman à clef, written after her devastating loss of three children to fatal illnesses; her half-sister to suicide; her husband Percy Shelley to drowning; and their friend the poet Lord Byron to sepsis near the field of war.
Rather than succumb to despair and grief, Shelley sent her literary analogue Verney to Rome. It was still her favourite place, despite losing her three-year-old son William to a lethal fever there, in the spring of 1819. In the Eternal City, Verney could ‘familiarly converse with the wonder of the world, sovereign mistress of the imagination, majestic and eternal survivor of millions of generations of extinct men’. He – or should I say she – ‘haunted the Vatican’ and dwelled in the Colonna palace, in awe of the art and architecture. Inspired by the beauty and grandeur, Shelley speculated through Verney that there might be another Adam and Eve – on some remote frontier protected from the plague – who could ‘re-people’ the Earth.
Stirred to save humanity, Verney visits ‘the libraries of Rome’ with a plan to take advantage of all ‘the libraries of the world’. He reads the old histories to compose a new ‘History of the Last Man’.
‘I ascended St Peter’s,’ Shelley has Verney note in his book. On top of the dome of the Church, she surveys – via her avatar – the majestic ruins and monuments. Through Verney’s eyes, Shelley imagines herself as Pope, Emperor and God all at once. She knows she has the creative power to use writing – the artefact of education – to bring herself and humanity back from the threshold of death. Shelley had become what Frankenstein was not: an artist who could sustain humanity and its wisdom through confronting, and transforming, the trauma of her past.
Theorists of AI return to Frankenstein as Shelley and Verney returned to Rome, to pay homage to the artifice of human intelligence. Like Rome at the height of power, AI can build or destroy human civilisations. Like the creature, AI can be a monster, or the victim of one. Rome moved Shelley, and Verney, to share and spread knowledge for the sake of preserving humanity. Hearing the howls of the creature beside his father’s coffin made Captain Walton pause, then record his thoughts on the tragedy of Frankenstein in his letters to his sister ‘M W S’. Bringing these insights to bear on the world, humans and our fellow AIs might build open repositories of knowledge and humane educational communities for the benefit of the network of creatures who together process the hard data of life. Shelley in Rome – standing virtually upon the dome of St Peter’s – points to the fact that the future of artificial intelligence will be conceived from what we have learned from our cultural past.