On 10 February 1853, a ship named the Charles Mallory arrived in port in Honolulu, Hawaii, having made the journey from San Francisco in only 13 days, a near-record time, particularly for a ship of its small size. The Charles Mallory was among the newest breed of ships that benefited from the latest in maritime technology – a marvel of modern engineering and ingenuity that had brought one of the world’s most isolated island chains that much closer to the rest of the world. But any cause for celebration was undercut by the yellow flag flying from the ship’s mast as it docked in Honolulu’s harbour, a harbinger of doom signalling a terrifying disease aboard the Charles Mallory: smallpox.
The Hawaiian islands had never had to deal with the disease before, and despite a hasty quarantine and inoculation effort, smallpox tore through the population. By the summer, the infection had taken hold of Oahu, and quickly began to spread to other islands. The local physician Dwight Baldwin, in a desperate move to keep it out of Maui, raced down the coast, shouting to all who’d listen: ‘Do not let anybody land! Drive them back, drive them back! They bring a terrible sickness!’ All through that summer and fall, smallpox cut a swath through the population. By the time the epidemic had burned itself out in January 1854, the population of Oahu had been halved, and a fifth of Hawaii’s population had been killed.
Thankfully, smallpox is a thing of the past. The ship that burst into Honolulu’s harbour rode a wave of technological revolutions that continue to this day, closing the gaps between us, increasing the speed of daily life, binding the world ever tighter. And for the most part, these technological improvements are making us safer: life expectancy has nearly doubled since 1850, and it has increased worldwide by six years since 1990 alone. The development of vaccines and antibacterial drugs has cut infant and childhood mortality dramatically in first-world countries.
Having eradicated smallpox, we are on the verge of consigning polio and guinea‑worm disease to the same fate. Each new generation of engineering brings lighter, stronger, safer materials, resulting in more durable, safer automobiles, planes and infrastructure. The new I-35W bridge in Minneapolis, completed in 2008 to replace the one that collapsed in 2007, has 323 fibre-optic sensors built in that provide engineers with real-time data on the stress, corrosion and movement of the bridge.
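The kind of monitoring such embedded sensors make possible can be sketched in a few lines. The snippet below is a hypothetical illustration, not the bridge’s actual system: it flags any sensor reading that drifts more than a few standard deviations from a rolling baseline, with the window size and threshold chosen arbitrarily.

```python
from collections import deque
from statistics import mean, stdev

def monitor(stream, window=20, n_sigma=3.0):
    """Toy structural-health check (illustrative only): yield
    (index, value) for any reading more than n_sigma standard
    deviations away from the mean of the preceding `window` readings."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > n_sigma * sigma:
                yield i, value  # reading has left the baseline band
        history.append(value)

# Twenty-one steady strain readings, then a sudden jump:
readings = [100.0, 100.1, 99.9] * 7 + [110.0]
alerts = list(monitor(readings))  # only the jump is flagged
```

A real structural-health system would fuse hundreds of channels and compensate for temperature and traffic load; the point here is only how a continuous data stream turns a structure’s hidden state into something engineers can watch.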
Technology has other benefits too: it makes us less willing to kill one another. Knit closer together via international trade, the internet and other global communication, we are less likely to view each other as enemies. As Steven Pinker argues in The Better Angels of Our Nature (2011):
As technological progress allows the exchange of goods and ideas over longer distances and among larger groups of trading partners, other people become more valuable alive than dead. They switch from being targets of demonisation and dehumanisation to potential partners in reciprocal altruism.
It’s hard not to feel as though history is progressing forward, along a linear trajectory of increased safety and relative happiness.
Even a quick round-up of the technological advances of the past few decades suggests that we’re steadily moving forward along an axis of progress in which old concerns are eliminated one by one. Once-feared natural disasters, too, are gradually being tamed by humanity: promising developments in tsunami early-warning systems might soon prevent the massive loss of life caused by catastrophes such as the 2004 Indian Ocean tsunami.
Technology has rendered much of the natural world, to borrow a term from Edmund Burke and Immanuel Kant, sublime. For Kant, nature becomes sublime once it becomes ‘a power that has no dominion over us’; a scene of natural terror that, viewed safely, becomes an enjoyable, almost transcendental experience. The sublime arises from our awareness that we ourselves are independent from nature and have ‘a superiority over nature’. The sublime is the dangerous thing made safe, a reaffirmation of the power of humanity and its ability to engineer its own security. And so with each new generation of technological innovation, we edge closer and closer towards an age of sublimity.
What’s less obvious in all this are the hidden, often surprising risks. As the story of the Charles Mallory attests, unexpected dangers sometimes hide within the latest technological achievement. Hawaii had been insulated from smallpox for centuries, simply by virtue of the islands’ distance from any other inhabited land. Nearly 2,400 miles from San Francisco, Hawaii is far enough from the rest of civilisation that any ship heading towards its islands with smallpox on board wouldn’t arrive before the disease had burned itself out. But the Charles Mallory was fast enough to make the trip before it could rid itself of its deadly cargo, and it delivered unto the remote island chain a killer never before known.
Which is to say: the same technologies that are making our lives easier are also bringing new, often unexpected problems. On 1 September 1859, the British astronomer Richard Carrington witnessed a coronal mass ejection (CME), a vast burst of solar wind and magnetised plasma erupting from the corona of the Sun. The Carrington Event, as it came to be known, was not only the first recorded CME, it was also one of the largest on record, and it unleashed a foreboding and wondrous display of light and magnetic effects. Auroras were seen as far south in the northern hemisphere as San Salvador and Honolulu. As the Baltimore Sun reported at the time: ‘From twilight until 10 o’clock last night the whole heavens were lighted by the aurora borealis, more brilliant and beautiful than had been witnessed for years before.’
At the time, the event caused some minor magnetic disruption to telegraph wires, but for the most part such a spectacular display did little damage; its main legacy was the fantastic lights across the sky in early September. But should a solar flare on the scale of the Carrington Event happen now (one estimate puts the chance of one hitting the Earth before 2022 at 12 per cent), its effects on our advanced civilisation would be radically different. A CME of the same intensity striking the Earth head-on could cause catastrophic damage.
A National Research Council report in 2008 estimated that another Carrington Event could cause a disruption of US infrastructure that would take between four and 10 years – and trillions of dollars – to recover from. Particularly vulnerable are the massive transformers on which our entire power system relies: large fluxes in magnetic energy can saturate and overload a transformer’s magnetic core, overheating and melting its copper windings. In the worst-case scenario, a repeat of the Carrington Event would cripple our infrastructure so severely it could lead to an apocalyptic breakdown of society, a threat utterly unknown to our ‘less civilised’ ancestors.
Just as technology pacifies once-dangerous events, sometimes the needle swings in the other direction. Call it a reverse sublime, a return of the repressed: a thing that was once safe becomes dangerous. Perhaps we already know this. Perhaps this is why our cultural imagination is suffused with apocalyptic disasters, from Godzilla to The Day After Tomorrow, and we never seem to tire of stories of our own hubris, where the barest instance separates the banality from catastrophe. It’s as though we’re constantly reminding ourselves that everything we’ve built is at its core tenuous, and ready to collapse at a moment’s notice.
Or maybe that’s only so much whistling past graveyards. Freed from the constant worry of danger, we tend to forget that there ever was a danger in the first place. We’ve immunised ourselves from the fear of diseases that once plagued us, to the point where they’re now killing us once more. Fuelled by the viral spread of misinformation and paranoia, vaccine use has plummeted in parts of the Western world, leading to a resurgence of once-vanquished diseases. In the US, annual deaths from pertussis (whooping cough) fell from about 1,100 in 1950 to six in 1995, yet in the past decade outbreaks have spiked once again: more than 48,000 cases were reported in 2012, dwarfing the 5,137 cases reported back in 1995.
Meanwhile, misunderstanding and overuse of antibacterial drugs have bred a new generation of drug-resistant bacteria that could wreak havoc on human populations in the coming decades. Formerly ‘curable’ diseases such as gonorrhoea might soon spread out of control once more. As Thomas Frieden, the director of the US Centers for Disease Control and Prevention, put it in 2013: ‘The most resistant organisms in hospitals are emerging in those settings because of poor antimicrobial stewardship among humans.’
After all, it’s been a long time since we lived among those crippled by polio, or in communities wiped out by smallpox. The longer we go without direct awareness of a threat, the more desensitised we become to the reality of that threat, and the less seriously we take the safeguards put in place to protect us from it. As Henry Petroski notes in his book on engineering failure, To Forgive Design (2012), despite significant technological improvements, buildings and bridges still fail, and planes and cars still crash – not because of the technology itself, but because of the inability of designers to internalise the hard-learned lessons of previous generations. ‘Unfortunately,’ Petroski writes:
The lessons learned from failures are too often forgotten in the course of the renewed period of success that takes place in the context of technological advance. This masks the underlying facts that the design process now is fundamentally the same as the design process 30, 300, even 3,000 years ago. The creative and inherently human process of design, upon which all technological development depends, is in effect timeless. What this means, in part, is that the same cognitive mistakes that were made 3,000, 300, or 30 years ago can be made again today, and can be expected to be made indefinitely into the future. Failures are part of the technological condition.
Despite our progress and achievements, human civilisation doesn’t necessarily progress in the way we expect. If technology moves along a linear axis, it is complemented by a cyclical resurgence of human forgetting, folly and failure. We might not be in danger of lapsing into the Dark Ages, but we do find ourselves relearning the same life-or-death lessons each generation.
The capacity of any technology, you could say, must always be tempered by the limitations of the people who design and implement it. Look at the stock market: when the Dow Jones Industrial Average began to fall in 1929, there were at least those who attempted to put a stop to it, including the heads of several major banks, who on 24 October 1929 publicly bought a series of blue-chip stocks well above the market price in order to instil confidence. It didn’t work in the end, but it slowed the descent over the course of five days as various figures tried to stem the crisis. Nowadays even these safeguards are unavailable to us. When the market began to slide on 6 May 2010, a faulty algorithm triggered what became known as a ‘flash crash’, in which the Dow Jones lost – and then regained – more than 600 points in a matter of minutes.
At first, analysts were almost completely unable to explain what had happened. Ultimately, the culprit proved to be computer programs: trading algorithms designed to execute trades at far higher speeds, immune to the kind of human irrationality, vanity or stupidity that might influence decisions. And yet these programs, operating within the parameters given to them by their human designers, had enacted the same kind of panicked and ill-informed decision-making they had been created to prevent, only faster and on an uncontrollable scale. We inadvertently built our own panic and short-sightedness into the very systems designed to protect us from our worst impulses.
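The dynamic is easy to reproduce in miniature. The toy simulation below is illustrative only – real trading systems are vastly more complex, and every number in it is invented. It pits momentum-following agents, who sell into any sharp drop, against value agents, who buy when the price falls below a notional fundamental value; a single 1 per cent shock cascades into a plunge and then a rebound, a flash crash in caricature.

```python
def simulate(fundamental=100.0, n_agents=50, steps=40, trigger=0.005):
    """Toy price path (all parameters invented): momentum agents
    amplify drops; value agents eventually arrest and reverse them."""
    prices = [fundamental, fundamental * 0.99]  # one external 1% shock
    for _ in range(steps):
        last_return = prices[-1] / prices[-2] - 1
        # Momentum agents: sell whenever the last tick fell more than
        # the trigger; each seller pushes the price down a further 0.2%.
        sellers = n_agents if last_return < -trigger else 0
        # Value agents: buy in proportion to the discount from
        # fundamental value; each buyer pushes the price up 0.2%.
        discount = max(0.0, 1 - prices[-1] / fundamental)
        buyers = int(n_agents * discount * 2)
        prices.append(prices[-1] * (1 - 0.002 * sellers + 0.002 * buyers))
    return prices

path = simulate()
# The price collapses far below fundamental value, then climbs back.
```

Nothing here resembles real market microstructure; the sketch only shows how a simple, individually ‘rational’ rule, executed uniformly and at speed, produces a collective panic that no single agent intends.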
After all, the technology that surrounds us is bound to fail, if only because of the fact that it’s made by humans. As Petroski writes: ‘All things, and especially systems in which people interact with things, fail because they are the products of human endeavour, which means that they are naturally, necessarily, and sometimes notoriously flawed.’
Robots and autopilots might correct for human error, but they cannot compensate for their own designers. Perhaps a brighter technological future lies less in the latest gadgets than in learning to understand ourselves better, particularly our capacity to forget what we’ve already learned. The future of technology is nothing without a long view of the past, and a means to embody history’s mistakes and lessons, as we plough forever forward.