Should we be afraid of AI?

Machines seem to be getting smarter and smarter and much better at human jobs, yet true AI is utterly implausible. Why?

by Luciano Floridi

Illustration by Matt Murphy via Handsome Frank

Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Once ultraintelligent machines become a reality, they might not be docile at all but behave like Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of the effects on human lives.

If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the amazing developments in our digital technologies have led many people to believe that Good’s ‘intelligence explosion’ is a serious risk, and the end of our species might be near, if we’re not careful. This is Stephen Hawking in 2014:

The development of full artificial intelligence could spell the end of the human race.

In 2015, Bill Gates was of the same view:

I am in the camp that is concerned about superintelligence. First the machines will do a lot of jobs for us and not be superintelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this, and don’t understand why some people are not concerned.

And what had Musk, Tesla’s CEO, said?

We should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that… Increasingly, scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.

The reality is more trivial. In March 2016, Microsoft introduced Tay – an AI-based chat robot – to Twitter. It had to be removed only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil Hitler-loving, Holocaust-denying, incestual-sex-promoting, ‘Bush did 9/11’-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.
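To see the failure mode in miniature, consider a toy sketch in Python (purely illustrative; Tay’s actual architecture was far more sophisticated) of a chatbot that learns word transitions from whatever users send it, with no filtering at all:

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """A word-level Markov chain that 'learns' from every message it receives.
    Like kitchen paper, it absorbs whatever it is fed: no understanding,
    no judgement, only statistics over the incoming stream."""

    def __init__(self):
        self.transitions = defaultdict(list)

    def absorb(self, message: str) -> None:
        words = message.split()
        for current, following in zip(words, words[1:]):
            self.transitions[current].append(following)

    def reply(self, seed: str, max_words: int = 10) -> str:
        word, output = seed, [seed]
        for _ in range(max_words):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

bot = NaiveChatBot()
bot.absorb("the weather is lovely today")
bot.absorb("the weather is awful and everyone is awful")  # garbage in...
print(bot.reply("the"))  # ...garbage out: the bot mirrors its inputs
```

Feed such a system abuse and it will parrot abuse; statistics without semantics cannot tell the difference.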

This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

Let me be more specific. Philosophy doesn’t do nuances well. It might fancy itself a model of precision and finely honed distinctions, but what it really loves are polarisations and dichotomies. Internalism or externalism, foundationalism or coherentism, trolley left or right, zombies or not zombies, observer-relative or observer-independent, possible or impossible worlds, grounded or ungrounded … Philosophy might preach the inclusive vel (‘girls or boys may play’) but too often indulges in the exclusive aut aut (‘either you like it or you don’t’).

The current debate about AI is a case in point. Here, the dichotomy is between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen (I am the happy owner of all three). Think instead of the false Maria in Metropolis (1927); HAL 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants; C-3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999); or the disembodied Samantha in Her (2013). You’ve got the picture. Believers in true AI and in Good’s ‘intelligence explosion’ belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let’s have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.

Singularitarians believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the world: Good fighting Evil, apocalyptic overtones, the urgency of ‘we must do something now or it will be too late’, an eschatological perspective of human salvation, and an appeal to fears and ignorance.

Put all this in a context where people are rightly worried about the impact of idiotic digital technologies on their lives, especially in the job market and in cyberwars, and where mass media daily report new gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a digital opiate for the masses.

Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble (not merely ‘could’, as stated above by Hawking). Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.

At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial ultraintelligence could develop, couldn’t it? Yes it could. But this ‘could’ is mere logical possibility – as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between ‘I could be sick tomorrow’ when I am already feeling unwell, and ‘I could be a butterfly that dreams it’s a human being.’

There is no contradiction in assuming that a dead relative you’ve never heard of has left you $10 million. That could happen. So? Contradictions, like happily married bachelors, aren’t possible states of affairs, but non-contradictions, like extra-terrestrial agents living among us so well-hidden that we never discovered them, can still be dismissed as utterly crazy. In other words, the ‘could’ is not the ‘could happen’ of an earthquake, but the ‘it isn’t true that it couldn’t happen’ of thinking that you are the first immortal human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone provides evidence to the contrary, and shows that there is something in our current and foreseeable understanding of computer science that should lead us to suspect that the emergence of artificial ultraintelligence is truly plausible.

Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and other real and worrisome issues about computational technologies that are coming to dominate human life, from education to employment, from entertainment to conflicts. From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more tasks that we currently perform ourselves.

If all other arguments fail, Singularitarians are fond of throwing in some maths. A favourite reference is Moore’s Law. This is the empirical claim that, in the development of digital computers, the number of transistors on integrated circuits doubles approximately every two years. The outcome has so far been more computational power for less. But things are changing. Technical difficulties in nanotechnology present serious manufacturing challenges. There is, after all, a limit to how small things can get before they simply melt. Moore’s Law no longer holds. Just because something grows exponentially for some time does not mean that it will continue to do so forever, as The Economist put it in 2014:

Throughout recorded history, humans have reigned unchallenged as Earth’s dominant species. Might that soon change? Turkeys, heretofore harmless creatures, have been exploding in size, swelling from an average 13.2lb (6kg) in 1929 to over 30lb today. On the rock-solid scientific assumption that present trends will persist, The Economist calculates that turkeys will be as big as humans in just 150 years. Within 6,000 years, turkeys will dwarf the entire planet. Scientists claim that the rapid growth of turkeys is the result of innovations in poultry farming, such as selective breeding and artificial insemination. The artificial nature of their growth, and the fact that most have lost the ability to fly, suggest that not all is lost. Still, with nearly 250m turkeys gobbling and parading in America alone, there is cause for concern. This Thanksgiving, there is but one prudent course of action: eat them before they eat you.

From Turkzilla to AIzilla, the step would be small, were it not for the fact that a growth curve can easily be sigmoid, with an initial stage of growth that is approximately exponential, followed by saturation, slower growth, maturity and, finally, no further growth. But I suspect that the representation of sigmoid curves might be blasphemous for Singularitarians.
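The difference between the two curves is easy to see numerically. Here is a minimal sketch (the parameter values are arbitrary, chosen only for illustration):

```python
import math

def exponential(t: float, rate: float = 0.5) -> float:
    """Unbounded growth: doubles at a fixed interval, forever."""
    return math.exp(rate * t)

def sigmoid(t: float, rate: float = 0.5, capacity: float = 100.0,
            midpoint: float = 12.0) -> float:
    """Logistic growth: rises at the same exponential rate early on,
    then saturates as it approaches `capacity`."""
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  sigmoid={sigmoid(t):6.1f}")
```

Early on, both columns grow by the same constant factor; then the sigmoid flattens while the exponential runs away. From inside the early stage, the data alone cannot tell you which curve you are on.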

Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in leisured societies, who seem to forget about real evils oppressing humanity and our planet. One example will suffice: almost 700 million people have no access to safe water. This is a major threat to humanity. Oh, and just in case you thought predictions by experts were a reliable guide, think twice. There are many staggeringly wrong technological predictions by experts (see some hilarious ones from David Pogue and on Cracked.com). In 2004 Gates stated: ‘Two years from now, spam will be solved.’ And in 2011 Hawking declared that ‘philosophy is dead’ (so what’s this you are reading?).

The prediction of which I am most fond is by Robert Metcalfe, co-inventor of Ethernet and founder of the digital electronics manufacturer 3Com. In 1995 he promised to ‘eat his words’ if proved wrong that ‘the internet will soon go supernova and in 1996 will catastrophically collapse’. A man of his word, in 1997 he publicly liquefied his article in a food processor and drank it. I wish Singularitarians were as bold and coherent as him.

Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian prophecies, disbelievers – AItheists – make it their mission to prove once and for all that any kind of faith in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.

This is why there is so much that computers (still) cannot do, loosely the title of several publications – Ira Wilson (1970); Hubert Dreyfus (1972; 1979); Dreyfus (1992); David Harel (2000); John Searle (2014) – though what precisely they can’t do is a conveniently movable target. It is also why they are unable to process semantics (of any language, Chinese included, no matter what Google Translate achieves). This proves that there is absolutely nothing to discuss, let alone worry about. There is no genuine AI, so a fortiori there are no problems caused by it. Relax and enjoy all these wonderful electric gadgets.

AItheists’ faith is as misplaced as the Singularitarians’. Both Churches have plenty of followers in California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of the world’s most important digital companies flourish side by side. This might not be accidental. When there is big money involved, people easily get confused. For example, Google has been buying AI tech companies as if there were no tomorrow (disclaimer: I am a member of Google’s Advisory Council on the right to be forgotten), so surely Google must know something about the real chances of developing a computer that can think, that we, outside ‘The Circle’, are missing? Eric Schmidt, Google’s executive chairman, fuelled this view, when he told the Aspen Institute in 2013: ‘Many people in AI believe that we’re close to [a computer passing the Turing test] within the next five years.’

The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell the difference between the two from their answers, then the robot passes the test. It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s. Let me offer a bet. I hate aubergine (eggplant), but I shall eat a plate of it if a software program passes the Turing test and wins the Loebner Prize gold medal before 16 July 2018. It is a safe bet.
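The protocol itself is simple enough to sketch. Here is a toy harness in Python; the canned agents and questions are hypothetical stand-ins, nothing like a real Loebner-style contest:

```python
import random

def run_turing_test(ask_question, human, machine, num_questions=3):
    """The imitation game in miniature: the judge sees answers from two
    hidden agents, A and B, and must say which one is the machine."""
    labelled = {"A": human, "B": machine}
    if random.random() < 0.5:          # hide which label is which
        labelled = {"A": machine, "B": human}

    for i in range(num_questions):
        q = ask_question(i)
        print(f"Q: {q}")
        print(f"  A: {labelled['A'](q)}")
        print(f"  B: {labelled['B'](q)}")

    guess = input("Which agent is the machine, A or B? ").strip().upper()
    if labelled.get(guess) is machine:
        print("Caught: the machine fails the test.")
    else:
        print("Fooled: the machine passes.")

# Hypothetical stand-ins; a real test would allow free conversation.
run_turing_test(
    ask_question=lambda i: ["What does coffee taste like?",
                            "Tell me about your childhood.",
                            "Why is a raven like a writing-desk?"][i],
    human=lambda q: "Hard to put into words, honestly.",
    machine=lambda q: "I am unable to answer that question.",
)
```

Passing requires only that the judge cannot tell the two apart, which is why it marks a necessary rather than a sufficient condition for intelligence.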

Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that introduced his test, the question ‘Can a machine think?’ is ‘too meaningless to deserve discussion’. (Ironically, or perhaps presciently, that question is engraved on the Loebner Prize medal.) This holds true, no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason.

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
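The best known of these results is Turing’s halting problem. The standard diagonal argument fits in a few lines of Python:

```python
# Suppose, for contradiction, that someone hands us a total function
#   halts(program, argument) -> bool
# that always correctly predicts whether program(argument) terminates.

def build_trouble(halts):
    """Diagonalisation: build a program that defeats the claimed decider."""
    def trouble(p):
        # Do the opposite of whatever the decider predicts about p run on itself.
        if halts(p, p):
            while True:    # predicted to halt? loop forever instead
                pass
        return None        # predicted to loop? halt immediately instead

    # Now consider trouble(trouble):
    #   if halts(trouble, trouble) is True,  trouble(trouble) loops forever;
    #   if halts(trouble, trouble) is False, trouble(trouble) halts at once.
    # Either way the decider is wrong on at least one input, so no total,
    # always-correct `halts` can exist.
    return trouble
```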

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and models of computation on the other are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go, and competing in the quiz show Jeopardy!, better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
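The correspondence can be glimpsed even in ordinary typed code: read the type Callable[[A], B] as the proposition ‘A implies B’, and a program of that type as its proof. A minimal sketch in Python’s typing notation:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def modus_ponens(a_implies_b: Callable[[A], B], a: A) -> B:
    """From proofs of 'A implies B' and 'A', obtain a proof of B:
    under Curry-Howard, this is just function application."""
    return a_implies_b(a)

def transitivity(a_implies_b: Callable[[A], B],
                 b_implies_c: Callable[[B], C]) -> Callable[[A], C]:
    """'A implies B' and 'B implies C' yield 'A implies C':
    building the proof is function composition."""
    return lambda a: b_implies_c(a_implies_b(a))
```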

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.

This is what I have defined as the Fourth Revolution in our self-understanding. We are not at the centre of the Universe (Copernicus), of the biological kingdom (Charles Darwin), or of rationality (Sigmund Freud). And after Turing, we are no longer at the centre of the infosphere, the world of information processing and smart agency, either. We share the infosphere with digital technologies. These are ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us re-evaluate human exceptionality and our special role in the Universe, which remains unique. We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank.

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.
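Route-finding makes the point concrete. A handful of lines of classical algorithmics (a minimal Dijkstra sketch over a made-up road graph; the place names and travel times are invented) finds the fastest path by sheer bookkeeping, with no notion of what a road or an office is:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: exhaustive bookkeeping over stored distances."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, minutes in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
    return None

# A hypothetical commute; edge weights are minutes.
roads = {
    "home":        {"ring_road": 10, "high_street": 7},
    "high_street": {"ring_road": 2, "office": 20},
    "ring_road":   {"office": 12},
}
print(fastest_route(roads, "home", "office"))
# -> (21, ['home', 'high_street', 'ring_road', 'office'])
```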

Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations. AlphaGo, the computer program developed by Google DeepMind, won the boardgame Go against the world’s best player because it could use a database of around 30 million moves and play thousands of games against itself, ‘learning’ how to improve its performance. It is like a two-knife system that can sharpen itself. What’s the difference? The same as between you and the dishwasher when washing the dishes. What’s the consequence? That any apocalyptic vision of AI can be disregarded. We are and shall remain, for any foreseeable future, the problem, not our technology. So we should concentrate on the real challenges.
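The feedback loop, output fed back in as input, is easy to sketch. The toy learner below plays Nim rather than Go (AlphaGo’s real pipeline combined deep neural networks with tree search, and is only loosely echoed here); it improves in the same bootstrapping way, by recording which moves won in its own past games:

```python
import random
from collections import defaultdict

WINS = defaultdict(int)   # (pile, move) -> self-play wins after this move
PLAYS = defaultdict(int)  # (pile, move) -> times this move was tried

def choose(pile, explore=0.1):
    """Take 1-3 stones: usually the move with the best win-rate so far."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)            # keep exploring occasionally
    return max(moves, key=lambda m: WINS[(pile, m)] / (PLAYS[(pile, m)] or 1))

def self_play(pile=10):
    """One game of Nim against itself; whoever takes the last stone wins."""
    history, player = {0: [], 1: []}, 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            winner = player
        player = 1 - player
    for p in (0, 1):
        for state_move in history[p]:
            PLAYS[state_move] += 1
            if p == winner:
                WINS[state_move] += 1          # the output becomes new input

for _ in range(50_000):
    self_play()
print("Learned move from a pile of 10:", choose(10, explore=0.0))
```

No insight into the game is involved anywhere: the program simply sharpens its own statistics against itself, knife against knife.

By way of conclusion, let me list five of the real challenges, all equally important.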

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AI’s stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AI’s predictive power work for freedom and autonomy. Marketing products, influencing behaviours, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that ‘we shape our buildings and afterwards our buildings shape us’. This applies to the infosphere and its smart technologies as well.

Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante’s Inferno: ‘Speak not of them, but look, and pass them by.’ For the world needs some good philosophy, and we need to take care of more pressing problems.