In central London this spring, eight of the world’s greatest minds performed on a dimly lit stage in a wood-panelled theatre. An audience of hundreds watched in hushed reverence. This was the closing stretch of the 14-round Candidates’ Tournament, to decide who would take on the current chess world champion, Viswanathan Anand, later this year.
Each round took a day: one game could last seven or eight hours. Sometimes both players would be hunched over their board together, elbows on table, splayed fingers propping up heads as though to support their craniums against tremendous internal pressure. At times, one player would lean forward while his rival slumped back in an executive leather chair like a bored office worker, staring into space. Then the opponent would make his move, stop his clock, and stand up, wandering around to cast an expert glance over the positions in the other games before stalking upstage to pour himself more coffee. On a raised dais, inscrutable, sat the white-haired arbiter, the tournament’s presiding official. Behind him was a giant screen showing the four current chess positions. So proceeded the fantastically complex slow-motion violence of the games, and the silently intense emotional theatre of their players.
When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback?
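The kind of exhaustive calculation that engines perform can be sketched as a minimax search: score the end positions, then assume each side picks the reply that is best for it. This toy example is an illustration of the principle only — the hand-built tree and its scores are invented, and bear no relation to any real engine’s code:

```python
# A toy minimax search over a hand-built game tree, illustrating the
# principle behind chess engines: leaves carry evaluations, and each
# player chooses the branch that is best for them assuming a perfect
# reply. The tree and its values are invented for illustration.

def minimax(node, maximising):
    if isinstance(node, (int, float)):   # leaf: a position's evaluation
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# A two-ply tree: the maximiser chooses a branch, the minimiser replies.
tree = [
    [3, 12],   # branch A: opponent will answer with 3
    [2, 8],    # branch B: opponent will answer with 2
    [14, 5],   # branch C: opponent will answer with 5
]
print(minimax(tree, maximising=True))  # → 5: branch C is best
```

Real engines add deep pruning and sophisticated evaluation on top of this skeleton, but the finiteness the paragraph mentions is exactly why the skeleton works in principle.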
Such a dismissive attitude would be in tune with the spirit of the times. Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way. Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish.
If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment
But in chess, at least, the algorithm has not displaced human judgment. The imperfectly human players who contested the last round of the Candidates’ Tournament — in a thrilling finish that, thanks to unusual tiebreak rules, confirmed the 22-year-old Norwegian Magnus Carlsen as the winner, ahead of former world champion Vladimir Kramnik — were watched by an online audience of 100,000 people. In fact, the host of the streamed coverage, the chatty and personable international master Lawrence Trent, pointedly refused to use a computer engine (which he called ‘the beast’) for his own analyses and predictions. The idea, he explained, is to try to figure things out for yourself. During a break in the commentary room on the day I was there, Trent was eating crisps and still eagerly discussing variations with his plummily amusing co-presenter, Nigel Short (who himself had contested the World Championship against Kasparov in 1993). ‘He’ll find Qf4; it’s not difficult to find,’ Short assured Trent. ‘Ng8, then it’s…’ ‘It’s game over.’ ‘Game over!’
Chess is an Olympian battle of wits. As with any sport, the interest lies in watching profoundly talented humans operating at the limits of their capability. There does exist a cyborg version of the game, dubbed ‘advanced chess’, in which humans are allowed to use computers while playing. But it is profoundly boring to watch, like a contest over who can use spreadsheet software more effectively, and hasn’t caught on. The ‘beast’ can be a useful helpmeet — Veselin Topalov, a previous challenger for Anand’s world title, used a 10,000-CPU monster in his preparation for that match, which he still lost — but it’s never going to be the main event.
This is a lesson that the algorithm-boosters in the wider culture have yet to learn. And outside the Platonically pure cosmos of chess, when we seek to hand over our decision-making to automatic routines in areas that have concrete social and political consequences, the results might be troubling indeed.
At first thought, it seems like a pure futuristic boon — the idea of a car that drives itself, currently under development by Google. Already legal in Nevada, Florida and California, computerised cars will be able to drive faster and closer together, reducing congestion while also being safer. They’ll drop you at your office then go and park themselves. What’s not to like? Well, for a start, as the mordant critic of computer-aided ‘solutionism’ Evgeny Morozov points out, the consequences for urban planning might be undesirable to some. ‘Would self-driving cars result in inferior public transportation as more people took up driving?’ he wonders in his new book, To Save Everything, Click Here (2013).
More recently, Gary Marcus, professor of psychology at New York University, offered a vivid thought experiment in The New Yorker. Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children?
What Marcus’s example demonstrates is the fact that driving a car is not simply a technical operation, of the sort that machines can do more efficiently. It is also a moral operation. (His example is effectively a kind of ‘trolley problem’, of the sort that has lately been fashionable in moral philosophy.) If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment.
Meanwhile, as Morozov relates, a single Californian company called Impermium provides software to tens of thousands of websites to automatically flag online comments for ‘not only spam and malicious links, but all kinds of harmful content — such as violence, racism, flagrant profanity, and hate speech’. How do Impermium’s algorithms decide exactly what should count as ‘hate speech’ or obscenity? No one knows, because the company, quite understandably, isn’t going to give away its secrets. Yet rather than pursuing mere lexicographical analysis, such a system of automated pre-censorship is, again, making moral judgments.
If self-driving cars and speech-policing systems are going to make hard moral decisions for us, we have a serious stake in knowing exactly how they are programmed to do it. We are unlikely to be content simply to trust Google, or any other company, not to code any evil into its algorithms. For this reason, Morozov and other thinkers say that we need to create a class of ‘algorithmic auditors’ — trusted representatives of the public who can peer into the code to see what kinds of implicit political and ethical judgments are buried there, and report their findings back to us. This is a good idea, though it poses practical problems about how companies can retain the commercial edge provided by their computerised secret sauce if they have to open up their algorithms to quasi-official scrutiny.
If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’
A further problem is that some algorithms positively must be kept under wraps in order to work properly. It is already possible, for example, for malicious operators to ‘game’ Google’s autocomplete results — sending abusive or libellous descriptions to the top of Google’s suggestions when you type a person’s name — and lawsuits from people affected in this way have already forced the company to delve into the system and change such examples manually. If it were made public exactly how Google’s PageRank algorithm computes the authority of web pages, or how Twitter’s ‘trending’ algorithm determines the popularity of subjects, then unscrupulous self-marketers or vengeful exes would soon be gaming those algorithms for their own purposes too. The vast majority of users would lose out, because the systems would become less reliable.
And it doesn’t necessarily require a malicious individual gaming a system for algorithms to get uncomfortably personal. Automatic analysis of our smartphone geolocation, internet-browsing and social-media data-trails grows ever more sophisticated, allowing demographic categories to be thin-sliced ever more precisely. From such information it is possible to infer personal details (such as sexual orientation or use of illegal drugs) that have not been explicitly supplied, and sometimes to identify unique individuals. Even when such information is simply used to target adverts more accurately, the consequences can be uncomfortable. Last year, the journalist Charles Duhigg related a telling anecdote in an article for The New York Times called ‘How Companies Learn Your Secrets’. A decade ago, the American retailer Target sent promotional baby-care vouchers to a teenage girl in Minneapolis. Her father was so outraged, he went to the shop to complain. The manager was equally taken aback and apologised; a few days later, he called the family to apologise again. This time, it was the father who offered an apology: his daughter really was pregnant, and Target’s ‘predictive analytics’ system knew it before he did.
Such automated augury might be considered relatively harmless if its use is confined to figuring out what products we might like to buy. But it is not going to stop there. One day in the near future — perhaps this has already happened — an innocent crime novelist researching bloody techniques for his latest fictional serial killer will find armed men banging on his door in the middle of the night, because he left a data trail that caused lights to flash red in some preventive-policing algorithm. Perhaps a few distressed writers is a price we are willing to pay to prevent more murders. But predictive crime prevention is an area that leads rapidly to a dystopian sci-fi vision like that of the film Minority Report (2002).
In Baltimore and Philadelphia, software is already being used to predict which prisoners will reoffend if released. The software draws on a crime database, along with variables including geographic location, type of crime previously committed, and age of prisoner at previous offence. In so doing, according to a report in Wired in January this year, ‘The software aims to replace the judgments parole officers already make based on a parolee’s criminal record.’ Outsourcing this kind of moral judgment, where a person’s liberty is at stake, understandably makes some people uncomfortable. First, we don’t yet know whether the system is more accurate than humans. Secondly, even if it is more accurate but less than completely accurate, it will inevitably produce false positives — resulting in the continuing incarceration of people who wouldn’t have reoffended. Such false positives undoubtedly occur, too, in the present system of human judgment, but at least we might feel that we can hold those making the decisions responsible. How do you hold an algorithm responsible?
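To make the false-positive worry concrete, here is a deliberately crude sketch of a risk score built from the kinds of variables mentioned above. Everything in it — the weights, the threshold, the variable names — is an invented assumption for illustration, not the actual software’s method:

```python
# Illustrative only: a toy recidivism risk score using the sorts of
# inputs the article mentions (location, prior offence type, age at
# prior offence). All weights and the threshold are invented.

def risk_score(high_crime_area, violent_prior, age_at_prior):
    score = 0.0
    score += 0.3 if high_crime_area else 0.0
    score += 0.4 if violent_prior else 0.0
    score += 0.3 if age_at_prior < 21 else 0.1
    return score

def flag_for_parole_review(score, threshold=0.6):
    # Anyone at or above the threshold is flagged; a false positive is
    # a flagged person who would not, in fact, have reoffended.
    return score >= threshold

print(flag_for_parole_review(risk_score(True, False, 19)))  # → True
```

The point of the sketch is that the threshold is a policy choice hidden inside code: move it, and the balance between false positives and false negatives moves with it — which is precisely the moral judgment being outsourced.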
Still more science-fictional are recent reports claiming that brain scans might be able to predict recidivism by themselves. According to a press release for the research, conducted by the American non-profit organisation the Mind Research Network, ‘inmates with relatively low anterior cingulate activity were twice as likely to reoffend than inmates with high-brain activity in this region’. Twice as likely, of course, is not certain. But imagine, for the sake of argument, that eventually a 100 per cent correlation could be determined between certain brain states and future recidivism. Would it then be acceptable to deny people their freedom on such an algorithmic basis? If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’. In a different context, such algorithm-driven diagnosis could be used positively: according to one recent study at Duke University in North Carolina, there might be a neural signature for psychopathy, which the researchers at the laboratory of neurogenetics suggest could be used to devise better treatments. But to rely on such an algorithm for predicting recidivism is to accept that people should be locked up simply on the basis of facts about their physiology.
If we erect algorithms as our ultimate judges and arbiters, we face the threat of difficulties not only in law-enforcement but also in culture. In the latter realm, the potential unintended consequences are not as serious as depriving an innocent person of liberty, but they still might be regrettable. For if they become very popular, algorithmic systems could end up destroying what they feed on.
In the early days of Amazon, the company employed a panel of book critics, whose job was to recommend books to customers. When Amazon developed its algorithmic recommendation engine — an automated system based on data about what others had bought — sales shot up. So Amazon sacked the humans. Not many people are likely to weep hot tears over a few unemployed literary critics, but there still seems room to ask whether there is a difference between recommendations that lead to more sales, and recommendations that are better according to some other criterion — expanding readers’ horizons, for example, by introducing them to things they would never otherwise have tried. It goes without saying that, from Amazon’s point of view, ‘better’ is defined as ‘drives more sales’, but we might not all agree.
Algorithmic recommendation engines now exist not only for books, films and music but also for articles on the internet. There is so much out there that even the most popular human ‘curators’ cannot possibly keep on top of all of it. So what’s wrong with letting the bots have a go? Viktor Mayer-Schönberger is professor of internet governance and regulation at Oxford University; Kenneth Cukier is the data editor of The Economist. In their book Big Data (2013) — which also calls for algorithmic auditors — they sing the praises of one Californian company, Prismatic, that, in their description, ‘aggregates and ranks content from across the Web on the basis of text analysis, user preferences, social-network-related popularity, and big-data analytics’. In this way, the authors claim, the company is able to ‘tell the world what it ought to pay attention to better than the editors of The New York Times’. We might happily agree — so long as we concur with the implied judgment that what is most popular on the internet at any given time is what is most worth reading. Aficionados of listicles, spats between technology theorists, and cat-based modes of pageview trolling do not perhaps constitute the entire global reading audience.
So-called ‘aggregators’ — websites, such as the Huffington Post, that reproduce portions of articles from other media organisations — also deploy algorithms alongside human judgment to determine what to push under the reader’s nose. ‘The data,’ Mayer-Schönberger and Cukier explain admiringly, ‘can reveal what people want to read about better than the instincts of seasoned journalists’. This is true, of course, only if you believe that the job of a journalist is just to give the public what it already thinks it wants to read. Some, such as Cass Sunstein, the political theorist and Harvard professor of law, have long worried about the online ‘echo chamber’ phenomenon, in which people read only that which reinforces their currently held views. Improved algorithms seem destined to amplify such effects.
Some aggregator sites have also been criticised for paraphrasing too much of the original article and obscuring source links, making it difficult for most readers to read the whole thing at the original site. Still more remote from the source is news packaged by companies such as Summly — the iPhone app created by the British teenager Nick D’Aloisio — which used another company’s licensed algorithms to summarise news stories for reading on mobile phones. Yahoo recently bought Summly for US$30 million. However, the companies that produce news often depend on pageviews to sell the advertising that funds the production of their ‘content’ in the first place. So, to use algorithm-aided aggregators or summarisers in daily life might help to render the very creation of content less likely in the future. In To Save Everything, Click Here, Evgeny Morozov draws a provocative analogy with energy use:
Our information habits are not very different from our energy habits: spend too much time getting all your information from various news aggregators and content farms who merely repackage expensive content produced by someone else, and you might be killing the news industry in a way not dissimilar from how leaving gadgets in the standby mode might be quietly and unnecessarily killing someone’s carbon offsets.
Meanwhile in education, ‘massive open online courses’ known as MOOCs promise (or threaten) to replace traditional university teaching with video ‘lectures’ online. The Silicon Valley hype surrounding these MOOCs has been stoked by the release of new software that automatically marks students’ essays. Computerised scoring of multiple-choice tests has been around for a long time, but can prose essays really be assessed algorithmically? Currently, more than 3,500 academics in the US have signed an online petition that says no, pointing out:
Computers cannot ‘read’. They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organisation, clarity, and veracity, among others.
It would not be surprising if these educators felt threatened by the claim that software can do an important part of their job. The overarching theme of all MOOC publicity is the prospect of teaching more people (students) using fewer people (professors). Will what is left really be ‘teaching’ worth the name?
One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.
If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone — and there’s an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promise a psychotherapist in your pocket. Thus far they are not very intelligent, and require the user to do most of the work — though this second drawback could be said of many human counsellors too. Such apps hark back to one of the legendary milestones of ‘artificial intelligence’, the 1960s computer program called ELIZA. That system featured a mode in which it emulated Rogerian psychotherapy, responding to the user’s typed conversation with requests for amplification (‘Why do you say that?’) and picking up — with its ‘natural-language processing’ skills — on certain key words from the input. Rudimentary as it is, ELIZA can still seem spookily human. Its modern smartphone successors might be diverting, but this field presents an interesting challenge in the sense that, the more sophisticated it gets, the more potential for harm there will be. One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.
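ELIZA’s keyword trick is simple enough to reproduce in a few lines. This miniature version uses illustrative rules of my own devising, not Weizenbaum’s original 1966 script, but it shows the mechanism: match a pattern in the user’s input and reflect it back as a question:

```python
import re

# A miniature ELIZA-style responder: match a keyword pattern in the
# user's input and reflect the captured phrase back as a question.
# The rules below are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I),     "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Why do you say that?"   # the classic catch-all prompt

print(respond("I feel anxious about algorithms"))
# → Why do you feel anxious about algorithms?
```

That such a shallow mechanism can seem ‘spookily human’ is the whole point: the appearance of understanding is cheap, which is why the stakes rise so sharply once apps like these start dispensing real advice.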
What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century — that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm’s job is a relatively pure exercise in number-crunching, things can go alarmingly wrong.
Indeed, a backlash to algorithmic fetishism is already under way — at least in those areas where a dysfunctional algorithm’s effect is not some gradual and hard-to-measure social or cultural deterioration but an immediate difference to the bottom line of powerful financial organisations. High-frequency trading, where automated computer systems buy and sell shares very rapidly, can lead to the price of a security fluctuating wildly. Such systems were found to have contributed to the ‘flash crash’ of 2010, in which the Dow Jones index lost 9 per cent of its value in minutes. Last year, the New York Stock Exchange cancelled trades in six stocks whose prices had exhibited bizarre behaviour thanks to a rogue ‘algo’ — as the automated systems are known in the business — run by Knight Capital; as a result of this glitch, the company lost $440 million in 45 minutes. Regulatory authorities in Europe, Hong Kong and Australia are now proposing rules that would require such trading algorithms to be tested regularly; in India, an algo cannot even be deployed unless the National Stock Exchange is allowed to see it first and decides it is happy with how it works.
Here, then, are the first ‘algorithmic auditors’. Perhaps their example will prompt similar developments in other fields — culture, education, and crime — that are considerably more difficult to quantify, even when there is no immediate cash peril.
A casual kind of post-facto algorithmic auditing was already in evidence in London, at the Candidates’ Tournament. All the chess players gave press conferences after their games, analysing critical positions and showing what they were thinking. This often became a second contest in itself: players were reluctant to admit that they had missed anything (‘Of course, I saw that’), and vied to show they had calculated more deeply than their adversaries. On the day I attended, the amiable Anglophile Russian player (and cricket fanatic) Peter Svidler was discussing his colourful but peacefully concluded game with Israel’s Boris Gelfand, last year’s World Championship challenger. Juggling pieces on a laptop screen with a mouse, Svidler showed a complicated line that had been suggested by someone using a computer program. ‘This, apparently, is a draw,’ Svidler said, ‘but there’s absolutely no way anyone can work this out at the board’. The computer’s suggestion, in other words, was completely irrelevant to the game as a sporting exercise.
Now, as the rumpled Gelfand looked on with friendly interest, Svidler jumped to an earlier possible variation that he had considered pursuing during their game, ending up with a baffling position that might have led either to spectacular victory or chaotic defeat. ‘For me,’ he announced, ‘this will be either too funny … or not funny enough’. Everyone laughed. As yet, there is no algorithm for wry comedy.
13 May 2013