Slaves to the algorithm

3,800 words

'When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess.' Photo by Jeffrey Sylvester

Computers could take some tough choices out of our hands, if we let them. Is there still a place for human judgment?

Steven Poole is a British journalist, broadcaster and composer. His latest book is Who Touched Base in My Thought Shower? (2013).


In central London this spring, eight of the world’s greatest minds performed on a dimly lit stage in a wood-panelled theatre. An audience of hundreds watched in hushed reverence. This was the closing stretch of the 14-round Candidates’ Tournament, to decide who would take on the current chess world champion, Viswanathan Anand, later this year.

Each round took a day: one game could last seven or eight hours. Sometimes both players would be hunched over their board together, elbows on table, splayed fingers propping up heads as though to support their craniums against tremendous internal pressure. At times, one player would lean forward while his rival slumped back in an executive leather chair like a bored office worker, staring into space. Then the opponent would make his move, stop his clock, and stand up, wandering around to cast an expert glance over the positions in the other games before stalking upstage to pour himself more coffee. On a raised dais, inscrutable, sat the white-haired arbiter, the tournament’s presiding official. Behind him was a giant screen showing the four current chess positions. So proceeded the fantastically complex slow-motion violence of the games, and the silently intense emotional theatre of their players.

When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback?

Such a dismissive attitude would be in tune with the spirit of the times. Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way. Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish.

But in chess, at least, the algorithm has not displaced human judgment. The imperfectly human players who contested the last round of the Candidates’ Tournament — in a thrilling finish that, thanks to unusual tiebreak rules, confirmed the 22-year-old Norwegian Magnus Carlsen as the winner, ahead of former world champion Vladimir Kramnik — were watched by an online audience of 100,000 people. In fact, the host of the streamed coverage, the chatty and personable international master Lawrence Trent, pointedly refused to use a computer engine (which he called ‘the beast’) for his own analyses and predictions. The idea, he explained, is to try to figure things out for yourself. During a break in the commentary room on the day I was there, Trent was eating crisps and still eagerly discussing variations with his plummily amusing co-presenter, Nigel Short (who himself had contested the World Championship against Kasparov in 1993). ‘He’ll find Qf4; it’s not difficult to find,’ Short assured Trent. ‘Ng8, then it’s…’ ‘It’s game over.’ ‘Game over!’

Chess is an Olympian battle of wits. As with any sport, the interest lies in watching profoundly talented humans operating at the limits of their capability. There does exist a cyborg version of the game, dubbed ‘advanced chess’, in which humans are allowed to use computers while playing. But it is profoundly boring to watch, like a contest over who can use spreadsheet software more effectively, and hasn’t caught on. The ‘beast’ can be a useful helpmeet — Veselin Topalov, a previous challenger for Anand’s world title, used a 10,000-CPU monster in his preparation for that match, which he still lost — but it’s never going to be the main event.

This is a lesson that the algorithm-boosters in the wider culture have yet to learn. And outside the Platonically pure cosmos of chess, when we seek to hand over our decision-making to automatic routines in areas that have concrete social and political consequences, the results might be troubling indeed.

At first thought, it seems like a pure futuristic boon — the idea of a car that drives itself, currently under development by Google. Already legal in Nevada, Florida and California, computerised cars will be able to drive faster and closer together, reducing congestion while also being safer. They’ll drop you at your office then go and park themselves. What’s not to like? Well, for a start, as the mordant critic of computer-aided ‘solutionism’ Evgeny Morozov points out, the consequences for urban planning might be undesirable to some. ‘Would self-driving cars result in inferior public transportation as more people took up driving?’ he wonders in his new book, To Save Everything, Click Here (2013).

More recently, Gary Marcus, professor of psychology at New York University, offered a vivid thought experiment in The New Yorker. Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children?

What Marcus’s example demonstrates is the fact that driving a car is not simply a technical operation, of the sort that machines can do more efficiently. It is also a moral operation. (His example is effectively a kind of ‘trolley problem’, of the sort that has lately been fashionable in moral philosophy.) If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment.

Meanwhile, as Morozov relates, a single Californian company called Impermium provides software to tens of thousands of websites to automatically flag online comments for ‘not only spam and malicious links, but all kinds of harmful content — such as violence, racism, flagrant profanity, and hate speech’. How do Impermium’s algorithms decide exactly what should count as ‘hate speech’ or obscenity? No one knows, because the company, quite understandably, isn’t going to give away its secrets. Yet rather than pursuing mere lexicographical analysis, such a system of automated pre-censorship is, again, making moral judgments.

If self-driving cars and speech-policing systems are going to make hard moral decisions for us, we have a serious stake in knowing exactly how they are programmed to do it. We are unlikely to be content simply to trust Google, or any other company, not to code any evil into its algorithms. For this reason, Morozov and other thinkers say that we need to create a class of ‘algorithmic auditors’ — trusted representatives of the public who can peer into the code to see what kinds of implicit political and ethical judgments are buried there, and report their findings back to us. This is a good idea, though it poses practical problems about how companies can retain the commercial edge provided by their computerised secret sauce if they have to open up their algorithms to quasi-official scrutiny.

A further problem is that some algorithms positively must be kept under wraps in order to work properly. It is already possible, for example, for malicious operators to ‘game’ Google’s autocomplete results — sending abusive or libellous descriptions to the top of Google’s suggestions when you type a person’s name — and lawsuits from people affected in this way have already forced the company to delve into the system and change such examples manually. If it were made public exactly how Google’s PageRank algorithm computes the authority of web pages, or how Twitter’s ‘trending’ algorithm determines the popularity of subjects, then unscrupulous self-marketers or vengeful exes would soon be gaming those algorithms for their own purposes too. The vast majority of users would lose out, because the systems would become less reliable.
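Google’s production ranking is secret, but the core PageRank idea was published openly, and a toy version shows what ‘authority’ means here: a page’s score is the probability that a tirelessly clicking random surfer ends up on it. The four-page web and the iteration count below are invented for illustration; only the damping factor of 0.85 comes from the original PageRank paper.

```python
# Minimal PageRank sketch over a hypothetical four-page web.
# Each page's authority is the long-run chance a random surfer lands on it.

links = {            # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}   # start uniform
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Rank flowing into p: each page that links to p shares
            # its own rank equally among all its outgoing links.
            inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * inbound
        rank = new
    return rank

ranks = pagerank(links)
# "C", with three inbound links, ends up with the highest authority.
```

Once the link structure is known, ‘gaming’ the algorithm is just a matter of manufacturing inbound links, which is exactly why the production details stay hidden.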

And it doesn’t necessarily require a malicious individual gaming a system for algorithms to get uncomfortably personal. Automatic analysis of our smartphone geolocation, internet-browsing and social-media data-trails grows ever more sophisticated, and so we can thin-slice demographic categories ever more precisely. From such information it is possible to infer personal details (such as sexual orientation or use of illegal drugs) that have not been explicitly supplied, and sometimes to identify unique individuals. Even when such information is simply used to target adverts more accurately, the consequences can be uncomfortable. Last year, the journalist Charles Duhigg related a telling anecdote in an article for The New York Times called ‘How Companies Learn Your Secrets’. A decade ago, the American retailer Target sent promotional baby-care vouchers to a teenage girl in Minneapolis. Her father was so outraged, he went to the shop to complain. The manager was equally taken aback and apologised; a few days later, he called the family to apologise again. This time, it was the father who offered an apology: his daughter really was pregnant, and Target’s ‘predictive analytics’ system knew it before he did.

Such automated augury might be considered relatively harmless if its use is confined to figuring out what products we might like to buy. But it is not going to stop there. One day in the near future — perhaps this has already happened — an innocent crime novelist researching bloody techniques for his latest fictional serial killer will find armed men banging on his door in the middle of the night, because he left a data trail that caused lights to flash red in some preventive-policing algorithm. Perhaps a few distressed writers is a price we are willing to pay to prevent more murders. But predictive crime prevention is an area that leads rapidly to a dystopian sci-fi vision like that of the film Minority Report (2002).

In Baltimore and Philadelphia, software is already being used to predict which prisoners will reoffend if released. The software draws on a crime database, along with variables including geographic location, type of crime previously committed, and the prisoner’s age at the previous offence. According to a report in Wired in January this year, ‘The software aims to replace the judgments parole officers already make based on a parolee’s criminal record.’ Outsourcing this kind of moral judgment, where a person’s liberty is at stake, understandably makes some people uncomfortable. First, we don’t yet know whether the system is more accurate than humans. Secondly, even if it is more accurate but less than completely accurate, it will inevitably produce false positives — resulting in the continuing incarceration of people who wouldn’t have reoffended. Such false positives undoubtedly occur, too, in the present system of human judgment, but at least we might feel that we can hold those making the decisions responsible. How do you hold an algorithm responsible?
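No one outside the vendors knows how these parole tools actually weight their inputs. As a purely hypothetical sketch, using the variables named above but with invented weights, such a risk score might look like a simple logistic model:

```python
# Hypothetical recidivism risk score. The input variables come from the
# article; the weights and offset are invented for illustration only.
import math

def reoffence_risk(age_at_offence, prior_offences, violent_prior):
    # Weighted sum of factors, squashed into a 0-1 probability.
    score = (
        -0.04 * age_at_offence              # younger offenders score higher
        + 0.30 * prior_offences             # each prior offence raises risk
        + 0.80 * (1 if violent_prior else 0)
        - 0.5                               # baseline offset
    )
    return 1 / (1 + math.exp(-score))       # logistic function
```

Whatever threshold is chosen to turn the probability into a release-or-detain decision fixes the trade-off between false positives and false negatives; that choice of threshold is precisely where the moral judgment hides.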

Still more science-fictional are recent reports claiming that brain scans might be able to predict recidivism by themselves. According to a press release for the research, conducted by the American non-profit organisation the Mind Research Network, ‘inmates with relatively low anterior cingulate activity were twice as likely to reoffend than inmates with high-brain activity in this region’. Twice as likely, of course, is not certain. But imagine, for the sake of argument, that eventually a 100 per cent correlation could be determined between certain brain states and future recidivism. Would it then be acceptable to deny people their freedom on such an algorithmic basis? If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’. In a different context, such algorithm-driven diagnosis could be used positively: according to one recent study at Duke University in North Carolina, there might be a neural signature for psychopathy, which the researchers at the laboratory of neurogenetics suggest could be used to devise better treatments. But to rely on such an algorithm for predicting recidivism is to accept that people should be locked up simply on the basis of facts about their physiology.

If we erect algorithms as our ultimate judges and arbiters, we face the threat of difficulties not only in law-enforcement but also in culture. In the latter realm, the potential unintended consequences are not as serious as depriving an innocent person of liberty, but they still might be regrettable. For if they become very popular, algorithmic systems could end up destroying what they feed on.

In the early days of Amazon, the company employed a panel of book critics, whose job was to recommend books to customers. When Amazon developed its algorithmic recommendation engine — an automated system based on data about what others had bought — sales shot up. So Amazon sacked the humans. Not many people are likely to weep hot tears over a few unemployed literary critics, but there still seems room to ask whether there is a difference between recommendations that lead to more sales, and recommendations that are better according to some other criterion — expanding readers’ horizons, for example, by introducing them to things they would never otherwise have tried. It goes without saying that, from Amazon’s point of view, ‘better’ is defined as ‘drives more sales’, but we might not all agree.
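Amazon’s actual engine is proprietary, but a recommendation ‘based on data about what others had bought’ can be sketched with simple item-to-item co-occurrence counting. The baskets and titles below are invented:

```python
# Toy "customers who bought this also bought" recommender.
# Purchase baskets are hypothetical illustration data.
from collections import Counter
from itertools import combinations

purchases = [
    {"Moby-Dick", "Middlemarch"},
    {"Moby-Dick", "Middlemarch", "Ulysses"},
    {"Moby-Dick", "Dracula"},
]

# Count how often each pair of books appears in the same basket.
co_bought = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(book, n=2):
    # Rank the other books by how often they were bought alongside `book`.
    scores = Counter({other: c for (b, other), c in co_bought.items() if b == book})
    return [other for other, _ in scores.most_common(n)]
```

Notice that ‘better’ here can only ever mean ‘more frequently co-purchased’: a book nobody has yet bought alongside yours can never be recommended, which is exactly the horizon-narrowing worry raised above.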

Algorithmic recommendation engines now exist not only for books, films and music but also for articles on the internet. There is so much out there that even the most popular human ‘curators’ cannot possibly keep on top of all of it. So what’s wrong with letting the bots have a go? Viktor Mayer-Schönberger is professor of internet governance and regulation at Oxford University; Kenneth Cukier is the data editor of The Economist. In their book Big Data (2013) — which also calls for algorithmic auditors — they sing the praises of one Californian company, Prismatic, that, in their description, ‘aggregates and ranks content from across the Web on the basis of text analysis, user preferences, social-network-related popularity, and big-data analytics’. In this way, the authors claim, the company is able to ‘tell the world what it ought to pay attention to better than the editors of The New York Times’. We might happily agree — so long as we concur with the implied judgment that what is most popular on the internet at any given time is what is most worth reading. Aficionados of listicles, spats between technology theorists, and cat-based modes of pageview trolling do not perhaps constitute the entire global reading audience.

So-called ‘aggregators’ — websites, such as the Huffington Post, that reproduce portions of articles from other media organisations — also deploy algorithms alongside human judgment to determine what to push under the reader’s nose. ‘The data,’ Mayer-Schönberger and Cukier explain admiringly, ‘can reveal what people want to read about better than the instincts of seasoned journalists’. This is true, of course, only if you believe that the job of a journalist is just to give the public what it already thinks it wants to read. Some, such as Cass Sunstein, the political theorist and Harvard professor of law, have long worried about the online ‘echo chamber’ phenomenon, in which people read only that which reinforces their currently held views. Improved algorithms seem destined to amplify such effects.

Some aggregator sites have also been criticised for paraphrasing too much of the original article and obscuring source links, making it difficult for most readers to read the whole thing at the original site. Still more remote from the source is news packaged by companies such as Summly — the iPhone app created by the British teenager Nick D’Aloisio — which used another company’s licensed algorithms to summarise news stories for reading on mobile phones. Yahoo recently bought Summly for US$30 million. However, the companies that produce news often depend on pageviews to sell the advertising that funds the production of their ‘content’ in the first place. So, to use algorithm-aided aggregators or summarisers in daily life might help to render the very creation of content less likely in the future. In To Save Everything, Click Here, Evgeny Morozov draws a provocative analogy with energy use:
Our information habits are not very different from our energy habits: spend too much time getting all your information from various news aggregators and content farms who merely repackage expensive content produced by someone else, and you might be killing the news industry in a way not dissimilar from how leaving gadgets in the standby mode might be quietly and unnecessarily killing someone’s carbon offsets.

Meanwhile in education, ‘massive open online courses’ known as MOOCs promise (or threaten) to replace traditional university teaching with video ‘lectures’ online. The Silicon Valley hype surrounding these MOOCs has been stoked by the release of new software that automatically marks students’ essays. Computerised scoring of multiple-choice tests has been around for a long time, but can prose essays really be assessed algorithmically? Currently, more than 3,500 academics in the US have signed an online petition that says no, pointing out:
Computers cannot ‘read’. They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organisation, clarity, and veracity, among others.

It would not be surprising if these educators felt threatened by the claim that software can do an important part of their job. The overarching theme of all MOOC publicity is the prospect of teaching more people (students) using fewer people (professors). Will what is left really be ‘teaching’ worth the name?

If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone — and there’s an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promise a psychotherapist in your pocket. Thus far they are not very intelligent, and require the user to do most of the work — though this second drawback could be said of many human counsellors too. Such apps hark back to one of the legendary milestones of ‘artificial intelligence’, the 1960s computer program called ELIZA. That system featured a mode in which it emulated Rogerian psychotherapy, responding to the user’s typed conversation with requests for amplification (‘Why do you say that?’) and picking up — with its ‘natural-language processing’ skills — on certain key words from the input. Rudimentary as it is, ELIZA can still seem spookily human. Its modern smartphone successors might be diverting, but this field presents an interesting challenge in the sense that, the more sophisticated it gets, the more potential for harm there will be. One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice.
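ELIZA’s trick (keyword matching plus canned reflection, with no real understanding at all) fits in a few lines. The sketch below is a toy in the spirit of Weizenbaum’s program, not his original DOCTOR script:

```python
# Toy ELIZA-style responder: match a keyword pattern in the input and
# reflect it back as an open-ended question. Rules are illustrative.
import random
import re

RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)",   ["Why do you say you are {0}?"]),
    (r"\bmy (\w+)",    ["Tell me more about your {0}."]),
]
DEFAULTS = ["Why do you say that?", "Please go on."]

def respond(text):
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Echo the user's own words back inside a stock question.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)
```

That such a shallow mechanism can still seem ‘spookily human’ is the point: the therapeutic work is being done by the user, not the machine.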

What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century — that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm’s job is a relatively pure exercise in number-crunching, things can go alarmingly wrong.

Indeed, a backlash to algorithmic fetishism is already under way — at least in those areas where a dysfunctional algorithm’s effect is not some gradual and hard-to-measure social or cultural deterioration but an immediate difference to the bottom line of powerful financial organisations. High-frequency trading, where automated computer systems buy and sell shares very rapidly, can lead to the price of a security fluctuating wildly. Such systems were found to have contributed to the ‘flash crash’ of 2010, in which the Dow Jones index lost 9 per cent of its value in minutes. Last year, the New York Stock Exchange cancelled trades in six stocks whose prices had exhibited bizarre behaviour thanks to a rogue ‘algo’ — as the automated systems are known in the business — run by Knight Capital; as a result of this glitch, the company lost $440 million in 45 minutes. Regulatory authorities in Europe, Hong Kong and Australia are now proposing rules that would require such trading algorithms to be tested regularly; in India, an algo cannot even be deployed unless the National Stock Exchange is allowed to see it first and decides it is happy with how it works.

Here, then, are the first ‘algorithmic auditors’. Perhaps their example will prompt similar developments in other fields — culture, education, and crime — that are considerably more difficult to quantify, even when there is no immediate cash peril.

A casual kind of post-facto algorithmic auditing was already in evidence in London, at the Candidates’ Tournament. All the chess players gave press conferences after their games, analysing critical positions and showing what they were thinking. This often became a second contest in itself: players were reluctant to admit that they had missed anything (‘Of course, I saw that’), and vied to show they had calculated more deeply than their adversaries. On the day I attended, the amiable Anglophile Russian player (and cricket fanatic) Peter Svidler was discussing his colourful but peacefully concluded game with Israel’s Boris Gelfand, last year’s World Championship challenger. Juggling pieces on a laptop screen with a mouse, Svidler showed a complicated line that had been suggested by someone using a computer program. ‘This, apparently, is a draw,’ Svidler said, ‘but there’s absolutely no way anyone can work this out at the board’. The computer’s suggestion, in other words, was completely irrelevant to the game as a sporting exercise.

Now, as the rumpled Gelfand looked on with friendly interest, Svidler jumped to an earlier possible variation that he had considered pursuing during their game, ending up with a baffling position that might have led either to spectacular victory or chaotic defeat. ‘For me,’ he announced, ‘this will be either too funny … or not funny enough’. Everyone laughed. As yet, there is no algorithm for wry comedy.



  • Skanik

    We seem to be looking for Guardians for the Algorithms that drive us, our Society and
    our Technology.

    But how many humans have advanced degrees in the Humanities, Economics
    and Computer Science and are wise enough to know how to properly understand
    all that is going on at the speed computers calculate and predict and act?

    I have to say that I do not understand why so much trust is being put into computers
    and the programs/algorithms that run them. My computer 'screws up' every day
    and I often shake my head in wonderment as to why some programmer added this
    or that feature which can be activated just because my fingers slipped off a key.
    As for those who write the programs/algorithms...most of them are wise when it
    comes to computers but hopelessly naive when it comes to human needs and

    As for Computers and Chess - why not Ten by Ten Chess, where the two extra
    pieces are chosen at random before each game, so that the game is played more
    by humans than by computers?

    • M Collings

      Again technology is looking for us to abdicate our responsibilities and choices and hand them over to gatekeepers and philosopher kings. All in the name of that thin veneer called "convenience".

    • Austin Duke

      Your computer doesn't screw up every day; that's you making a user error. Your finger slipping off a key and triggering some prompt you didn't intend but nonetheless commanded isn't the computer screwing up, it's you making a mistake. We put trust in computers because of the vast efficiency they have provided us. Are they perfect? No. But they make far fewer mistakes at certain subsets of tasks than humans do, and that is why we trust them.

    • Jonas

      People like algorithms because we haven't learned how to blame them yet. E.g. see the response by Austin Duke. You can't sue an algorithm, so even if it's close to acceptable, it's preferable from the point of view of the business. It's all benefit no downside, until we develop a way to hold them accountable.

      It's the same reason Kings liked bureaucracies. Now even more nameless and faceless!

  • Lester

    There is no such thing as a value free end-result.

    Algorithms present the idea of a value-neutral process that produces a value-neutral end product. But all they really do is embed their values deeper within themselves. In this respect they reflect the idea of a kind of bureaucratic, technological, apolitical society that functions in a value-free zone. And of course this is powerful ideology in disguise.

    And interestingly algorithms reflect the reductionist idea that there is an ultimate correct action to take in any situation (hidden values again). But material reductionism also predicts a determinist universe which would inevitably take only the predetermined choice. So are algorithms pointless or are we supposed to believe they are creating a new free-will?

    I think the process of life - with its unfathomably complex conscious interactions, its possibly variable arrow of time (pre-cognition?), its deeply reactive creativity, and the fact that values are the bedrock of human existence - makes algorithms a dangerously reductive tool of submerged ideology.

    And finally, let's say that we create an algorithmic program that studies the data and concludes that growth-orientated, technology-based economies will destroy the biosphere, and that the only salvation is a value-laden, human-scale, steady-state economy which never employs algorithms at all. Will we pay any attention to that one, or are algorithms just tools for getting richer and disciplining the powerless?

    • M Collings

      Hear Hear Sir! Well said.

    • KaosGates

      Damn... speechless

    • G

      We already know that economic growthism is destroying the biosphere, and yet we do little to nothing about the growthism that's the underlying cause.

      Instead we create a new class of gods to replace the previous one: the Algorithm Gods and the Computer Gods, holy oracles of our times. But when they tell us what we can't bear to tell ourselves, which is that growthism is destroying the biosphere (see also _The Limits to Growth_), we ignore them. As we ignored the previous Gods when they told us to make peace and love one another and treat our enemies with compassion.

      Persons of good will always find a rationale for doing good.

      Persons of evil will always find a rationale for doing evil.

      They find their rationales in whatever is close at hand: a religion, a philosophy, a political ideology, or something else such as a computer or an algorithm. Same as it ever was... same as it ever was...

  • M Collings

    It's called abdicating responsibility for the very essence of what life is meant to be all about - choices.

    • ben

      When Lauda Air lost a plane, it took Niki Lauda some weeks in a simulator to prove that the computer program was faulty

      not much good for those that were lost


  • mirkolorenz

    I am an evangelist for "data-driven journalism" and well aware of the many perils that loom by applying the powerful wave of analysis tools that exist or are about to come.

    If you start looking for examples where data analysis is applied wrongly, you easily find plenty. For example, US intelligence did not know for a long time that the Russian economy was actually dwindling. For example, banks often have a false sense of both risk and security when investing, although they sit on huge piles of data.

    One aspect that I find missing in this very important discussion about perils and misapplication of analytical tools is that in the future these tools and apps could be "fiercely on our side" when aiming to predict outcomes and consulting us on actions. Like a watchdog, very loyal, very aggressive against potential break-ins into our privacy and wellbeing.

    Fiercely on our side is a quality that is seldom (maybe nowhere) found in today's tech set-up. Almost everything is contaminated by the marketing/sales-driven economy. Today we use a lot of services that provide a benefit, but in return for using them they tend to call home without our knowledge and report on our actions (as described in this article).

    What if that were to change? One idea here could be to talk about a "trust economy", where breach of trust is the ultimate product/service failure.

    • ericangel

      A fascinating suggestion. I think you're on to something. It would be quite the transformation, and I can't picture how it might come about, but worth thinking about for sure. Thx.

      • mirkolorenz

        Eric Angel and Edwin Lee: did see the comments, thx. The issue is that right now we have a marketing-driven approach that uses features delivered via algorithms for data collection. And the direction of that data (towards predicting our next buy) is a mono-culture. Agree with Edwin Lee that how this works is a more or less constant offense (against privacy).

        So, frankly, no answer how such a "trust economy" could work nor whether it would succeed. That would need (much) further thinking.

    • Edwin Lee

      In most cases, and I suspect this is one of them, technology tends to favor the offense rather than the defense. Those who are out to game the system can uncover and focus on weaknesses, the Achilles heels, which increase exponentially as a system's size and complexity increases. Defenders would have to find all weaknesses and correct them, which creates more weaknesses in the process, whereas the offenders need only find a weakness and exploit it.
      I don't see how we can ignore, or overcome the never ending evolutionary process of exploitation and defense. It too is what makes us human.

    • G

      "Fiercely on our side:" Brilliant.

      We already have bits and pieces of it today, in the form of crypto software, tracker-blocking software, and the like, but these require knowledge and skill and effort to use.

      Ideal case, something anyone can download and use with ease. One app that does "all of the above."

  • Ash

    These are good reasons why algorithms can't take over from humans, at the present time at least. In most of the cases described, the algorithm can be tweaked to better accommodate our preferences. Human reasoning that takes multiple factors, including emotions, into account is complex, but computers should be able to emulate it in the future, and there's no reason to believe this will be an undesirable state of things. These problems are bumps in the road.

    • cuzzin

      Bumps in the road that lead to where? The more algorithms take decisions for us, the less opportunity we have to grow as human beings. The only meaning algorithms will give to our lives will be that of consumerism/materialism/emptiness.

      • Austin Duke

        No. Algorithms in and of themselves won't bring meaning; they will instead free everyone to find meaning in life for themselves. Imagine a world in which everything from food production to energy production is automated; imagine a world in which a human needn't lift a finger at a task they do not wish to do. This is a world that algorithms can facilitate, and it is a world in which you have the freedom to discover for yourself what you value.

        Your conclusion is groundless.

        • cuzzin

          Interesting that you would bring up food. The people I know who cook their food by pushing a button actually lead far less satisfying lives - using the time saved to watch more TV. I am glad that I am able to add value to my life by the meals I make for myself and others.

          I am all for algorithms liberating humans from mindless tasks; we just have to be aware of what is meaningful and what isn't, lest we become mindless ourselves.

        • G

          Imagine a world where the owners of the computers and algorithms live like Roman gods, and seven billion humans are unemployed and unable to do anything to better their lot.

          So here's a moral quandary for you: How many dead African children, or for that matter dead British or American children, are worth another new ocean-liner-sized yacht for Larry Ellison or Sergey Brin or Mark Zuckerberg?

  • rameshraghuvanshi

    It is no wonder the computer defeated Anand at chess. A computer is a machine that never makes a mistake in what you teach it. Only man makes mistakes, because he has emotions, sensations, and weaknesses that act as barriers to his concentration.

    • Jonathan_Briggs

      A chess computer is a giant look-up table dedicated to one task and one task only; it has no self-awareness, so it cannot be distracted from the task in hand. It "looks" at the board, compares it to its memory, then calls up all known moves from that position, selects the one(s) that had the most successful outcome, and repeats till checkmate...
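      Strictly, only the opening "book" is a look-up table; for the rest of the game an engine searches future moves with minimax and an evaluation function. A minimal sketch, using an invented two-ply game tree standing in for chess (all names here are illustrative, not from any real engine):

```python
def minimax(position, depth, maximizing, evaluate, children_of):
    """Return the best score the side to move can force within `depth` plies."""
    children = children_of(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(c, depth - 1, not maximizing, evaluate, children_of)
              for c in children]
    return max(scores) if maximizing else min(scores)

# Toy two-ply "game": inner dicts are the opponent's replies, leaves are
# scores from the first player's point of view.
tree = {'a': {'a1': 3, 'a2': 12}, 'b': {'b1': 8, 'b2': 2}}
children_of = lambda pos: list(pos.values()) if isinstance(pos, dict) else []
evaluate = lambda pos: pos  # leaves already carry their score

best = minimax(tree, 2, True, evaluate, children_of)  # move 'a' guarantees 3
```

      The point is that the opponent's best replies are searched, not remembered: the engine assumes the minimizer answers each move as damagingly as possible, so move 'a' (worst case 3) beats move 'b' (worst case 2).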

  • Manuel Alves

    "It is also a moral operation"

    Can you argue that a human would make a better decision? I would not kill myself to save the bus...

    • ncgh

      Moral 'sense', as we hold it, is a product of evolution: a group of behaviors (somewhat variable and logically inconsistent) which, over many generations, favored the genetic code that produced it. Not necessarily the best solution, but somewhat time-tested.


  • Edwin Lee

    I think this is an excellent, thought-provoking article, to which I'd like to add a few thoughts. Computerized algorithms, some produced by humans and others by computers, will productively replace human judgments in more and more situations. It is a phenomenon similar to that of the industrial revolution, where machines replaced craftsmen. Both revolutions are fueled by immediate social benefits, and both have made obsolete some remarkable human skills.

    The algorithms will create winners and losers among humans. Some of the winners will get there by successfully gaming the computerized algorithms, or by gaming the human authors of those algorithms, just as some winners now game "human" algorithms in everyday life. For example, the first screening of job applicants at large corporations is done on their resumes by Human Resources people using crude templates to weed out obvious losers. An entire industry now teaches applicants how to write resumes that get through this screening process, regardless of the applicants' true characteristics, qualities and experiences. The whole game is to get past this filter and reach the interview, for which the applicant has also been coached and for which most of those doing the interviews are poorly coached. If the process yields truly productive workers or executives more than a third of the time, the company is indeed fortunate. However, even a successful organization must continually adapt its screening algorithms and improve its interview techniques, or its success rate will drop with time.

    The hiring process, whether it games humans performing their learned algorithms or games a computer program, is an evolutionary one that requires constant adapting by both parties. It is similar to the game between microbes and our immune systems. Those who develop "adaptable" algorithms should study how the immune system operates. Its multiple layers, which include human choices in the outermost layer, are well conceived and adequately adaptable for the most likely changes over the long haul.
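    The "crude template" resume screening described above can be sketched as a simple keyword filter. This is a hypothetical illustration, not any real HR system, and the keyword set is invented:

```python
import re

# Hypothetical sketch of screening by crude template: a resume passes only if
# it mentions every required keyword. This is exactly why an industry can
# coach applicants to echo the job advert and defeat the filter.
REQUIRED_KEYWORDS = {"python", "leadership", "agile"}

def passes_screen(resume_text):
    """Return True if the resume contains every required keyword."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return REQUIRED_KEYWORDS <= words

passes_screen("Seasoned engineer: Python, Agile delivery, team leadership")
passes_screen("Decade of systems experience managing large engineering teams")
```

    A coached resume that merely parrots the three keywords passes the first call; a genuinely strong resume that phrases things differently does not, which is the gaming dynamic in question.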

  • Alec

    So what's new? Surely the problem has been around since the first human jumped on a horse.

  • Josh

    A thoughtful article, thank you. I do wonder about one area not explicitly mentioned: the influence algorithms of all types will have on people's unconscious decisions and preferences. According to many neuroscientists, decisions are generally made outside of conscious awareness, and only later in the process are those same decisions "justified" by rational thought. The ability of algorithms to identify and influence these "deeper" decision-making processes may prove to be quite powerful. Of course, we (humans and algorithms both) would be co-evolving in a system where, at least for the near future, only one party would view itself as self-conscious. I am actually not sure whether this will result in a "positive" outcome or not; I suspect that prediction is currently beyond both human and algorithm.

    • Jonathan_Briggs

      See racing drivers. Their car control is often outside the rational, in that they don't consciously handle the tasks set for them by a car at the limit of adhesion; corrections are part of the whole entity, sensory organs setting in train physical actions as unthinking as the act of standing upright.

    • G

      Behaviourist psychologist B.F. Skinner already coined the phrase for that type of world:

      Beyond freedom and dignity.

  • mozibur ullah

    Although computers are autonomous, they are still programmed by human beings. When we're bemused by their failings, we're in fact bemused by the failings of those who programmed them.

    Automation is great in so many ways, but it's also a bit rubbish. Who doesn't prefer properly cooked food to canned food? That is what canned information is.

    Computers fail in so many ways. The people who programme them are obsessed with the culture of convenience, clicks and spectacle.

  • Grumpy O. Man

    We could ask President Obama. He has links and shortcuts to answers on all sorts of moral problems.

  • Elizabeth Harney

    A human can never outsource a moral decision to a machine. By choosing to operate such a vehicle, the driver must take responsibility for the actions taken by the vehicle.

    • Jonathan_Briggs

      If the driver cannot, by design, override the computer, then who is responsible? Who holds the moral high ground: the software designer(s)? The company that sells the software? The car manufacturer?
      Silly question, really, because big money will ensure that, whatever the failings of the software/hardware interface, the poor driver will be held responsible; it will be in the conditions of sale and/or the handbook...

  • A Kaleberg

    Marcus makes a pretty lame argument. Odds are that you and the kids on the school bus will have no idea what hit you. The self-driving car might have had a clue, but no time to react. That's what happens in perhaps 99.998% of cases. Let's face it: most of us don't have a lot of time for moralizing when approaching a large object at 80 mph.

    (This is as bad as that one: there is a guy heavy enough to stop a runaway trolley car sitting on a bridge next to me. Do I muster up enough strength to move something that can stop a runaway trolley car and hurl him into its path to save a bunch of orphans, or do I use my new-found super strength to stop the runaway trolley car myself? Hey, with great power comes great responsibility.)

  • Nat Scientist

    Science, or the act of observation and hypothesis-testing (curiosity and the cat), exists to game and take the other side of the algorithms. Wonderful fun, nibbling at the great river of edge given up to habit and dogma.

  • ZZMike

    The "who to blame" question is really quite simple. If a tool or machine breaks and injures someone, the party held at fault is the maker (or, lawyers being what they are, the seller, the distributor, the repairman...).

    We already have algorithms handling a significant portion of the economy. They're called "high-speed trading algorithms". They buy and sell stocks a hundred times a minute, making tiny profits each time, which add up to significant gains over time.

    The most recent evidence of their machinations is the 200-point drop (and recovery) in the DJIA some weeks back, after some blockhead tweeted that the White House was under attack.

    That suggests that these algorithms monitor social networks.

    • Jonathan_Briggs

      But they can't think, and certainly not "outside the box". How would a computer-controlled car handle the situation I found myself in when I had to perform an emergency stop owing to an accident in front of me on the motorway? As I slowed, I checked my rear-view mirror and, realising that the driver behind was going too fast to have quite enough space to stop, I drove up onto the central reservation; he stopped alongside me. Considering that I had saved him considerable amounts of money and grief, I was more than a little miffed when he gave me a dirty look!

    • G

      Thereby extracting every microgramme of surplus value from these markets, faster than any human can do.

      The result is to make it utterly pointless for any individual to own stock, as any potential profit in doing so will have been extracted upstream.

  • Jonathan_Briggs

    It is quite clear from the phraseology used by the car companies, as they introduce more and more electronics designed to reduce driver input, that sole responsibility will lie with the "Driver". They talk of "driver assistance": assistance by computer, not control. The hugely powerful motor-manufacturing lobby, even more powerful than the NRA, will ensure that all the world's legislative bodies place responsibility solely on the driver, regardless of his ability (or even the possibility) of wresting control from the computer in an emergency situation.

    • G

      Plus or minus the laws of physics.

      Envision a motorway full of self-driving cars, bumper-to-bumper at high speed, the epitome of 21st century transport.

      Now envision that a lorry up ahead has a tyre lose its tread, as often happens, or perhaps lose a portion of its load onto the road. Or perhaps a car has a front tyre blowout and suddenly swerves into the adjacent lane. Or perhaps a terrorist or vandal or random psychopath tosses a few caltrops out the window, where they puncture the tyres of a number of vehicles following behind. Or perhaps there is merely snow, with random patches of ice.

      No algorithm can overcome the limits of inertial mass and the coefficient of friction of rubber on tarmac.

      Yes, and the promoters of the self-driving cars will merrily crow about how self-flying helicopter ambulances will appear on the scene to rescue the dozens of injured and dying persons in the chain-reaction collision, and take them to hospital faster than you can say Heath Robinson.

  • David Malek

    "... create a class of ‘algorithmic auditors’ — trusted representatives of the public ..."

    You mean the equivalent of the SEC, which is supposed to be the trusted representative of the public in auditing financial institutions and transactions? Well, check this out: the SEC clearly knows what HFT, high-frequency trading, is (unsurprisingly, some algorithms that rig the stock-exchange market). Have they done what they are supposed to do? I guess not: