Automated ethics



From artist James Bridle's Drone Shadow series, Brixton, London. Photo courtesy James Bridle

When is it ethical to hand our decisions over to machines? And when is external automation a step too far?

Tom Chatfield is a writer and commentator on digital culture whose work has appeared in BBC Future, The Guardian and 99U, among others. His latest book is Live This Book! (2015). He lives in Kent.


For the French philosopher Paul Virilio, technological development is inextricable from the idea of the accident. As he put it, each accident is ‘an inverted miracle… When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution.’ Accidents mark the spots where anticipation met reality and came off worse. Yet each is also a spark of secular revelation: an opportunity to exceed the past, to make tomorrow’s worst better than today’s, and on occasion to promise ‘never again’.

This, at least, is the plan. ‘Never again’ is a tricky promise to keep: in the long term, it’s not a question of if things go wrong, but when. The ethical concerns of innovation thus tend to focus on harm’s minimisation and mitigation, not the absence of harm altogether. A double-hulled steamship poses less risk per passenger mile than a medieval trading vessel; a well-run factory is safer than a sweatshop. Plane crashes might cause many fatalities, but refinements such as a checklist, computer and co-pilot insure against all but the wildest of unforeseen circumstances.

Similar refinements are the subject of one of the liveliest debates in practical ethics today: the case for self-driving cars. Modern motor vehicles are safer and more reliable than they have ever been – yet more than 1 million people are killed in car accidents around the world each year, and more than 50 million are injured. Why? Largely because one perilous element in the mechanics of driving remains unperfected by progress: the human being.

Enter the cutting edge of machine mitigation. Back in August 2012, Google announced that it had achieved 300,000 accident-free miles testing its self-driving cars. The technology remains some distance from the marketplace, but the statistical case for automated vehicles is compelling. Even when they’re not causing injury, human-controlled cars are often driven inefficiently, ineptly, antisocially, or in other ways additive to the sum of human misery.

What, though, about more local contexts? If your vehicle encounters a busload of schoolchildren skidding across the road, do you want to live in a world where it automatically swerves, at a speed you could never have managed, saving them but putting your life at risk? Or would you prefer to live in a world where it doesn’t swerve but keeps you safe? Put like this, neither seems a tempting option. Yet designing self-sufficient systems demands that we resolve such questions. And these possibilities take us in turn towards one of the hoariest thought-experiments in modern philosophy: the trolley problem.

In its simplest form, coined in 1967 by the English philosopher Philippa Foot, the trolley problem imagines the driver of a runaway tram heading down a track. Five men are working on this track, and are all certain to die when the trolley reaches them. Fortunately, it’s possible for the driver to switch the trolley’s path to an alternative spur of track, saving all five. Unfortunately, one man is working on this spur, and will be killed if the switch is made.

In this original version, it’s not hard to say what should be done: the driver should make the switch and save five lives, even at the cost of one. If we were to replace the driver with a computer program, creating a fully automated trolley, we would also instruct it to pick the lesser evil: to kill fewer people in any similar situation. Indeed, we might actively prefer a program to be making such a decision, as it would always act according to this logic while a human might panic and do otherwise.

The trolley problem becomes more interesting in its plentiful variations. In a 1985 article, the MIT philosopher Judith Jarvis Thomson offered this: instead of driving a runaway trolley, you are watching it from a bridge as it hurtles towards five helpless people. Using a heavy weight is the only way to stop it and, as it happens, you are standing next to a large man whose bulk (unlike yours) would be enough to do so. Should you push this man off the bridge, killing him, in order to save those five lives?

A similar computer program to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.
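This consequentialist rule is simple enough to state as code. A minimal sketch in Python (the function name and scenario labels are illustrative, not drawn from any real system):

```python
# A minimal consequentialist policy: given the possible actions and the
# number of deaths each one causes, always choose the action that kills
# the fewest people.

def choose_action(options):
    """options: dict mapping an action's name to the deaths it causes."""
    return min(options, key=options.get)

# Foot's original trolley: stay the course (five die) or switch (one dies).
print(choose_action({"stay": 5, "switch": 1}))       # switch

# Thomson's footbridge: do nothing (five die) or push the man (one dies).
# The program sees no distinction between the two cases.
print(choose_action({"do_nothing": 5, "push": 1}))   # push
```

Such a program resolves both cases identically; the friction only appears once human intuitions are consulted.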

When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.

The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.

Pushing someone to their death with your bare hands is deeply problematic psychologically, even if you accept that it’s theoretically no better or worse than killing them from 10 miles away. Meanwhile, allowing someone at a distance – a starving child in another country for example – to die through one’s inaction seems barely to register a qualm. As philosophers such as Peter Singer have persuasively argued, it’s hard to see why we should accept this.

Great minds have been wrestling with similar complexities for millennia, perhaps most notably in the form of Thomas Aquinas’s doctrine of double effect. Originally developed in the 13th century to examine the permissibility of self-defence, the doctrine argues that your intention when performing an act must be taken into account when your actions have some good and some harmful consequences. So if you choose to divert a trolley in order to save five lives, your primary intention is the saving of life. Even if one death proves unavoidable as a secondary effect, your act falls into a different category from premeditated murder.

The doctrine of double effect captures an intuition that most people (and legal systems) share: plotting to kill someone and then doing so is a greater wrong than accidentally killing them. Yet an awkward question remains: how far can we trust human intuitions and intentions in the first place? As the writer David Edmonds explores in his excellent new book Would You Kill the Fat Man? (2013), a series of emerging disciplines have begun to stake their own claims around these themes. For the psychologist Joshua Greene, director of Harvard’s Moral Cognition Lab, the doctrine of double effect is not so much a fundamental insight as a rationalisation after the fact.

In his latest book Moral Tribes (2013), Greene acknowledges that almost everyone feels an instinctual sense of moral wrong about people using personal force to harm someone else. For him, this instinctual moral sense is important but far from perfect: a morsel of deep-rooted brain function that can hardly be expected to keep up with civilisational progress. It privileges the immediate over the distant, and actions over omissions; it cannot follow complex chains of cause and effect. It is, in other words, singularly unsuitable for judging human actions as amplified by the vast apparatus of global trade, politics, technology and economic interconnection.

Here, Greene’s arguments converge with the ethics of automation. Human beings are like cameras, he suggests, in that they possess two moral modes: automatic and manual. Our emotions are ‘automatic processes… devices for achieving behavioural efficiency’, allowing us to respond appropriately to everyday encounters without having to think everything through from first principles. Our reasoning, meanwhile, is the equivalent of a ‘manual’ mode: ‘the ability to deliberately work through complex, novel problems’.

If you can quantify general happiness with a sufficiently pragmatic precision you possess a calculus able to cut through biological baggage and tribal allegiances alike

It’s a dichotomy familiar from the work of the psychologist Daniel Kahneman at Princeton. Unlike Kahneman, however, Greene is an optimist when it comes to overcoming the biases evolution has baked into our brains. ‘With a little perspective,’ he argues, ‘we can use manual-mode thinking to reach agreements with our “heads” despite the irreconcilable differences in our “hearts”.’ Or, as the conclusion of Moral Tribes more bluntly puts it, we must ‘question the laws written in our hearts and replace them with something better’.

This ‘something better’ looks more than a little like a self-driving car. At least, it looks like the substitution of a more efficient external piece of automation for our own circuitry. After all, if common-sense morality is a marvellous but regrettably misfiring hunk of biological machinery, what greater opportunity could there be than to set some pristine new code in motion, unweighted by a bewildered brain? If you can quantify general happiness with a sufficiently pragmatic precision, Greene argues, you possess a calculus able to cut through biological baggage and tribal allegiances alike.

The GMXO Compactable Car designed by Ali Jafari can be driverless. It stretches or compacts depending on parking and passenger requirements. Photo courtesy GM

Automation, in this context, is a force pushing old principles towards breaking point. If I can build a car that will automatically avoid killing a bus full of children, albeit at great risk to its driver’s life, should any driver be given the option of disabling this setting? And why stop there: in a world that we can increasingly automate beyond our reaction times and instinctual reasoning, should we trust ourselves even to conduct an assessment in the first place?

Beyond the philosophical friction, this last question suggests another reason why many people find the trolley problem disturbing: its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us.

For the moment, machines able to ‘think’ in anything approaching a human sense remain science-fiction. How we should prepare for their potential emergence, however, is a deeply unsettling question – not least because intelligent machines seem considerably more achievable than any consensus around their programming or consequences.

Consider medical triage – a field in which automation and algorithms already play a considerable part. Triage means taking decisions that balance risk and benefit amid a steady human trickle of accidents. Given that time and resources are always limited, a patient on the cusp of death may take priority over one merely in agony. Similarly, if only two out of three dying patients can be dealt with instantly, those most likely to be saved by rapid intervention may be prioritised; while someone insisting that their religious beliefs mean their child’s life cannot be saved may be overruled.
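The weighing described here can be made concrete in a few lines. A hypothetical scoring rule, with invented field names and probabilities, sketching how an automated triage might rank patients by the benefit of immediate treatment:

```python
# Hypothetical triage ranking: prioritise the patients whose survival
# odds improve most from rapid intervention. All fields and numbers
# are invented for illustration; no real clinical protocol is implied.

def triage_order(patients):
    """patients: list of dicts with 'name', 'survival_if_treated'
    and 'survival_if_untreated' (probabilities in [0, 1]).
    Returns names ordered by the benefit of immediate treatment."""
    def benefit(p):
        return p["survival_if_treated"] - p["survival_if_untreated"]
    return [p["name"] for p in sorted(patients, key=benefit, reverse=True)]

patients = [
    {"name": "A", "survival_if_treated": 0.9,  "survival_if_untreated": 0.1},
    {"name": "B", "survival_if_treated": 0.3,  "survival_if_untreated": 0.2},
    {"name": "C", "survival_if_treated": 0.95, "survival_if_untreated": 0.9},
]
print(triage_order(patients))  # ['A', 'B', 'C']
```

The contested question is not whether such a ranking can be computed, but who sets its weights.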

On the battlefield, triage can mean leaving the wounded behind, if tending to them risks others’ lives. In public health, quarantine and contamination concerns can involve abandoning a few in order to protect the many. Such are the ancient dilemmas of collective existence – tasks that technology and scientific research have made many orders of magnitude more efficient, effective and evidence-based.

If my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear in advance together with its exact parameters

What happens, though, when we are not simply programming ever-nimbler procedures into our tools, but instead using them to help determine the principles behind these decisions in the first place: the weighting of triage, the moment at which a chemical plant’s doors are automatically sealed in the event of crisis? At the other end of the scale, we might ask: should we seek to value all human lives equally, by outsourcing our incomes and efforts to the discipline of an AI’s equitable distribution? Taxation is a flawed solution to the problem; but, with determination and ingenuity, brilliant programs can surely do better. And if machines, under certain conditions, are better than us, then what right do we have to go on blundering our way through decisions likely only to end badly for the species?

You might hesitate over such speculations. Yet it’s difficult to know where extrapolation will end. We will always need machines, after all, to protect us from other machines. At a certain point, only the intervention of one artificially intelligent drone might be sufficient to protect me from another. For the philosopher Nick Bostrom at Oxford, for example, what ought to exercise us is not an emotional attachment to the status quo, but rather the question of what it means to move successfully into an unprecedented frame of reference.

In their paper ‘The Ethics of Artificial Intelligence’ (2011), Bostrom and the AI theorist Eliezer Yudkowsky argued that increasingly complex decision-making algorithms are both inevitable and desirable – so long as they remain transparent to inspection, predictable to those they govern, and robust against manipulation.

If my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear in advance together with its exact parameters. Society can then debate these, set a seal of approval (or not) on the results, and commence the next phase of iteration. I might or might not agree, but I can’t say I wasn’t warned.

What about worst case scenarios? When it comes to automation and artificial intelligence, the accidents on our horizon might not be of the recoverable kind. Get it wrong, enshrine priorities inimical to human flourishing in the first generation of truly intelligent machines, and there might be no people left to pick up the pieces.

Unlike us, machines do not have a ‘nature’ consistent across vast reaches of time. They are, at least to begin with, whatever we set in motion – with an inbuilt tendency towards the exponential. As Stuart Armstrong, who works with Nick Bostrom at the Future of Humanity Institute, has noted: if you build just one entirely functional automated car, you now have the template for 1 billion. Replace one human worker with a general-purpose artificial intelligence, and the total unemployment of the species is yours for the extrapolating. Design one entirely autonomous surveillance drone, and you have a plan for monitoring, in perpetuity, every man, woman and child alive.

In a sense, it all comes down to efficiency – and how ready we are for any values to be relentlessly pursued on our behalf. Writing in Harper’s magazine last year, the essayist Thomas Frank considered the panorama of chain-food restaurants that skirts most US cities. Each outlet is a miracle of modular design, resolving the production and sale of food into an impeccably optimised operation. Yet, Frank notes, the system’s success on its own terms comes at the expense of all those things left uncounted: the skills it isn’t worth teaching a worker to gain; the resources it isn’t cost-effective to protect:
The modular construction, the application of assembly-line techniques to food service, the twin-basket fryers and bulk condiment dispensers, even the clever plastic lids on the coffee cups, with their fold-back sip tabs: these were all triumphs of human ingenuity. You had to admire them. And yet that intense, concentrated efficiency also demanded a fantastic wastefulness elsewhere – of fuel, of air-conditioning, of land, of landfill. Inside the box was a masterpiece of industrial engineering; outside the box were things and people that existed merely to be used up.

Society has been savouring the fruits of automation since well before the industrial revolution and, whether you’re a devout utilitarian or a sceptical deontologist, the gains in everything from leisure and wealth to productivity and health have been vast.

As agency passes out of the hands of individual human beings, in the name of various efficiencies, the losses outside these boxes don’t simply evaporate into non-existence. If our destiny is a new kind of existential insulation – a world in which machine gatekeepers render certain harms impossible and certain goods automatic – this won’t be because we will have triumphed over history and time, but because we will have delegated engagement to something beyond ourselves.

Time is not, however, a problem we can solve, no matter how long we live. In Virilio’s terms, every single human life is an accident waiting to happen. We are all statistical liabilities. If you wait long enough, it always ends badly. And while the aggregate happiness of the human race might be a supremely useful abstraction, even this eventually amounts to nothing more than insensate particles and energy. There is no cosmic set of scales on hand.

Where once the greater good was a phrase, it is now becoming a goal that we can set in motion independent of us. Yet there’s nothing transcendent about even our most brilliant tools. Ultimately, the measure of their success will be the same as it has always been: the strange accidents of a human life.



  • Lester

    The funny thing is that these moral conundrums are so often described in neutral terms: an innocent third party happens upon a case and has to make a decision. So I'm just wandering along and see the trolley and the fat man, and am thus presented with a moral puzzle; or I'm just lolling about my house "allowing" children to die at a distance by not taking action.

    But in actuality life is a complex system of interlocking consequences and infinite externalities that happen because of previous patterns of complexity of which I am an essential player. So in an obvious way children are not dying in far off places by some accident but as a direct result of policies enacted by the country that I live in and appear in some analyses to benefit from. The action has already been taken that causes their deaths.

    So where is the place of intervention? To save children dying of starvation should I travel to where they are, or should I engage in political activity in the center countries where the political pebble is dropped, creating the waves of agony further out in the pattern?

    These questions also seem to implicitly assume free will. Am I ever able to take a decision in any case? To what extent can I ever calculate the entirety of any action and then go on to calculate the hierarchy of consequences and then intervene to create my preferred outcome - remembering it can only ever be my preferred outcome because what seems tragic to me is a huge opportunity for something/someone else.

    And, finally, isn't there an assumption that there exists a beneficent morality that can be appealed to in all cases, such that it can be programmed into devices? That there is always a clear correct thing to do. But this is a huge assumption infused with a sort of materialist view that objective fact always exists if only we could dig deep enough. Maybe it doesn't. Maybe two answers can be equally right even when they are contradictory.

    • BDewnorkin

      1. The trolley problem is a tool for intuition-pumping, not a purportedly accurate depiction of real-life scenarios. You rightly point out the complexity and unknowability of the consequences of our actions, but these consequences – especially if limited to the small list of concrete concerns that you actually present – are not beyond moral calculus. In fact, it's precisely the time-consuming nature of this calculus and its unrealistic application to every day-to-day decision that calls for computer-automation.

      2. It all depends on what you mean by free will (based on your discussion in the same paragraph, though, it seems you're less concerned about free will than about the difficulty of moral calculus, to which I responded above). To the extent that a free will justifying moral responsibility is assumed, few would contend its existence to the point of justifying nihilism.

      3. There is indeed an assumption of moral objectivism, a contentious meta-ethical position. But I think Chatfield makes an important point: automation is already happening, and its consequences are already affecting our lives, without much moral scrutiny; perhaps taking up the matter more seriously is the one beneficial consequence of automation's further encroachment into human decision making.

      • Lester

        I think your separating the inseparable.

        Even talking about morality in terms of "calculus" assumes the capacity for a rational overriding, or as I said previously, the hope for a morality deeply embedded in all possibilities that can be mined through reason, if only we reasoned hard enough, that would solve problems whatever they may be.

        I think this is overly optimistic.

        There is also a difference between nihilism and a recognition that the possibility for an objective basis for truth is cultural rather than definitive. I don't argue for nothingness. I suggest that the complexity is deeper than we can program or easily rationalize given the limited information we are always party to. That's why I suggest there might always be two contradictory yet truthful positions.

        And finally, it's not that I'm less concerned with free will compared with calculus. It's that I'm not certain that free will should be embedded into an argument that does not need it. There is no Meta-Morality without culture (unless you're a theist of some sort), so why insert the notion whilst rejecting the idea of a deeper truth.

        • BDewnorkin

          "I think your [sic] separating the inseparable." Elaborate.

          Moral calculus does not assume moral realism, the position that morality must be "mined" from "possibilities." There are entirely subjective conceptions of morality that enable moral axiomatization and the applications of these axioms to practical circumstances via instrumental reasoning. This is what I mean by "moral calculus."

          Neither "complexity" nor lack of "easy rationalizability" justifies your claim that "an objective basis for truth is cultural rather than definitive." Both, on the other hand, quickly degenerate into moral irresponsibility (e.g. a hit-and-run driver can deny criminal offense on the basis of the existence of "two contradictory yet truthful positions"), which – not "nothingness" – is what I mean by "nihilism."

          I'm also not accusing you of being "less concerned with free will compared with calculus" (see original response).

          I'm still not sure what you mean by free will, so I remain skeptical that, as you claim, Chatfield "embedded [it] into [his] argument." (Also, what "deeper truth" is Chatfield "rejecting?")

          • Lester

            Thanks for elaborating what you meant by moral calculus, although I did understand you in your first post.

            I'm afraid that I remain unconvinced, not for lack of comprehension, but rather just because I don't agree with it. I'm convinced that reason will only take us so far. This does not mean I am against reason, or that I might succumb to confusion when presented with such an elementary conundrum as the hit-and-run drivers spurious plea.

            Let me put it this way: morality is always parochial, it changes with time, it morphs according to culture, it weighs up outcomes that are always unknowable in anything but the most obvious short-term (and even then uncertainty looms), it's blind to outcomes beyond its spotlight attention. Morality at any given time, like history, is often a reflection of the dominator group. Morality is a very slippery customer, so I fully understand why one may be inclined to administer rules and algorithms and hope to squeeze it into a manageable position. Your suggestion that without care and attention things will, as in the familiar bandwagon argument, run away into moral irresponsibility reveals the underlying inclination and desire to outsource the dirty business of morality within the chaos of the fallible human to a kind of technologically driven perfection.

            But in real life (which is the true destination of these moral conundrum exercises after all, unless they are to be mere tools for intellectual posturing), the information reason can ever be party to is insufficient to make a full judgement that will hold fast to any arbitrary rules. Consequently the techno-millenarian fantasy that we can outsource the messy beauty of life to an algorithm is foolhardy. The author's hope that "Where once the greater good was a phrase, it is now becoming a goal that we can set in motion independent of us" is just depressing. There is no "independent of us". No morality that exists outside of the constantly constructed, perpetually changing, chaotic morality that emanates from us.

            I'm very suspicious of attempts to regulate behaviour and I have faith that people function far better when spontaneous, non-volitional action is followed. This in fact leads not the nihilism you fear, but to far more benevolent non-discriminatory behaviour.

            I realise that this is unpopular these days.

          • BDewnorkin

            "I realise that this is unpopular these days."
            Quite to the contrary, national surveys suggest that your, on one level, pluralistic and relativist position is well on the rise.

            "[O]r that I might succumb to confusion when presented with such an elementary conundrum as the hit-and-run drivers spurious plea"
            I'm arguing that your position brings about not confusion but moral irresponsibility. If, as you say, "morality is always parochial, it changes with time, it morphs according to culture" and it "weighs up outcomes that are always unknowable in anything but the most obvious short-term," then a hit-and-run driver can claim that in his culture his action is perfectly justifiable or that the hit-and-run actually creates long-term benefit (e.g. it saves him legal fees, which he can then donate to charity). You may call his BS, but then you're not part of his culture and he might indeed donate a considerable sum to an effective charity, so you'd be wrong to do so.

            "[P]eople function far better when spontaneous, non-volitional action is followed. This in fact leads not the nihilism you fear, but to far more benevolent non-discriminatory behaviour."
            This proposition assumes that "benevolent non-discriminatory behaviour" is good. It's, in other words, a rigid value statement. If you indeed believe what you've claimed above, then this value is only true in the context of your culture and your parochial view of the world. You'll have nothing to argue against my position – coming from a different culture with a different world view – which rejects "benevolent non-discriminatory behaviour" as a good thing. This is the nihilism that I'm talking about, and it seems to flow uninhibited from your claims.

          • Lester

            Before she died, anthropologist Margaret Mead said that her greatest fear was that as we drifted towards this blandly amorphous generic world view, not only would we see the entire range of the human imagination reduced to a more narrow modality of thought, but that we would wake from a dream one day having forgotten there were even other possibilities.

            This is what globalised managerialism leads to, and what the approach to moral complexity you're advocating leads to. I agree that nihilism can be created, but it blows across the world from reasoned countries that colonise the landscape of morality and demand the right to administer legal application. I realise that pluralism and relativism are not supposed to be flattering characterisations, but saying that two or more answers can be applied to the same question is not the same thing. David Bohm once said that there are no facts, only insights, and science is the methodology of gaining better and better insights. I agree. I apply the same approach to morality.

            Not only is it more evolutionarily sound to encourage diversity in all aspects, cultures, morals or species, but even if that were not the case, we have no capacity to do otherwise.

          • BDewnorkin

            Mead's claim echoes across the halls of anthropology departments everywhere. Indeed, it's the intuitive reaction following any serious engagement with a diversity of human conscious experience; in studying the prism of another's Being, the anthropologist inevitably breaks previously unacknowledged barriers of her own existence.

            But it's a serious leap of fancy (and disposal of intellectual rigor) to move from a position of accepting human potentiality to rejecting all loci of shared meaning. Normativity is, at its heart, simply the means by which human beings evaluate their experiences and compare alternative imagined futures in order to direct their own action. It's an "evolutionarily" necessary capacity and one that you've used repeatedly throughout our conversation.

            You'll note that, throughout my responses, I've not advocated for any objective or realist moral position. I've not done so because, indeed, allegiance to these positions has impoverished moral thinking. What I'm reacting to, instead, is both your (and, apparently, Mead's) deconstruction-without-reconstruction. I am, after all, still *for* reason, and therefore skeptical of your suggestion that "[m]aybe two answers can be equally right even when they are contradictory."

  • BDewnorkin

    I hope someone (perhaps Chatfield himself?) can help me understand Chatfield's two puzzling concluding paragraphs. It seems he's at first suggesting the inevitability of unintended negative consequences, but then turns sharply toward fanciful nihilism ("the aggregate happiness of the human race... eventually amounts to nothing more than insensate particles and energy"), which, if indeed a position that he holds, undermines the preoccupation underlying the first 2k or so words of his essay.

  • tetriminos

    " the secrets of the leaves ---- cloud the uncertainties "

  • neil21

    Surely if a robotaxi is in an at-all-uncertain situation, it will slow. Only when the road ahead is completely clear by some distance and protected by highway barriers would it accelerate to a fatally dangerous (>30 km/h) speed. And if something unexpectedly crosses the path perpendicularly, it'll react a heck of a lot quicker than a human, but in a similar way: slam on the brakes.

    I don't see how the moral dilemma scenario could occur. Do elevator doors have an ethics chip to decide whether to crush your hand?
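    [Ed.: the cautious speed policy this comment describes can be sketched in a few lines. All thresholds here are hypothetical illustrations, not parameters from any real autonomous-vehicle system.]

    ```python
    def target_speed_kmh(clear_distance_m: float, barrier_protected: bool,
                         obstacle_detected: bool) -> float:
        """Choose a target speed under the cautious policy described above:
        brake for obstacles, crawl when uncertain, and exceed a 'fatally
        dangerous' 30 km/h only on a clear, barrier-protected road."""
        if obstacle_detected:
            return 0.0  # something crossed the path: slam on the brakes
        if barrier_protected and clear_distance_m > 100:
            return 100.0  # open, protected road: normal highway speed
        # Uncertain situation: scale speed down with visibility, capped at 30 km/h
        return min(30.0, clear_distance_m * 0.3)
    ```

    On this policy the "moral dilemma" never arises at speed, because speed itself is conditioned on certainty.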

    • G

      Surely you have more faith in software than is merited by objective assessment. Has your computer ever crashed? Know anyone whose computer was invaded by malware? How does the prospect of malware on the motorway grab you?

      No robotomobiles for me. I drive aware, I do not use mobile devices whilst driving, I slow for pedestrians, cyclists, schoolbuses, dustcarts, etc., and the only accidents I've ever been in were a) getting rear-ended by a tailgater when traffic stopped suddenly (about 15 years ago) and b) sliding off a road onto the verge due to black ice that formed before the gritters could get to it (about 15 years prior to that). No combination of software and sensors could have prevented either of those. Arguably, a robotomobile would have made the latter one worse.

      Lastly, I do not trust Google one bit. If you think NSA and GCHQ have had some scandals lately, just wait until Google has its own version of Edward Snowden.

  • G

    The trolley problem is also a tool for psychopath recruitment, as two further and similarly-structured examples illustrate:

    1) A terrorist group threatens to detonate an atomic bomb in a major city. If you had one member of the group in custody, would you torture him to reveal the location of the bomb? You would? Congratulations, you've just been recruited to transgress your morals, so now you can't criticise elected officials and candidates for office who champion the use of torture.

    2) A terrorist group threatens to detonate an atomic bomb in a major city unless 100 people are rounded up and shot in public each day for a month. A million casualties vs. a mere three thousand is an even better tradeoff percentage-wise than one fat man vs. five people on the trolley track. What if the demand was for 100 Jews each day? 100 children each day? 1,000? According to the algorithm it's still a viable tradeoff. How many does it take before it's not?

    Upon answering those questions, every smart psychopath will welcome you to their ranks with a slap on the back: 'See, that wasn't so hard! Now you're one of us!'

    The underlying problem is 'thinking inside the box.' The number of real-world instances that translate to a trolley problem is vanishingly small. The overwhelming majority of such 'dilemmas' are created through in-the-box thinking and failure to anticipate consequences. Lock out the damn trolley track before the crew of workers start working on it, and if need be, preventively park a lorry load of sand on the track to intercept any runaways. Drive slowly enough to stop when you see a schoolbus or a dustcart on the road. Etc.

    For those who care to think for themselves, there's another way to look at those trolley problems: a way that's based on a more inclusive morality and that's also highly subversive to the status quo:

    Ask yourself: What would it have taken to prevent that trolley problem circumstance from arising in the first place? Then demand that the solution be put in place NOW.

  • G

    Every time your computer hangs, crashes, gets a virus, or (etc.), ask yourself this:

    What if that computer was piloting me down a motorway at high speed right now, or through a crowded city centre at lunch hour?

  • Oz

    I prefer that automated complexity not continue to creep its way into the dimensions of my personal psychology and corresponding daily routine.
    It's as simple as that, i.e. my preference is the justification.

  • Tom Chatfield

    Many thanks for the high quality of comments on my piece. A few thoughts that I hope bounce interestingly off the responses below.

    First, something I could have spelled out more clearly. When Greene and others critique our intuitions for being dangerous because they bring us to "automatic" conclusions which deeper utilitarian consideration would reject, I'm always struck by how powerful an argument against automation of other kinds this ought to be.

    If we're worried about the moral implications of the unconsidered, "automated" tendencies baked into our own brains, how much more cautious should we be about enshrining any ethically significant assumptions into eternal digital flesh, where they can be replicated instantly at vast scale - and potentially slip beneath the consideration of any number of human users, for whom a system's assumptions become a kind of reality in its own right? ("Computer says no")

    Even if you believe that morality is always parochial, this should surely still be of concern - because automated systems potentially allow us to inflict our parochial values on far more other people, for far longer, in far more enduring and unaccountable a way, than almost anything else.

    Hence my last points: that we should be very cautious indeed in choosing any values to be pursued relentlessly on our behalf by external agents, not least because plenty of this is already going on and has been for centuries (and remains hugely disputed in its benefits).

    We can't opt out of the present, but we can attempt more rigorously to evaluate - and iterate, and hold accountable - those systems moulding our behaviours and defining many of our freedoms within it. See for example this excellent paper sent to me by Noah Goodall - a pragmatic, detailed examination of what ethically-informed design for automated cars might look like:

    Finally, in case I was being obscure or overly allusive in my final few paras, what I intend is not hand-waving nihilism, but a simple point of caution: that we should be wary when system designers come to us invoking utilitarian calculations of ultimate human good, given that the "ultimate" or even moderately distant consequences of pretty much anything are not directly accessible to us, let alone to calculation according to a single metric.

    Better, with philosophers like Peter Singer, to look at the world as it is, today, and ask: "what can I do to help here?" -

    • BDewnorkin

      Thanks for your clarification; I completely agree with its main argument.

      I would encourage expanding the scope of consideration of the question at hand beyond the technological. Even Goodall's three-phase, "developmental" approach to programming, as she admits, has limitations, limitations that threaten to severely impoverish moral (and legal) responsibility. A means of public (e.g. regulatory) or consumer (e.g. imposed manual control in high-risk scenarios) intervention seems imperative for its preservation, and such interventions cannot be left to private enterprise alone.

  • Margaret Wertheim

    What's missing here is a discussion of our right to be imperfect, inept and unhappy. Who says the greatest measure of "happiness" is the greatest good? What is "happiness" anyway? Many of the most important experiences of my life came through making mistakes. What if machines take away our chance to make mistakes - and thus to learn from them? Virilio is right - the glass of red wine is always out there waiting to intersect with the pure white couch. Perhaps rather than obsessing about eliminating mistakes, we need to learn to live more peacefully with them. That said, I live in LA and hate driving, so I welcome the self-driving car with open arms. Think of all the books on ethics I could read while the car was in control.

    • Tom Chatfield

      My falling short of all kinds of happiness is indeed my business, and something I wouldn't have you steal from me at any price. But the potentially lethal consequences of my ineptness behind the wheel for your child, say, are a different and more troubling proposition.

      Much like the negligence or incompetence of a doctor botching minor surgery - and the protocols and algorithm-led oversight that might stop this from happening - we can't simply step away from ideas of the greater good, when couched in terms of minimising harm. And this is where the trouble begins, for me, together with opportunity - because given all that we may some day soon be offered by technology, where it should end becomes a horribly hard line to draw.

      How prepared would I be, in practice, for advanced machines to alter the difficulty setting of life - so to speak - for my son, or his children, or their children, and convert their potentially fatal mistakes into minor scrapes? It's not as if I would dream of turning back the clock and doing, myself, without seatbelts and airbags when driving my child in my own vehicle.

      As to what the greater good constitutes, this is a question whose complexities I hope I have at least gestured towards - with a good deal of scepticism towards monolithic answers, or resolutions that pretend to anticipate all time and consequences.

      It doesn't mean, though, that there's no place for rigorous observation, analysis and prevention based on assessments as thorough as we can manage of what does and doesn't make us less likely to suffer (or cause) harm. It just means this is a tricky business best conducted in the light of debate, accountability, and scientific as well as philosophical rigour.

  • thesafesurfer

    I will continue to limit as much as possible the ability of any machine I own or am using to operate independent of my will.

  • Chen Qin

    Are the lives of 5 men really worth more than the life of a single man? The trouble with ethics is that it is not a maths problem where it's obvious that 5p > 1p, because the values of p here are not equal. What if that one person is a medical researcher who is about to make a breakthrough with a medicine that can save millions?

    Since we humans unfortunately lack the ability to look into the future, I do not believe that it is possible to determine a Right or Wrong answer here.

    Similarly, for the case of whether the car should prioritise saving the driver or multiple others, I do not believe that there is a hard answer. Ultimately, it will be a compromise between the political power of those who drive and those who don't. That is the reality, and it's the best that can be achieved.

  • Don DeHart Bronkema

    The logic of syntelligence is ineluctable...

  • Don DeHart Bronkema

    We can already scan the brain for guilty knowledge of crime; juries could be traded for panels of expert judges, who could assess facts & impose sentences.

  • flameaway

    Would I kill the fat man? Yes I would, but I believe that the correct ethical decision would also include throwing myself after him.

    We are not certain his weight will be enough -- not in that moment. We only perceive that the weight must be sufficient. Better to err on the side of too much weight, given that the stakes are five other people's lives...

    Even should I know with a certainty that the fat man is of sufficient weight, another argument remains requiring my own death.

    Having decided to murder someone, I am now an inferior moral agent from my own perspective. Having once decided to murder, part of the sacrifice must also be my own life in order to defend the public from any further depredations on my part. In addition, if I also sacrifice my own life, I have simultaneously given my life to pay for the life I've taken. This means I end up getting "credit" for the lives I saved and perhaps even become an inspiring figure, influencing others toward selfless decisions in the face of tough ethical decisions.

    The end result being that I in fact have made an ethical decision and having "paid" for it retain my status as a positive ethical agent. The fact that I took the fat man with me thus resolves to 2 lives for 5. Leaving a direct reference back to the needs of the many outweigh those of the few.

  • Odyssios

    Life is not a trolley problem. Not sure what it is, but it's not a trolley problem. I hope this is reassuring. Life's problems have better production values, for one thing. Now on the spectrum from Possibilities Grotesque & Horrible to Possibilities Delightful and Liberating - where do you put the Child? Each one of whom encompasses a breathtaking range of possible developments. To create a radical new technological development - or a child. Which dare you do?! Having done a bit of both, I had more fun with the kid. Which doesn't answer my question! But seriously, folks ... an autonomous robot has exactly the same responsibilities as an autonomous anything else - autonomy means 'I make the rules for my behavior.' Which is indistinguishable from being a person. As the phrase has it, 'A moral agent.' With all the Rights & Privileges we graciously award ourselves, as such.