
Not so foolish

We are told that we are an irrational tangle of biases, to be nudged any which way. Does this claim stand to reason?

by Steven Poole

Detail from The Ship of Fools 1510-1515 by Hieronymus Bosch. Musée du Louvre, Paris. Photo by Getty

Humanity’s achievements and its self-perception are today at curious odds. We can put autonomous robots on Mars and genetically engineer malarial mosquitoes to be sterile, yet the news from popular psychology, neuroscience, economics and other fields is that we are not as rational as we like to assume. We are prey to a dismaying variety of hard-wired errors. We prefer winning to being right. At best, so the story goes, our faculty of reason is at constant war with an irrational darkness within. At worst, we should abandon the attempt to be rational altogether.

The present climate of distrust in our reasoning capacity draws much of its impetus from the field of behavioural economics, and particularly from work by Daniel Kahneman and Amos Tversky in the 1970s and ’80s, summarised in Kahneman’s bestselling Thinking, Fast and Slow (2011). There, Kahneman divides the mind into two allegorical systems: the intuitive ‘System 1’, which often gives wrong answers, and the reflective reasoning of ‘System 2’. ‘The attentive System 2 is who we think we are,’ he writes; but it is the intuitive, biased, ‘irrational’ System 1 that is in charge most of the time.

Other versions of the message are expressed in more strongly negative terms. You Are Not So Smart (2011) is a bestselling book by David McRaney on cognitive bias. According to the study ‘Why Do Humans Reason?’ (2011) by the cognitive scientists Hugo Mercier and Dan Sperber, our supposedly rational faculties evolved not to find ‘truth’ but merely to win arguments. And in The Righteous Mind (2012), the psychologist Jonathan Haidt calls the idea that reason is ‘our most noble attribute’ a mere ‘delusion’. The worship of reason, he adds, ‘is an example of faith in something that does not exist’. Your brain, runs the now-prevailing wisdom, is mainly a tangled, damp and contingently cobbled-together knot of cognitive biases and fear.

This is a scientised version of original sin. And its eager adoption by today’s governments threatens social consequences that many might find troubling. A culture that believes its citizens are not reliably competent thinkers will treat those citizens differently to one that respects their reflective autonomy. Which kind of culture do we want to be? And we do have a choice. Because it turns out that the modern vision of compromised rationality is more open to challenge than many of its followers accept.

For most of recorded thought it was taken for granted that rationality was what separated us from the beasts. Plato argued that hatred of reason (misology) sprang from the same source as hatred of humankind. Aristotle declared that man was ‘the rational animal’, and this seemed evident to Spinoza, too: just as ‘a dog is a barking animal’, so man was the beast who reasoned. Philosophers have, of course, long differed about the nature and limits of rationality. Kant argued against ‘rationalists’ such as Leibniz, who taught that pure reason could disclose the nature of reality. Hegel insisted that individual thinkers cannot escape their particular historical context, and Hume observed that reason alone cannot motivate action.

Nevertheless, until recently it was still largely assumed that rationality, whatever its character and limits, was a definitional aspect of humankind. Hence the despairing apotheosis of Romantic anti-rationalism in the later 20th century, when it seemed to many that the Enlightenment had led straight to the Gulag and the Holocaust: to decry the operation of reason was to take a pessimistic view of humanity itself. Today, however, we are told we can abandon the notion that rationality is central to human identity. But does the evidence show that we must?


Modern scepticism about rationality is largely motivated by years of experiments on cognitive bias. We are prone to apparently irrational phenomena such as the anchoring effect (if we are told to think of some arbitrary number, it will affect our snap response to an unrelated question) or the availability error (we judge questions according to the examples that come most easily to mind, rather than a wide sample of evidence). There has been some controversy over the correct statistical interpretations of some studies, and several experiments that ostensibly demonstrate ‘priming’ effects, in particular, have notoriously proven difficult to replicate. But more fundamentally, the extent to which such findings can show that we are acting irrationally often depends on what we agree should count as ‘rational’ in the first place.

During the development of game theory and decision theory in the mid-20th century, a ‘rational’ person in economic terms became defined as a lone individual whose decisions were calculated to maximise self-interest, and whose preferences were (logically or mathematically) consistent in combination and over time. It turns out that people are not in fact ‘rational’ in this homo economicus way, and the elegant demonstration of this fact was the subject of Kahneman’s own research (with Tversky) over the following decades. Given choices between complex bets, for example, people often prefer mathematically inferior ones. Potential losses seem to loom more heavily in our minds than equal potential gains.

The thorny question is whether these widespread departures from the economic definition of ‘rationality’ should be taken to show that we are irrational, or whether they merely show that the economic definition of rationality is defective. If we adopt a wider sense of ‘rational’, some of our apparent cognitive hiccups don’t seem so silly after all. Take Kahneman and Tversky’s famous ‘Linda problem’.

Imagine you are told the following about Linda:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Now, which of these statements is more probable?

1. Linda is a bank teller.

2. Linda is a bank teller and is active in the feminist movement.

A majority of people say that 2 is more probable: that Linda is a bank teller and an active feminist. Well, as Kahneman points out, 2 cannot be more probable in the statistical sense, since there are many more bank tellers (the feminist ones plus all the rest) than there are feminist bank tellers. People who answer 2, he says, think it’s more probable that Linda belongs to a smaller population than to a larger, more inclusive one. Mathematically, this is just wrong. So Kahneman’s story is that we are primed by the irrelevant information about Linda’s personality to commit what he calls the ‘conjunction fallacy’, and so we make an ‘irrational’ judgment.
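Kahneman’s mathematical point can be made explicit; the notation here is the standard probabilistic one, introduced for illustration. Write T for ‘Linda is a bank teller’ and F for ‘Linda is active in the feminist movement’. Since every feminist bank teller is a bank teller, the conjunction rule of probability theory gives:

P(T ∧ F) = P(T) · P(F | T) ≤ P(T)

because P(F | T) can be at most 1. However suggestive the personality sketch, statement 2 can never be strictly more probable than statement 1.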

But this does not take into account some important nuances. Consider what the philosopher Paul Grice would have called the ‘conversational implicature’ of the puzzle as posed. According to Grice’s ‘maxim of relevance’, people will naturally assume that the information about Linda’s personality is being given to them because it is relevant. That leads them to infer a definition of ‘probability’ that is different from the strict mathematical one, because giving the mathematical answer would render the personality sketch pointless. (After all, we could reasonably wonder, why did they tell me this?) Could it be that respondents who give the ‘wrong’ answer are interpreting ‘probability’ as something more akin to narrative plausibility?

Tellingly, the psychologists Ralph Hertwig and Gerd Gigerenzer reported in 1999 that when you give people the same puzzle and ask them to guess about relative frequencies instead of what is more ‘probable’, they give the mathematically correct answer much more often. One might add that, if we are talking plausibility, the notion that Linda is a bank teller and an active feminist fits the whole story better. Arguably, therefore, it is a perfectly rational inference: all the available information is now consistent.
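Their frequency version of the puzzle ran roughly as follows (paraphrased here, not quoted verbatim): ‘There are 100 people who fit the description of Linda. How many of them are bank tellers? How many of them are bank tellers and active in the feminist movement?’ Asked to count cases rather than to judge ‘probability’, most respondents correctly make the second number no larger than the first.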

There are many other good reasons to give ‘wrong’ answers to questions that are designed to reveal cognitive biases. The cognitive psychologist Jonathan St B T Evans was one of the first to propose a ‘dual-process’ picture of reasoning in the 1980s, but he resists talk of ‘System 1’ and ‘System 2’ as though they are entirely discrete, and argues against the automatic inference from bias to irrationality. In a 2005 survey, he offers examples such as the following. Experimental subjects are asked:

If she meets her friend, she will go to a play.

She meets her friend.

What follows?

As one might expect, the vast majority of people (96 per cent in one study) give the correct logical inference: she goes to the play. But look what happens when an extra conditional statement is added:

If she meets her friend, she will go to a play.

If she has enough money, she will go to a play.

She meets her friend.

What follows?

Now, only 38 per cent of respondents said that she goes to the play. In strictly logical terms, the other 62 per cent were wrong. ‘In standard logic,’ Evans explains, ‘an argument that follows from some premises must still follow if you add new information.’ But the people who didn’t conclude that she goes to the play were not necessarily being irrational. Evans diagnoses their thought process like this: ‘The extra conditional statement introduces doubt about the truth of the first. People start to think that, even though she wants to go to the play with her friend, she might not be able to afford it.’ And so it is quite reasonable to be unsure whether she goes to the play.
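The logical property Evans invokes is what logicians call monotonicity; the notation below is standard and introduced here only for illustration. If a conclusion φ follows from a set of premises Γ, then in classical logic it still follows however many new premises are added:

if Γ ⊢ φ, then Γ ∪ {ψ} ⊢ φ for any further premise ψ.

Everyday reasoning is non-monotonic in just the way Evans describes: new information (she might not be able to afford the ticket) can legitimately weaken a conclusion that earlier premises seemed to settle.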

In general, Evans concludes that a ‘strictly logical’ answer will be less ‘adaptive to everyday needs’ for most people in many such cases of deductive reasoning. ‘A related finding,’ he continues, ‘is that, even though people may be told to assume the premises of arguments are true, they are reluctant to draw conclusions if they personally do not believe the premises. In real life, of course, it makes perfect sense to base your reasoning only on information that you believe to be true.’ In any contest between what ‘makes perfect sense’ in normal life and what is defined as ‘rational’ by economists or logicians, you might think it rational, according to a more generous meaning of that term, to prefer the former. Evans concludes: ‘It is far from clear that such biases should be regarded as evidence of irrationality.’

One interesting consequence of a wider definition of ‘rationality’ is that it might make it harder to convict those who disagree with us of stupidity. In an article titled ‘Making Climate-Science Communication Evidence-Based’ (2013), Dan M Kahan, a professor of law and psychology, argues that people who reject the established facts about global warming and instead adopt the opinions of their peer group are being perfectly rational in a certain light:

Nothing any ordinary member of the public personally believes about […] global warming will affect the risk that climate change poses to her, or to anyone or anything she cares about. […] However, if she forms the wrong position on climate change relative to the one [shared by] people with whom she has a close affinity – and on whose high regard and support she depends in myriad ways in her daily life – she could suffer extremely unpleasant consequences, from shunning to the loss of employment. Because the cost to her of making a mistake on the science is zero and the cost of being out of synch with her peers potentially catastrophic, it is indeed individually rational for her to attend to information on climate change in a manner geared to conforming her position to that of others in her cultural group.
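Kahan’s reasoning can be sketched as a bare expected-cost comparison; the symbols are introduced here for illustration and are not his. Let C > 0 be the personal cost of falling out of step with one’s peers, and p > 0 the probability of incurring it by dissenting. If the personal cost of simply being wrong about the science is effectively zero, then the expected cost of conforming is roughly 0, while the expected cost of dissenting is p · C > 0. Conforming therefore minimises her expected cost whatever the scientific truth turns out to be.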

Of course, when one combines what Kahan stresses are ‘individually rational’ decisions into a group belief, one might judge that the group as a whole is being irrational in rejecting robust scientific evidence. This is, perhaps, an intellectual version of the tragedy of the commons: there, too, each individual acts ‘rationally’ according to self-interest (getting the most they can out of the shared resource), but the aggregate behaviour (overgrazing and therefore exhausting a piece of land, for instance) seems irrational. Perhaps it should not be surprising that, if individuals can act rationally or irrationally, groups can, too. Not all crowds are wise.
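A toy calculation makes the structure of the commons case plain; the numbers are illustrative only. Suppose ten herders share a pasture, and each additional cow brings its owner a gain of 1 while imposing a grazing cost of 2 that is spread equally across all ten herders. Each herder’s private payoff for adding a cow is 1 − 2/10 = 0.8, which is positive, so everyone keeps adding cows; yet each addition makes the group as a whole worse off by 2 − 1 = 1, and the pasture is eventually exhausted.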

Nonetheless, there is surely empathetic value in the determination to discover what an individual gains by holding an apparently false belief. Kahan’s argument about the woman who does not believe in global warming is a surprising and persuasive example of a general principle: if we want to understand others, we can always ask what is making their behaviour ‘rational’ from their point of view. If, on the other hand, we just assume they are irrational, no further conversation can take place.

That, in a nutshell, is the problem of the practical application of behavioural economics to modern governance, in the form of nudge politics. Kahan argues against what he calls the ‘public irrationality thesis’: the idea that ordinary citizens act irrationally most of the time. He thinks this thesis is ungrounded, but the liberal-paternalist architects of nudge policy simply assume it – in, so they claim, our best interests.

The idea went mainstream with Nudge (2008), a book by the law professor Cass Sunstein and the economist Richard Thaler. Official policy, they suggest, should deliberately bypass the reflective or reasoning process in the citizenry. How? By designing ‘choice architecture’ in which the alternatives are precisely targeted at citizens’ cognitive weaknesses, ensuring that they will ‘automatically’ make the desired decision most of the time. So, for example, a school cafeteria should put healthy meals at eye level (following the strategy of supermarkets), while relegating ‘junk’ foods to harder-to-reach places. This manoeuvre targets laziness of immediate perception and action. And to get more organ donors, the state should automatically enrol everyone as a potential donor so that you have to ‘opt out’ if you really don’t want to be one, rather than having to opt in and register in the first place. This targets status quo bias.


Nudge was a huge hit in governments around the world. Sunstein was plucked from Harvard to become President Barack Obama’s regulation czar. The UK government set up a ‘Behavioural Insights Team’ (informally known as the ‘nudge unit’) in the Cabinet Office, later part-privatised, and similar approaches have been tried in France, Brazil, Australia and New Zealand.

So far, so apparently benign. Compared with some of Kahneman’s reasoning puzzles, it’s hard to see the mental tendencies targeted by nudges (status quo bias and so on) as rational-in-a-wider-sense. At best, they might be adaptive as rules of thumb for fast decision-making, but that doesn’t mean they will reliably result in sensible decisions – unless the choice architect, having unilaterally decided what should count as the ‘rational’ decision in a given context, sets up the environment in the right way. The nudger is thus a kind of benevolent god, designing a garden maze that leads sinners to the right exit. Nudge politics, in this way, may be seen as a development of dismaying tendencies that the philosopher Alasdair MacIntyre noted as early as 1988:

The consumer, the voter, and the individual in general are accorded the right of expressing their preferences for one or more out of the alternatives which they are offered, but the range of possible alternatives is controlled by an elite, and how they are presented is also so controlled. The ruling elites within liberalism are thus bound to value highly competence in the persuasive presentation of alternatives, that is, in the cosmetic arts. (Whose Justice? Which Rationality?)

Nudging is far from being a dystopian tool of state mind control: we remain free, after all, to make the ‘wrong’ choices. Yet there is something troubling about the way in which it is able to marginalise political discussion. Is it always irrational to eat fatty food? What about organ donation: should we always be happy about doing that? These are murky questions and opinions might differ, but the architects of choice never have to consult the public about them. Thus the attempt to bypass our reasoning selves creates a problem of consent, a short-circuiting of democracy. Why bother having a political argument if you can make (most) people do what you want anyway?

Further objections arise as nudging techniques are increasingly allied with the surveillance capabilities of personal technology – for instance, smart cards that offer discounts on local taxes if citizens use them to go to the gym regularly. This might make it easier to blame individuals for their own poor health, or to increase their insurance premiums, because of their demonstrable and recorded bad behaviour. (‘For what else could possibly explain their health problems but their personal failings?’, the critic Evgeny Morozov asks sardonically. ‘It’s certainly not the power of food companies or class-based differences or various political and economic injustices.’) If refusing a nudge carries a financial or other penalty, how free does the nudged choice remain?

It is possible, however, that too great a faith in nudging is itself irrational. A 2011 report by the House of Lords Science and Technology Sub-Committee on the subject went off-message by concluding that ‘soft’ approaches such as nudging were not sufficient to tackle major social problems in areas such as obesity and transport. Moreover, since nudging depends on citizens ordinarily following their automatic biases, its efficacy would be undermined if we could actually overcome our biases on a regular basis. And can we? That remains a matter of debate.

Many researchers think that we can improve our chances of employing rational processes in certain situations simply by reminding ourselves of the biases that might be triggered by the present problem. Kahneman considers this kind of ‘debiasing’ difficult to achieve reliably, but some of his colleagues in the field are more optimistic. One is the psychologist Keith E Stanovich, who in Rationality and the Reflective Mind (2011) prefers a tripartite picture of mental systems: the ‘autonomous’ (prey to biases), the ‘algorithmic’, and the ‘reflective’. In this way he distinguishes between intelligence narrowly conceived (whatever is measured by IQ tests or SAT scores), which is the business of the ‘algorithmic’ mind, and accurate reasoning, which he ascribes to the ‘reflective’ mind. And the good news, on his account, is that it is indeed possible to teach ‘rational thinking mindware and procedures’.

To do so would surely be an admirable exercise of public reason. But we can assume that it would also be very disappointing to the nudgers. Nudging depends on our cognitive biases being reliably exploitable, and a Stanovichian programme of mindware upgrades would interfere with that. In this sense, nudge politics is at odds with public reason itself: its viability depends precisely on the public not overcoming their biases.

Yet it is public reason that offers the most effective antidote to the whole climate of scepticism about our rational capacities. The one thing, after all, that ought to give the pessimists in behavioural economics and other fields hope is what they collectively prove so well: that the flaws of any one thinker can be corrected when reasoning is part of a conversation. The discourse on cognitive bias is itself conducted according to the highest standards of public rationality – and if that sounds like rather a hollow accolade by the lights of the bias theorists, recall that it was mere humans, reasoning in accordance with the same public standards of rationality, who managed to put a robot on Mars.


Indeed, even as he calls the ‘worship’ of reason a ‘delusion’, Jonathan Haidt celebrates humans’ ability to reason together. ‘If you put individuals together in the right way,’ he writes in The Righteous Mind, ‘such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system.’

Combining reasoning individuals ‘in the right way’ is important, so as to avoid the irrational effects introduced by phenomena such as group polarisation or informational cascades. Yet we are all familiar with various examples of the right way to combine individuals into public bodies capable of high-level reasoning: scientific societies, universities – even, sometimes, government debating chambers. Indeed, reasoning is the social institution whose reliability underwrites all the other civil and political institutions of civilised life.

And so there is less reason than many think to doubt humans’ ability to be reasonable. The dissenting critiques of the cognitive-bias literature argue that people are not, in fact, as individually irrational as the present cultural climate assumes. And proponents of debiasing argue that we can each become more rational with practice. But even if we each acted as irrationally as often as the most pessimistic picture implies, that would be no cause to flatten democratic deliberation into the weighted engineering of consumer choices, as nudge politics seeks to do. On the contrary, public reason is our best hope for survival. Even a reasoned argument to the effect that human rationality is fatally compromised is itself an exercise in rationality. Albeit rather a perverse, and – we might suppose – ultimately self-defeating one.