Scientific experiments don’t generally attract widespread attention. But the ‘Gorillas in Our Midst’ (1999) experiment on visual attention by the American psychologists Daniel Simons and Christopher Chabris has become a classic. In his book Thinking, Fast and Slow (2011), the Nobel laureate Daniel Kahneman highlights this experiment and argues that it reveals something fundamental about the human mind, namely, that humans are ‘blind to the obvious, and that we also are blind to our blindness’. Kahneman’s claim captures much of the current zeitgeist in the cognitive sciences, and arguably even provides a defining slogan for behavioural economics: as the economist Steven Levitt put it, ‘that one sentence summarises a fundamental insight’ about the life’s work of Kahneman himself. The notion of prevalent human blindness also fuels excitement about artificial intelligence (AI), especially its capacity to replace flawed and error-prone human judgment.
But are humans truly blind to the obvious? Recent research suggests otherwise: the claim – so central to much of the cognitive sciences, behavioural economics, and now AI – is wrong. How, then, could such an influential claim be so mistaken?
Let’s start with a careful look at Simons and Chabris’s classic experiment, and see how it might suggest something different, and more positive, about human nature. In the experiment, subjects were asked to watch a short video and to count the basketball passes. The task seemed simple enough. But it was made more difficult by the fact that subjects had to count basketball passes by the team wearing white shirts, while a team wearing black shirts also passed a ball. This created a real distraction. (If you haven’t taken the test before, consider briefly taking it here before reading any further.)
The experiment came with a twist. While subjects tried to count the basketball passes, a person dressed in a gorilla suit walked slowly across the screen. The surprising fact is that some 70 per cent of subjects never saw the gorilla. When they watched the clip a second time, they were dumbfounded by the fact that they had missed something so obvious. The video of the surprising gorilla has been viewed millions of times on YouTube – remarkable for a scientific experiment. Different versions of the gorilla experiment, such as the ‘moonwalking bear’, have also received significant attention.
Now, it’s hard to argue with the findings of the gorilla experiment itself. It’s a fact that most people who watch the clip miss the gorilla. But it does not necessarily follow that this illustrates – as both the study’s authors and Kahneman argue – that humans are ‘blind to the obvious’. A completely different interpretation of the gorilla experiment is possible.
Imagine you were asked to watch the clip again, but this time without receiving any instructions. After watching the clip, imagine you were then asked to report what you observed. You might report that you saw two teams passing a basketball. You are very likely to have observed the gorilla. But having noticed these things, you are unlikely to have simultaneously recorded any number of other things. The clip features a large number of other obvious things that one could potentially pay attention to and report: the total number of basketball passes, the overall gender or racial composition of the individuals passing the ball, the number of steps taken by the participants. If you are looking for them, many other things are also obvious in the clip: the hair colour of the participants, their attire, their emotions, the colour of the carpet (beige), the ‘S’ letters spray-painted in the background, and so forth.
In short, the list of obvious things in the gorilla clip is extremely long. And that’s the problem: call it the fallacy of obviousness. The fallacy arises because all kinds of things are readily evident in the clip, and missing any one of them is no basis for saying that humans are blind. The experiment is set up so that people miss the gorilla because they are distracted by counting basketball passes. For subjects preoccupied with that counting task, missing the gorilla is hardly surprising. Only in retrospect does the gorilla seem prominent and obvious.
But the very notion of visual prominence or obviousness is extremely tricky to define scientifically, because one needs to consider relevance – or, to put it differently, obvious to whom, and for what purpose?
To better understand the fallacy of obviousness, and how even such celebrated scientists as Kahneman struggle with it, some additional background is needed. Kahneman’s focus on obviousness comes directly from his background and scientific training in an area called psychophysics. Psychophysics focuses largely on how environmental stimuli map onto the mind – specifically, on the basis of the actual characteristics of the stimuli rather than the characteristics or nature of the mind. From the perspective of psychophysics, obviousness – or, as it is called in the literature, ‘salience’ – derives from the inherent nature or characteristics of the environmental stimuli themselves: their size, contrast, movement, colour or surprisingness. In his Nobel Prize lecture in 2002, Kahneman called these ‘natural assessments’. From this perspective, yes, the gorilla indeed should be obvious to anyone watching the clip. But by the same logic, any number of other things in the clip – as discussed above – should also be obvious.
So if the gorilla experiment doesn’t illustrate that humans are blind to the obvious, then what exactly does it illustrate? What’s an alternative interpretation, and what does it tell us about perception, cognition and the human mind?
The alternative interpretation says that what people are looking for – rather than what people are merely looking at – determines what is obvious. Obviousness is not self-evident. Or, as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’ This isn’t an argument against facts, or for ‘alternative facts’, or anything of the sort. It’s an argument about what qualifies as obvious, why and how. Obviousness depends on what is deemed relevant for a particular question or task at hand. Rather than passively recording everything directly in front of us, humans – and other organisms, for that matter – actively look for things. The implication (contrary to psychophysics) is that mind-to-world processes drive perception rather than world-to-mind processes. The gorilla experiment itself can be reinterpreted to support this view of perception: what we see depends on our expectations and questions – on what we are looking for, and on what question we are trying to answer.
At first glance, that might seem like a rather mundane interpretation, particularly when compared with the startling claim that humans are ‘blind to the obvious’. But it’s more radical than it appears. This interpretation of the gorilla experiment puts humans centre-stage in perception, rather than relegating them to passively recording their surroundings and environments. It says that what we see is not so much a function of what is directly in front of us (Kahneman’s natural assessments), or of what we are passively recording in camera-like fashion, but rather a function of what we have in our minds – for example, of the questions we have in mind. People miss the gorilla not because they are blind, but because they were prompted – in this case, by the scientists themselves – to pay attention to something else. The question ‘How many basketball passes?’ (just like any question: ‘Where are my keys?’) primes us to see certain aspects of a visual scene at the expense of any number of other things.
The biologist Jakob von Uexküll (1864-1944) argued that all species, humans included, have a unique ‘Suchbild’ – German for a seek- or search-image – of what they are looking for. In the case of humans, this search-image includes the questions, expectations, problems, hunches or theories that we have in mind, which in turn structure and direct our awareness and attention. The important point is that humans do not observe scenes passively or neutrally. In 1966, the philosopher Karl Popper conducted an informal experiment to make this point. During a lecture at the University of Oxford, he turned to his audience and said: ‘My experiment consists of asking you to observe, here and now. I hope you are all cooperating and observing! However, I feel that at least some of you, instead of observing, will feel a strong urge to ask: “What do you want me to observe?”’ Then Popper delivered his insight about observation: ‘For what I am trying to illustrate is that, in order to observe, we must have in mind a definite question, which we might be able to decide by observation.’
In other words, there is no neutral observation. The world doesn’t tell us what is relevant. Instead, it responds to questions. When looking and observing, we are usually directed toward something, toward answering specific questions or satisfying some curiosities or problems. ‘All observation must be for or against a point of view,’ is how Charles Darwin put it in 1861. Similarly, the art historian Ernst Gombrich in 1956 emphasised the role of the ‘beholder’s share’ in observation and perception.
Quite simply, this human-centric and question-driven view of perception cannot be reconciled with Kahneman’s presumptions about obviousness, which make it a function of the actual characteristics of the things in front of us (their size, contrast or colour) without acknowledging the Suchbild – the cognitive orientation or nature of the perceiver. Kahneman is right that nature plays a role, but it is the nature of the person or organism doing the perceiving, not the natural or inherent qualities of the object or thing seen. That is a radical shift. The overwhelming amount of ‘stuff’ directly in front of us precludes any kind of comprehensive or objective recording of what is in our visual field. The problem, as Sherlock Holmes put it, ‘lay in the fact of there being too much evidence. What was vital was overlaid and hidden by what was irrelevant.’ So, given the problem of too much evidence – again, think of all the things that are evident in the gorilla clip – humans try to home in on what might be relevant for answering particular questions. We attend to what might be meaningful and useful.
If we make blindness or bias the key characteristic of human nature, we’ll never get to these deeper insights. That said, it is worth recognising that Kahneman’s focus on human blindness and bias – building on Herbert Simon’s 1950s work on bounded rationality – can be seen as an improvement on economic models that ascribe almost god-like perceptual abilities to economic actors. That assumption is built into the efficient market hypothesis, which – as the economist James Buchanan argued in 1959 – presumes ‘omniscience in the observer’. These represent two extremes, two caricatures, of human cognition and judgment. At one extreme we have an economics that presumes perceptual omniscience, where everything obvious (and relevant) is already priced. As the quip goes: there are no $500 bills on sidewalks, because all-seeing economic agents quickly snap them up. At the other extreme we have behavioural economics, which focuses on human bias and blindness by pointing out the obvious things that humans miss.
But behavioural approaches put scientists themselves onto a god-like perch from which they can point out human failure. A third option – as discussed above – focuses on the role of human ingenuity in crafting the questions, expectations, hypotheses and theories we use to make sense of our environments. It’s this approach that gives humanity its due, rather than succumbing to caricatured – whether omniscient or blind – views of cognition and human nature. This approach also links to one suggested by Adam Smith, who in 1759 argued that ‘in the great chessboard of human society, every single piece has a principle of motion of its own’. Or, as Emma Rothschild summarised in her book Economic Sentiments (2001), what Smith was after was a ‘theory of people with theories’. This can readily be contrasted with behavioural theories that focus on widespread human folly, rather than allowing individuals any meaningful form of rationality or intelligence.
In fact, the current focus on human blindness and bias – across psychology, economics and the cognitive sciences – has contributed to the present orthodoxy that sees computers and AI as superior to human judgment. As Kahneman argued in a 2017 presentation: ‘It’s very difficult to imagine that with sufficient data there will remain things that only humans can do.’ He even offered a prescription: we ‘should replace humans by algorithms whenever possible’. Because humans are blind and biased, and can’t separate the noise from the signal, human decision-making should increasingly be left to computers and decision algorithms. This ethos has helped to fuel great excitement about AI and its attendant tools, such as neural networks and deep learning. Proponents even claim that big data and associated computational methods will change the nature of science. For example, in 2008 Chris Anderson, then editor of Wired, boldly proclaimed ‘the end of theory’, as the ‘data deluge makes the scientific method obsolete’. Naturally, what we assume about human nature will determine the type of science we do, and shape the types of questions we ask of it.
However, computers and algorithms – even the most sophisticated ones – cannot address the fallacy of obviousness. Put differently, they can never know what might be relevant. Some of the early proponents of AI recognised this limitation (for example, the computer scientists John McCarthy and Patrick Hayes in their 1969 paper on ‘representation’ and the frame problem). But the problem has been forgotten amid the present euphoria over large-scale information- and data-processing. Simple examples highlight the point, such as what happens when Google’s self-driving car encounters a cyclist doing a ‘track stand’ at an intersection (where the fixed-gear cyclist pedals to stay still). Computers do only what they are programmed to do; they lack the imaginative and creative capacities that allow for improvisation. Of course, algorithms can be incrementally modified to deal with novel situations. But this requires human input, and is scarcely autonomous.
Knowing what to observe, what might be relevant and what data to gather in the first place is not a computational task – it’s a human one. The present AI orthodoxy neglects the question- and theory-driven nature of observation and perception. The scientific method illustrates this well. And so does the history of science. After all, many of the most significant scientific discoveries resulted not from reams of data or large amounts of computational power, but from a question or theory.
To illustrate, consider Isaac Newton. His observation of a falling apple – as he told the story to his friend William Stukeley – was extremely mundane and simple. Any number of apples, and other objects for that matter, had undoubtedly been observed to fall before Newton’s observation. But it was only with Newton’s question and theory that this mundane observation took on new relevance and meaning. Science also shows that even visible or physical obviousness is scarcely straightforward. Newton showed how ‘white’ sunlight was in fact deceptive, being constituted by something else. Instead of relying on reams of data or computational power, Newton performed his so-called experimentum crucis – a single observation with a prism, motivated by a hunch, a hypothesis and a theory. By this method he showed that white light is composed of a spectrum of colours. The theory preceded the data and the observation, not the other way around. Similarly, theories of a heliocentric Universe caused the ‘obvious’ observations of the Sun circling the Earth – or of the retrograde loops of planets (as observed from Earth) – to take on completely new meaning and relevance.
In short, as Albert Einstein put it in 1926: ‘Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed.’ The same applies whether we are talking about chest-thumping gorillas or efforts to probe the very nature of reality.
How we interpret the gorilla experiment might be seen as a kind of Rorschach test: how you read the finding depends on what you are looking for. On the one hand, the test could indeed be said to prove blindness. On the other, it shows that humans attend to visual scenes in a directed fashion, based on the questions and theories they have in mind (or that they’ve been primed with). How we interpret the experiment is scarcely a trivial matter. The worry is that the growing preoccupation of many behavioural scientists – across psychology, economics and the cognitive sciences – with blindness and bias leads them to look for evidence of exactly that. Highlighting bias and blindness is certainly catchy and fun, and the argument that humans are blind to the obvious is admittedly far more memorable than an interpretation that simply says humans respond to questions. But scientists’ own preoccupation with blindness risks driving the types of experiments they construct, and what they then observe, look for, focus on and say. And looking for validations of blindness and bias, they are sure to find them.
The construction of scientific experiments such as the gorilla study presents a problem. Just as Kahneman, in an open letter published by the journal Nature in 2012, called into question some of the findings in the literature on psychological priming, so there are similar questions about the interpretation and setup of studies that point to human blindness and bias. But, while Kahneman calls for large-scale replications of priming studies, the argument here is not that we need more studies or data to verify that people indeed miss blatantly obvious gorillas. Instead, we need better interpretation and better theories. Additional studies and data would undoubtedly confirm the gorilla finding; the more important issues are how that finding is interpreted and how the experiment itself is constructed.
Having a ‘blind to the obvious’ baseline assumption about human nature biases the types of experiments that scientists craft in the first place, what they go on to observe and look for, and how they interpret what they find. Importantly, the assumption of human blindness or bias makes scientists themselves blind to the other, more positive aspects of human cognition and nature. The problem thus lies further upstream, in the a priori questions and theories that science starts with. If our theory focuses on some aspect of human blindness and bias, and if we construct lab experiments to prove it (or look for naturally occurring instances of it), then yes, we are likely to find evidence. But surely there are more fundamental characteristics of the human mind than blindness and bias. And the hunt for bias and blindness has also led to ridiculous conclusions, such as the claim in 2013 by the psychologist Keith Stanovich that ‘humans are often less rational than bees’ and other animals. It’s this type of logic that has fuelled popular books such as Nudge (2008) by Richard Thaler and Cass Sunstein, or The Rationality Quotient (2016) by Richard West, Maggie Toplak and Stanovich, which try to prescribe how individuals might be less blind and biased.
Of course, efforts to correct behaviour and nudge error-prone decision-making toward better outcomes are laudable. But in many cases, it’s hard to trust the prescriptions that some bias-focused theories make about what ought to count as correct or rational behaviour. Consider a simple example. The ultimatum game is a common task in experimental labs in economics and psychology. In the game, people are paired up and one of the two is given some amount of money, say $10. The person receiving the money must decide how much of that $10 to offer to the person she is paired with. If the other person accepts the offer, both keep their respective shares; if he rejects it, neither of them gets anything.
Some economists say that the rational or correct amount to offer the other person is the smallest possible increment, perhaps just one cent, and that the rational or correct response is to accept any amount, even one cent – otherwise you are irrational, since one cent is better than nothing. Of course, in real-life settings, across many cultural contexts, people deviate wildly from these ideas about rational behaviour: they make far larger offers and refuse small ones. Some economists call this irrational and biased behaviour. But a more reasonable interpretation is that individuals are behaving according to norms appropriate to the cultural contexts, situations and people with whom they find themselves interacting. As with the gorilla experiment, the same data and findings lend themselves to conflicting arguments – either for bias, or for a more human-centric, situational logic.
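To make the payoff structure concrete, here is a minimal sketch in Python. The $10 pot, the specific offers and the responder’s ‘fairness’ threshold are illustrative assumptions chosen for demonstration, not figures from any particular study.

```python
# Minimal sketch of the ultimatum game's payoff structure (illustrative only).
# The $10 pot, the offers and the responder's rejection threshold below are
# hypothetical values chosen for demonstration, not data from any study.

POT = 10.00  # total amount given to the proposer, in dollars

def play_ultimatum(offer, min_acceptable):
    """Return (proposer_payoff, responder_payoff) for a single round.

    offer          -- amount the proposer gives to the responder
    min_acceptable -- smallest offer the responder is willing to accept
    """
    if offer >= min_acceptable:
        return POT - offer, offer   # offer accepted: both keep their shares
    return 0.0, 0.0                 # offer rejected: neither gets anything

# The textbook 'rational' baseline: offer one cent, accept anything above zero.
print(play_ultimatum(offer=0.01, min_acceptable=0.01))   # (9.99, 0.01)

# Behaviour closer to what is observed in many experiments: a near-even offer,
# and rejection of offers the responder regards as unfairly low.
print(play_ultimatum(offer=4.00, min_acceptable=3.00))   # (6.0, 4.0)
print(play_ultimatum(offer=1.00, min_acceptable=3.00))   # (0.0, 0.0) -- both lose
```

The sketch simply makes the earlier point explicit: by the payoff arithmetic alone, accepting one cent beats rejecting it, yet a responder applying a norm-based threshold can coherently turn a low offer down. The same rules support both readings.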
Any well-intentioned effort to correct human blindness also needs to recognise that such corrections come with costs and tradeoffs. The trivial point is that if we want to correct for the blindness of not spotting something (such as the gorilla), then the correction comes at the cost of not attending to any number of other obvious things (eg, the number of basketball passes). The far more important point is that we also need to recognise and investigate the remarkable human capacity for generating the questions and theories that direct our awareness and observations in the first place. Bias- and blindness-obsessed studies will never get us to this vital recognition. In other words, continuing to construct experiments that look for and demonstrate bias and blindness – adding to the already large and growing list of cognitive biases and forms of blindness – will always leave out the remarkable capacity of humans to generate questions and theories. At its worst, the fascination with blindness and bias flattens humans, and science, to a morally dubious game of ‘gotcha’.
To illustrate just how far we’ve come in ridiculing human capacities: in 2017, Kahneman concluded his aforementioned presentation to academics by arguing that computers or robots are better than humans on three essential dimensions – they are better at statistical reasoning and less enamoured with stories; they have higher emotional intelligence; and they exhibit far more wisdom than humans. Those are radical claims. Now, there’s no doubt that developments in computing and AI – for example, machine and deep learning, neural networks, big-data analytics – are exciting and promising. The prospect of such things as autonomous vehicles and increasingly automated work processes is exciting, and will likely free up human capacities for other, more creative tasks. The prospects of AI appear daunting as well, leading to concerns about large-scale joblessness or fears about superintelligent computers obliterating humanity (see Nick Bostrom’s 2014 book on the subject). So there is reason for both optimism and worry.
But we ought to be concerned that most of these conceptions of AI build on an extremely limited view of what human rationality, judgment and reasoning are in the first place. These approaches presume that computation exhausts or fully captures the human mind (see, for example, Ray Kurzweil’s book How to Create a Mind (2012)), and that intelligence and rationality are largely a matter of statistical probabilities and mathematical calculation. Of course, if we compare humans and computers on their ability to compute, there is no question that computers outperform humans. But intelligence and rationality are more than calculation or computation; they have more to do with the human ability to attend to and identify what is most relevant.
Deciding what is relevant and meaningful, and what is not, is vital to intelligence and rationality. And relevance and meaning remain outside the realm of AI (as illustrated by the so-called frame problem). Computers can be programmed to recognise and attend to certain features of the world – features that need to be clearly specified and programmed a priori. But they cannot be programmed to make new observations, to ask novel questions or to meaningfully adjust to changing circumstances. The human ability to ask new questions, to generate hypotheses, and to identify and find novelty is unique and not programmable. No statistical procedure allows one to see a mundane, taken-for-granted observation in a radically different and new way. That’s where humans come in.
We will undoubtedly continue to interact with computers in novel and powerful ways. However, the ongoing claims that AI will surpass human intelligence – or that the mind is simply a computer – buy into the type of AI euphoria that has been widespread in academia since the 1950s. Many of the obvious problems with this optimism and the mind-computer metaphor were discussed by Hubert Dreyfus many years ago in his book What Computers Can’t Do (1972). And the problems and limits of computation were recognised far earlier, for example by the Victorian-era computing pioneer Ada Lovelace (see the Aeon essay by David Deutsch).
In all, my central concern is that the current obsession with human blindness and bias – endemic to behavioural economics and much of the cognitive, psychological and computational sciences – has made scientists themselves blind to the more generative and creative aspects of human nature and the mind. Yes, humans do indeed miss many ‘obvious’ things, appearing to be blind, as Kahneman and others argue. But not everything that is obvious is relevant and meaningful. Seen this way, human ‘blindness’ is a feature, not a bug: it reflects a mind directed toward what matters, rather than one that records everything indiscriminately.
Humans do a remarkable job of generating the questions, expectations, hypotheses and theories that direct their awareness and attention toward what is relevant, useful and novel. It is these generative and creative qualities of the human mind that deserve further attention. After all, these and related aspects of mind are surely responsible for the significant creativity, technological advances, innovation and large-scale flourishing that we readily observe around us. Of course, humans continue to make mistakes, and any number of small- and large-scale problems and pathologies persist throughout the world. But the generative and creative capacities of the human mind merit careful study, since insights from this work can in turn help to solve further problems, and lead to further technological advances and progress.