Consciousness is real

Consciousness is neither a spooky mystery nor an illusory belief. It’s a valid and causally efficacious biological reality

by Massimo Pigliucci

Detail from Self Portrait (1500) by Albrecht Dürer. The text to the right broadly translates as ‘Thus I, Albrecht Dürer from Nuremberg, created myself with characteristic colours at the age of 28 years.’ Courtesy Wikipedia/Alte Pinakothek München

These days it is highly fashionable to label consciousness an ‘illusion’. This in turn fosters the impression, especially among the general public, that the way we normally think of our mental life has been shown by science to be drastically mistaken. While this is true in a very specific and technical sense, consciousness remains arguably the most distinctive evolved feature of humanity, enabling us not only to experience the world, like other animal species do, but to deliberately reflect on our experiences and to change the course of our lives accordingly.

A lot of the confusion, as we shall see, hinges on what exactly we mean by both ‘consciousness’ and ‘illusion’. In order to usefully fix our ideas instead of meandering across a huge literature in philosophy of mind and cognitive science, consider a fascinating essay for Aeon by Keith Frankish. He begins by making a distinction between phenomenal consciousness and access consciousness. Phenomenal consciousness is what produces the subjective quality of experience, what philosophers call ‘qualia’. This is what makes it possible for us (and, presumably, for a number of other animal species) to experience what it is like, for example, to see red, or taste a persimmon, or write essays on philosophy of mind.

By contrast, access consciousness makes it possible for us to perceive things in the first place. As Frankish puts it, access consciousness is what ‘makes sensory information accessible to the rest of the mind, and thus to “you” – the person constituted by these embodied mental systems’. Before you can experience what it is like to see red, you have to be able to actually see red. Frankish agrees that access consciousness is a real thing, not an illusion, though he correctly adds that we are still very early on in our quest to understand it scientifically. Perhaps the best-known aspect of access consciousness is the visual system, the part of the central nervous system that makes it possible for us to see the world. We know quite a bit about the anatomical, physiological and neurobiological aspects of this system, and it is reasonable to presume that other constituents of access consciousness work in a similar way, and that science can, at least in principle, understand them by similar experimental and observational approaches.

But, argues Frankish, a number of philosophers would say that, even if we had a complete description of access consciousness, there would still be something fundamentally missing from our picture of consciousness as a whole. That missing part – phenomenal consciousness – is what underpins what it is like to feel something, the question Thomas Nagel famously posed in his classic paper ‘What Is It Like to Be a Bat?’ (1974).

Here is where the fundamental divide in philosophy of mind occurs, between ‘dualists’ and ‘illusionists’. Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science, as David Chalmers has been arguing for most of his career, for instance in his book The Conscious Mind (1996).

By embracing the antiscientific position, Chalmers & co are forced to go dualist. Dualism is the notion that physical and mental phenomena are somehow irreconcilable, two different kinds of beasts, so to speak. Classically, dualism concerns substances: according to René Descartes, the body is made of physical stuff (in Latin, res extensa), while the mind is made of mental stuff (in Latin, res cogitans). Nowadays, thanks to our advances in both physics and biology, nobody takes substance dualism seriously anymore. The alternative is something called property dualism, which acknowledges that everything – body and mind – is made of the same basic stuff (quarks and so forth), but holds that this stuff somehow (notice the vagueness here) changes when it gets organised into brains, at which point special properties appear that are nowhere else to be found in the material world. (For more on the difference between property and substance dualism, see Scott Calef’s definition.)

The ‘illusionists’, by contrast, take the scientific route, accepting physicalism (or materialism, or some other similar ‘ism’), meaning that they think – with modern science – not only that everything is made of the same basic kind of stuff, but that there are no special barriers separating physical from mental phenomena. However, since these people agree with the dualists that phenomenal consciousness seems to be spooky, the only option open to them seems to be that of denying the existence of whatever appears not to be physical. Hence the notion that phenomenal consciousness is a kind of illusion.

Illusionism was labelled ‘the silliest claim ever made’ by Galen Strawson in The New York Review of Books last year, but is defended by other prominent philosophers, particularly by Daniel Dennett. Indeed, Dennett arguably is the one who started this trend back in the early 1990s, with the publication of his influential book Consciousness Explained (1991) – which, though certainly interesting, did not, in fact, explain consciousness. My own preference among Dennettian books on that topic goes by far to Elbow Room: The Varieties of Free Will Worth Wanting (1984).

While I am tempted to sympathise with Strawson here, I think that Dennett is closer to the mark. To see why, let’s consider his renowned analogy, what he calls an ‘intuition pump’ about phenomenal consciousness, as presented in Consciousness Explained. Dennett suggests that phenomenal consciousness is a ‘user illusion’ akin to the icons we’re used to seeing on our desktop and laptop computer screens (and on tablets and smart phones). Here is how he puts it:

When I interact with the computer, I have limited access to the events occurring within it. Thanks to the schemes of presentation devised by the programmers, I am treated to an elaborate audiovisual metaphor, an interactive drama acted out on the stage of keyboard, mouse, and screen. I, the User, am subjected to a series of benign illusions: I seem to be able to move the cursor (a powerful and visible servant) to the very place in the computer where I keep my file, and once I see that the cursor has arrived ‘there’, by pressing a key I get it to retrieve the file, spreading it out on a long scroll that unrolls in front of a window (the screen) at my command. I can make all sorts of things happen inside the computer by typing in various commands, pressing various buttons, and I don’t have to know the details; I maintain control by relying on my understanding of the detailed audiovisual metaphors provided by the User illusion.

This is actually a very powerful (metaphorical) description of the relationship between phenomenal consciousness and the underlying neural machinery that makes it possible. But why on earth would we call it an ‘illusion’? The term brings to mind trickery, smoke and mirrors. Which is most definitely not what is going on. Computer icons, cursors and so forth are not illusions; they are causally efficacious representations of underlying machine-language processes. It would be too tedious for most users to think in terms of machine-language, and too slow to interact with the computer by that means. That’s why programmers gave us icons and cursors. But these are causally connected with the underlying machine code, which is why we can actually make things happen in a computer. If they were illusions, nothing would happen – they would be causally inert epiphenomena.
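
To make the point concrete, here is a minimal sketch, in Python, of the kind of layering Dennett’s analogy describes. The names (DesktopIcon, low_level_write) are hypothetical, invented purely for illustration; no real operating system or GUI toolkit is being modelled.

# A toy sketch of the 'user interface' relationship described above.
# All names are hypothetical; this models no real GUI toolkit.

def low_level_write(path, data):
    """Stand-in for the machine-level work the user never sees:
    system calls, buffers, disk blocks and so on."""
    with open(path, "wb") as f:
        f.write(data)

class DesktopIcon:
    """The user-facing representation: drastically simplified,
    yet causally connected to the underlying machinery."""
    def __init__(self, path):
        self.path = path

    def save(self, text):
        # If the icon were a mere illusion, nothing would change on disk.
        low_level_write(self.path, text.encode("utf-8"))

icon = DesktopIcon("essay.txt")
icon.save("Consciousness is real.")   # something really does happen in the machine

The point of the sketch is only that the simplicity of the top layer says nothing against its causal reality: the high-level call really does change the state of the disk, just as icons and cursors really do make things happen in the computer.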

We don’t need access to our neural mechanisms, just like a computer user doesn’t need to know machine-language

Or take a more mundane example. Would you call the wheel of your car an illusion? And yet, when you turn it this way or that, you are definitely not aware of the (increasingly) complex servo- and electronic mechanisms that translate your simple movements into your car actually turning this way or that. When you turn the steering wheel in a circular fashion, your car’s wheels don’t turn the same way; they pivot right or left on a horizontal plane (which is why you can have cars with levers moving right or left, instead of rotating steering wheels). The steering wheel, then, is in a sense a representation of what the car will do if you act on it this way or that, and it works because it is causally connected to the underlying machinery in a way that makes it possible for you to efficiently operate such machinery without being aware of it.

Similarly with phenomenal consciousness. The ‘what is it like’ feelings and thoughts that we have are high-level representations of the (entirely different in nature) underlying neural mechanisms that make it possible for us to perceive, react to, and navigate the world. Instead of more or less clever programmers, we have to thank billions of years of evolution by entirely mindless natural selection for these causally efficacious representations. To call them illusions is to derail our thinking along unproductive tracks, leading us – if we are not careful – to metaphysical and scientific claims that are just as problematic as those of Chalmers & co, and that Strawson is not entirely off in calling ‘silly’.

It is certainly true, as the illusionists maintain, that we do not have access to our own neural mechanisms. But we don’t need to, just like a computer user doesn’t need to know machine-language – and, in fact, is far better off for that. This does not at all imply that we are somehow mistaken about our thoughts and feelings. No more than I as a computer user might be mistaken about which ‘folder’ contains the ‘file’ on which I have been ‘writing’ this essay.

This illusion talk can be triggered by what I think of as the reductionist temptation, the notion that lower levels of description – in this case, the neurobiological one – are somehow more true, or even the only true ones. The fallaciousness of this kind of thinking can be brought to light in a couple of ways. First of all, and most obviously, why stop at the neurobiological level? Why not say that neurons are themselves illusions, since they are actually made of molecules? But wait! Molecules too are illusions, as they are really made of quarks. Or strings. Or fields. Or whatever the latest from fundamental physics says.

That way of thinking is, in fact, appealing to some greedy reductionists, but it truly is silly for the simple reason that it is unworkable. And it is unworkable because, when it comes to human understanding, different levels of description are useful for different purposes. If we are interested in the biochemistry of the brain, then the proper level of description is the subcellular one, taking lower levels (eg, the quantum one) as background conditions. If we want a broader picture of how the brain works, we need to move up to the anatomical level, which takes all previous levels, from the subcellular to the quantum one, as background conditions. But if we want to talk to other human beings about how we feel and what we are experiencing, then it is the psychological level of description (the equivalent of Dennett’s icons and cursors) that, far from being illusory, is the most valuable. Which is why Paul and Patricia Churchland’s old proposal – that we should replace ‘folk psychology’ talk about, say, pain, with more ‘scientific’ talk of the firing of C-fibres (part of the neural substrate that makes feeling pain possible) – truly was silly. It’s just not going to happen, any more than all of us end-users of computers will suddenly learn machine-language and forgo cursors and icons.
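
As a toy illustration of this point about levels – in Python, with entirely made-up numbers and function names rather than anything drawn from real neuroscience – the same event can be reported at a lower level (a firing rate) or at the folk-psychological level (‘that hurts’); both descriptions are true, and each is useful for a different purpose.

# A toy illustration of levels of description; the numbers and the
# function names are invented for the example, not real neuroscience.

def c_fibre_firing_rate(stimulus_intensity):
    """Lower-level description: a fictional firing rate in Hz."""
    return 40.0 * stimulus_intensity

def in_pain(stimulus_intensity):
    """Higher-level, folk-psychological description, built on top of
    the lower-level one rather than replacing it."""
    return c_fibre_firing_rate(stimulus_intensity) > 20.0

print(c_fibre_firing_rate(0.9))   # the neuroscientist's report: 36.0
print(in_pain(0.9))               # the everyday report: True ('it hurts')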

When illusionists argue that what we experience as qualia are ‘nothing like’ our actual internal mental mechanisms, they are, in a sense, right. But they also seem to forget that everything we perceive about the outside world is a representation and not the thing-in-itself. Take the visual system, which as I mentioned above is one of the best-understood instances of access consciousness, and which makes phenomenal consciousness possible. Our eyes in reality perceive a very narrow band of the electromagnetic spectrum, determined by the specific environment in which we have evolved as social primates, as well as by the type of radiation that comes from the Sun and passes through the filters of Earth’s atmosphere. There is, in other words, a hell of a lot that we don’t see. At all.

Think of consciousness as a weakly emergent phenomenon, not dissimilar from the wetness of water

Even the fact that we see the world the right way up instead of upside down is a trick (an ‘illusion’?) of the brain, since the optics of our eyes are such that outside objects generate an inverted set of signals hitting our retina. It is the brain that re-interprets the corresponding electrical impulses so that we see the world correctly. Some (but not all) people can experience how bizarre this is by using upside-down goggles. These goggles invert the image coming from the outside before the signals stimulate the retina, thus showing subjects what the world would look like if their brain didn’t compensate for the inversion. In some individuals, the brain adapts quickly and reinverts the signal pattern, so that the world ends up looking ‘normal’ again. Until, that is, the subjects take off their goggles, at which point some of them see the world upside down until their brain compensates again. Why on earth do things work that way? Because the human eye evolved to exploit basic principles of optics, but the brain improves on them since it turns out that it is easier for human beings to navigate the world if they see it the right side up, rather than inverted.
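
A deliberately crude sketch of that double inversion – treating the visual scene as nothing more than a short list of labelled rows, which is of course not how vision actually works – may help fix the idea.

# A deliberately crude sketch: the 'scene' is just a list of rows.
scene = ["sky", "horizon", "ground"]

retinal_image = list(reversed(scene))      # the optics of the eye invert the image
percept = list(reversed(retinal_image))    # the brain re-interprets the signals

print(retinal_image)   # ['ground', 'horizon', 'sky'] -> what the raw optics deliver
print(percept)         # ['sky', 'horizon', 'ground'] -> the world seen the right way up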

Following John Searle, I think that consciousness is an evolved biological mechanism with adaptive value, and that treating it as an illusion is, in an important sense, denying the very data that need to be explained. In his book The Rediscovery of the Mind (1992), Searle writes:

What I want to insist on, ceaselessly, is that one can accept the obvious facts of physics – for example, that the world is made up entirely of physical particles in fields of force – without at the same time denying the obvious facts about our own experiences – for example, that we are all conscious and that our conscious states have quite specific irreducible phenomenological properties.

‘Irreducibility’ here is not a mystical concept, and it can be cashed out in a number of ways. I’m not sure which way Searle himself leans, but I think of consciousness as a weakly emergent phenomenon, not dissimilar from, say, the wetness of water (though a lot more complicated). Individual molecules of water have a number of physical-chemical properties, but wetness isn’t one of them. They acquire that property only under specific environmental circumstances (in terms of ambient temperature and pressure) and only when there is a sufficiently large number of them. Crucially, the properties of water depend not just on the number and arrangement of molecules, but also on how the molecules themselves are constituted. If they had a different number of neutrons or electrons within their atoms, or a different number of atoms, they would have different properties.
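
As a toy sketch of what ‘weakly emergent’ means here – with arbitrary thresholds standing in for the real physics and chemistry – wetness can be thought of as a predicate that applies to a large collection of molecules under the right conditions, but never to a single molecule.

# A toy sketch of weak emergence; the thresholds are arbitrary placeholders.

def is_wet(n_molecules, temperature_c):
    """Wetness as a property of the aggregate under the right conditions:
    it is not a property any individual molecule possesses."""
    liquid_range = 0.0 < temperature_c < 100.0      # roughly, at ambient pressure
    enough_molecules = n_molecules > 1_000_000      # arbitrary illustrative cut-off
    return liquid_range and enough_molecules

print(is_wet(1, 25.0))          # False: a single molecule is not wet
print(is_wet(10**23, 25.0))     # True: a glass of water at room temperature
print(is_wet(10**23, -10.0))    # False: same molecules, but ice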

Similarly with all mental phenomena, including both access and phenomenal consciousness. Even though there is nothing spooky about it (bye-bye to any form of dualism), specific numbers and arrangements of neurons seem not to be sufficient to generate those phenomena. The neurons involved also need to be made of (and produce) the right stuff: it is not just how they are arranged in the brain that does the trick; it also takes certain specific physical and chemical properties that carbon-based cells have, that silicon-based alternatives might or might not have (it’s an open empirical question), and that cardboard, say, definitely doesn’t have.

It follows that an explanation of phenomenal consciousness will come (if it comes at all – there is no assurance that, just because we want to know something, we will eventually figure out a way of actually knowing it) from neuroscience and evolutionary biology, once our understanding of the human brain is comparable with our understanding of the inner workings of our own computers. We will then see clearly the connection between the underlying mechanisms and the user-friendly, causally efficacious representations (not illusions!) that allow us to efficiently work with computers and to survive and reproduce in our world as biological organisms.