
On the origin of minds

Cognition did not appear out of nowhere in ‘higher’ animals but goes back millions, perhaps billions, of years

by Pamela Lyon

Caenorhabditis elegans, a small-brained worm, challenges big-brained scientists’ ideas about how cognition works. Photo by Adam Klosin/CRG

In On the Origin of Species (1859), Charles Darwin draws a picture of the long sweep of evolution, from the beginning of life, playing out along two fundamental axes: physical and mental. Body and mind. All living beings, not just some, evolve by natural selection in both ‘corporeal and mental endowments’, he writes. When psychology has accepted this view of nature, Darwin predicts, the science of mind ‘will be based on a new foundation’, the necessarily gradual evolutionary development ‘of each mental power and capacity’.

Darwin guessed that life arose from a single ancestral ‘form’, presumed to be single-celled. Soon, scientists in Germany, France and the United States began investigating microscopic organisms for evidence of ‘mental faculties’ (perception, memory, decision-making, learning). All three groups were headed by men destined for eminence. Two of them – Alfred Binet, the psychologist who devised the first practical intelligence test, and Herbert Spencer Jennings, who laid the foundations of mathematical genetics – were persuaded by their research that Darwin was right: behaviour even in microbes suggests that mental as well as physical evolution exists. Max Verworn, a giant of German physiology, was unconvinced.

Thus kicked off a heated debate about the continuity of mental evolution, the idea that what in humans is called ‘mind’ and in other animals is usually called ‘cognition’ developed and changed over millions of years (billions, actually). That debate continues to this day. The rise of behaviourism in the early 1900s, which privileged observable behaviour as the only admissible scientific data, curtailed discussion by taking talk about minds off the table for decades. When the ‘cognitive revolution’ launched mid-century, discontinuity was firmly established. The consensus was that, at some point in evolution (and we might never know when), cognition – poof! – somehow appears in some animals. Before that, behaviour – the only indicator of cognition available without language – would have been entirely innate, machine-like, reflexive. It might have looked cognitively driven but wasn’t. This remains the dominant view, almost entirely on the grounds of its ‘intuitive’ plausibility based on commonsense understanding.

The philosopher Daniel Dennett, among the earliest cognitive philosophers to invoke evolution, dubbed natural selection ‘Darwin’s dangerous idea’ because it showed that the appearance of design in nature requires no designer, divine or otherwise. Like most of his colleagues, philosophical and scientific, Dennett didn’t buy the continuity of mental evolution. However, my view is that this neglected insight of Darwin’s was his most radical idea, one with the potential to induce a full-blown Copernican revolution in the cognitive sciences and to transform the way we see the world and our place in it.

The Copernican revolution turned on a single shift in perspective. For 1,400 years, European scholars agreed with ordinary folk that Earth is the still point around which the heavens revolve. The Ptolemaic model of the cosmos had set the Sun, Moon, stars and planets moving in nested crystalline spheres around Earth. In 1543 Nicolaus Copernicus published a detailed alternative that replaced Earth with the Sun. By setting Earth in motion around the Sun with the other celestial ‘wanderers’, Copernicus dethroned our planet as the cosmic centre, and modern astronomy was born.

Similarly, Darwin’s radical idea dethrones human and other brains from their ‘intuitively obvious’ position at the centre of the (Western) cognitive universe. In their place, Darwin sets an evolving, cognising being struggling to survive, thrive and reproduce in predictably and unpredictably changing conditions. This single shift of perspective – from a brain-centred focus, where Homo sapiens is the benchmark, to the facts of biology and ecology – has profound implications. The payoff is a more accurate and productive account of an inescapable natural phenomenon critical to understanding how we became – and what it means to be – human.

What is cognition? Like many mental concepts, the term has no consensus definition, a fact that infuriated William James 130 years ago and occasional others since. This is my definition: Cognition comprises the means by which organisms become familiar with, value, exploit and evade features of their surroundings in order to survive, thrive and reproduce. Here is how I came to it.

As a PhD student in Asian Studies 21 years ago, my research focused on four Buddhist propositions that I aimed to subject to forensic Western philosophical and scientific analysis. Implicit in these propositions is a highly sophisticated Buddhist view of mind: what it is, how it works, what it does under benighted conditions, what it can do with training and practice. I looked for a Western comparator in what was then called cognitive science (in the singular) and found… nothing.

Four cartons of books and a laptop full of articles later, I had a collection of loose, dissonant ideas, and related streams of argument, none of which provided purchase on the experience of having a mind or its role in behaviour. As the neurobiologist Steven Rose observed in 1998 (and nothing much has changed), neuroscience had generated oceans of data but no theory to make sense of them. At the dawn of the 21st century, this struck me as outrageous. It still does.

Cognitive science was then ruled by three (decades-old) reference frames that provided the foundation of the ‘cognitivist’ paradigm: 1) the human brain; 2) a belief that the brain is a computing machine; and 3) the idea that ‘cognition is computation over representations’. The last tenet doggedly resists easy explanation because its central concepts introduced fresh ambiguities into the field (as if any more were needed). Roughly, it boils down to this: there are identifiable things in the brain that ‘stand in’ for aspects of the world, much as words do in sentences. These bits of information are ‘processed’ according to algorithms yet to be discovered. This is what we call thinking, planning, decision-making, and so on. No mental activity exists over and above the processing of representations; cognition just is this processing.
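
To see what the tenet claims, consider a deliberately trivial sketch (mine, not any researcher’s model): a handful of symbols ‘stand in’ for features of the world, and a fixed algorithm operates over them to yield a ‘decision’. On the cognitivist view, cognition just is this kind of processing, however elaborate the real machinery.

```python
# A toy illustration of 'computation over representations'. The symbols,
# rule and scenario are invented purely to make the tenet concrete; no
# claim is made that brains actually work this way.

# Discrete symbols that 'stand in' for features of the world.
percepts = {"food_ahead": True, "obstacle_ahead": False}

def decide(percepts: dict) -> str:
    """Process the stand-ins according to a fixed algorithm."""
    if percepts["obstacle_ahead"]:
        return "turn"
    if percepts["food_ahead"]:
        return "advance"
    return "search"

print(decide(percepts))  # -> advance
```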


Biology and evolution, which I assumed must be of utmost importance, were largely absent; so were physiology, emotion and motivation. Researchers who believed the study of animal behaviour had something useful to offer cognitive science were just beginning to publish in the field and were not warmly welcomed. ‘Embodied’ and ‘situated’ cognition were gaining traction but were then more an acknowledgement of the bleeding obvious than a coherent framework. Without criteria for identification, attributions of biological cognition were all over the taxonomic shop.

I still needed a comparator, however. I decided to investigate whether biology held the answers I assumed it must. I opted to start at the rootstock of the tree of life – bacteria – to see if anything conceivably cognitive was going on; 20 years on, I am still mining this rich seam.

My first theoretical guides, recommended by an unconventional Australian biologist, were the zoologist Jakob von Uexküll and the neuroscientist Humberto Maturana. Von Uexküll’s Umwelt concept (popularised in English in 1934) made me appreciate the particularity of the world constructed by an organism’s unique complement of senses and the value the organism imputes to elements of that construction, evolved in dependence upon how the organism makes a living. Elevated carbon dioxide is an attractant for a mosquito seeking a blood meal, but induces transient breathlessness, dizziness and minor anxiety in many people (as the COVID-19 experience with masking has shown). Maturana’s Biology of Cognition (1970) made me realise just how weird the living state is compared with any other physical system on Earth.

As Maturana sees it, life is self-producing, not merely self-organising or self-maintaining. If an Airbus A380 performed the same feat, it could seek out, take in and transform into fuel sources of matter and energy from its surroundings and manufacture the components that enable it to function (taxi, take off, fly, maintain a stable inner environment, touch down) while in flight. And that’s saying nothing about reproduction. Maturana’s account of cognition focuses on the organism’s need to interact continually with its surroundings to accomplish this amazing feat. This ‘domain of interactions’ between organism and environment is cognition for Maturana, such that ‘living as a process is a process of cognition’ (author’s italics), a claim I have confirmed in bacteria to my satisfaction.

Cognitivism effectively requires cognition-as-the-domain-of-interactions to arise not earlier than the origin of brains, and very likely long afterward. In Maturana’s view (and he was a neuroscientist), the cognitive domain of interactions arose with single-celled life. Neural structures added complexity, in both cognitive organism and exploitable environment, but didn’t generate cognition as such. Evidence for Maturana’s view grows daily.

Basal cognition – the study of cognitive capacities in non-neural and simple neural organisms (to which my PhD research led) – is in its infancy as a field. However, evidence already shows that evolution had laid a solid foundation of capacities typically considered cognitive well before nervous systems appeared: about 500-650 million years ago. Perception, memory, valence, learning, decision-making, anticipation, communication – all once thought the preserve of humankind – are found in a wide variety of living things, including bacteria, unicellular eukaryotes, plants, fungi, non-neuronal animals, and animals with simple nervous systems and brains.

No amount of positive evidence for basal cognition will persuade a diehard neurocentric, however. (What do you mean by memory, valence, decision-making? Isn’t it a matter of definition?) Darwin’s radical idea must solve problems that cognitivism cannot. The Copernican model didn’t become a revolution until the Ptolemaic model confronted findings it couldn’t predict or explain, but the heliocentric model could. This required Tycho Brahe’s meticulous, comprehensive astronomical observations, Johannes Kepler’s laws of planetary motion, Galileo Galilei’s theorising based on optically improved observations, and Isaac Newton’s law of gravitation, which built on the previous work (‘the shoulders of giants’). It took time: 144 years.

Darwin’s thesis of the continuity of mental evolution is older than that but lies much closer to the bone of human identity. After all, ‘wise’ is in our Latin species designation (the ‘sapiens’ in H sapiens). Possession of an intelligent, rational mind is supposed to be humankind’s defining characteristic. Accepting a Sun-centred cosmos is as nothing compared with accepting a life-centred psychology. We might not have a choice, however.

Cognitive neuroscience currently faces several challenges that must be overcome to understand how brains and nervous systems work, a prerequisite to understanding how cognitive capacities are generated by such systems. Three are sketched below. What seems needed in all three cases are simpler model systems, from which it’s more likely that fundamental discoveries about the drivers of organisms’ interactions with their surroundings will be made. Such discoveries might lead to general principles that can be tested in more complex organisms.

The first challenge to neuroscience relates to the ‘functional unit’ of the brain or nervous system. For more than a century, the single nerve cell has served as the structural and functional unit of brain activity. Pioneers of cognitive science enlisted the neuron doctrine as the foundation of the brain’s putative computational capacities. Each neuron was conceived as an on-off switch presumed capable of acting as a logic gate, enabling information to be ‘digitised’ (turned into ones or zeros) and thereby ‘encoded’. Single neurons were assumed to perform complex encoding tasks, including for places, faces and locations in space; a Nobel Prize was awarded on this basis.
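
The idealisation behind this is easy to reconstruct. Below is a minimal sketch of a McCulloch-Pitts-style threshold unit – the 1943 abstraction on which the computational reading of the neuron was built – with weights and thresholds chosen purely for illustration.

```python
# A McCulloch-Pitts-style threshold unit: the classic idealisation of a
# neuron as an on-off switch. Weights and thresholds are illustrative
# choices, not measured quantities.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable parameters, the unit behaves as a logic gate:
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)

assert [AND(a, b) for a, b in [(0,0),(0,1),(1,0),(1,1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0,0),(0,1),(1,0),(1,1)]] == [0, 1, 1, 1]
```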


Two findings have put pressure on this account. First is the number of different types of cells in the human brain. A recent study revealed no fewer than 75 different cell types: 24 excitatory types, 45 inhibitory types and six non-neuronal types. What they all do and how they interact are poorly understood. Second, it’s now clear that populations of neurons – acting in ensembles, networks and/or circuits – are the most likely units of functional activity. Defining a neural ensemble, network and/or circuit is non-trivial, however; so is understanding how they form and interact, how stable they are over time, how and whether they’re nested in hierarchies, and how they generate behaviour. All are still major works in progress.

Understanding how neural circuits form and function requires an organism behaving as a whole system. This is why the neuroscientist Rafael Yuste introduced Hydra vulgaris, a freshwater animal with the simplest known nervous system, as a model for studying neural circuits. The decision has paid off handsomely. Hydra’s entire nerve net has been imaged simultaneously – on video – as the microscopic animal stretches out its tentacles to capture food and contracts into a ball, enabling correlation of different neural circuits with different behaviours. Thanks to Hydra’s regenerative capacities, the step-by-step sequence from neuronal growth to whole-body integration of multiple neural networks has now been identified in animals regrown from separated cells. The ecological significance of Hydra behaviour is still poorly understood, however, and much work to understand what certain behaviour is for (for the organism) remains to be done. This should help further illuminate the dynamic (bio)logic of neural activity.

While the idea of shifting collectives of cells whose coordinated activity both requires, and might constitute, cognition doesn’t easily square with the cognitivist tenet of ‘computation over representations’, support is widespread in biology. Complex behaviours coordinated by thousands of interacting, autonomous cells are well studied in bacteria (eg, Bacillus subtilis, Myxococcus xanthus) and the social amoeba Dictyostelium discoideum. The discovery, in B subtilis colonies, of long-distance electrical signalling via ion channels – the mechanism of electrical transmission in neurons – provided ‘proof of concept’ that microbes can illuminate cognitive mechanisms ordinarily associated with complex animals. This finding led to further discoveries of previously unknown collective bacterial behaviours that resemble some types of cognitive brain activity, including memory. Studies of bacterial behaviour mediated by electrical signalling are just beginning.

Network activity among bacterial signal transduction proteins was first described 25 years ago. Today, the network properties of large arrays of signalling proteins, common in bacteria that rely on whip-like flagella to navigate chemical gradients (chemotaxis), are an active area of research. Highly conserved over the course of evolution, this architecture has been compared only slightly tongue-in-cheek to a ‘nanobrain’, because it functions as a network, is capable of processing large amounts of information, is exquisitely sensitive to tiny changes in environmental conditions, and is positioned at the leading pole of the cell, akin to the cell’s ‘head’ but one that shifts position as the cell changes direction.
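
The navigation strategy this ‘nanobrain’ supports can be caricatured in a few lines of code. The sketch below implements the textbook run-and-tumble logic – keep going while conditions improve, reorient at random when they worsen – on an invented one-dimensional gradient; real chemotaxis involves receptor arrays, adaptation and much else besides.

```python
# A cartoon of bacterial chemotaxis as a biased random walk ('run and
# tumble'). All numbers are invented for illustration.
import random

def attractant(x):              # a 1-D chemical gradient, peak at x = 50
    return -abs(x - 50)

def run_and_tumble(steps=2000, start=0.0):
    x, direction = start, 1
    previous = attractant(x)
    for _ in range(steps):
        x += direction                         # 'run' one unit
        current = attractant(x)
        if current < previous:                 # conditions worsened:
            direction = random.choice([-1, 1])  # 'tumble' (reorient)
        previous = current
    return x

random.seed(1)
print(run_and_tumble())  # ends near the attractant peak at 50
```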

These arrays might be processing more information than imagined. Escherichia coli was recently found to reject the bacterial equivalent of junk food: nutrients on which its growth is sluggish. Chemotaxis, movement toward or away from some states of affairs, is one of E coli’s most energetically costly behaviours, so it ought to be puzzling that bacteria will leave available food (the proverbial bird in the hand) and continue foraging for better nutrition elsewhere – except that the strategy often works. Caenorhabditis elegans, a small-brained worm, does this, too. If it has fed on high-quality food in the past, the tiny worm will leave poor-quality food in anticipation of finding something better. This discovery was a stunner at the time because such behaviour was assumed to require a ‘higher-order’ decision-making capacity.
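
One way to see why leaving can pay is to write the choice as a toy value comparison in which remembered food quality sets the bar a current patch must clear. Everything below is invented for illustration; it caricatures the logic of the decision, not C elegans neurobiology.

```python
# A toy leave-or-stay rule: remembered food quality sets the bar that a
# current patch must clear. All values are invented for illustration.

def should_leave(current_quality: float,
                 remembered_quality: float,
                 travel_cost: float = 0.2) -> bool:
    """Leave if past experience suggests better food is worth the search."""
    expected_elsewhere = remembered_quality - travel_cost
    return expected_elsewhere > current_quality

# A forager that has known rich food abandons a mediocre patch...
print(should_leave(current_quality=0.4, remembered_quality=0.9))  # True
# ...while one with no memory of better stays put.
print(should_leave(current_quality=0.4, remembered_quality=0.5))  # False
```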

The second challenge to neuroscience arrived when a heroic scientific success disclosed a ‘surprising failure’. The wiring diagram of the C elegans brain, a project started in cognitivism’s heyday, was completed. Connections between the worm’s 302 neurons were mapped, and behaviours associated with most cell types defined. Yet this stunning achievement revealed little about how and why a worm behaves the way it does – the aim of the research. According to the neuroscientist Cori Bargmann, C elegans studies ‘suggest that it will not be possible to read a [neural] wiring diagram as if it were a set of instructions’ for behaviour. This is for two main reasons.

First, behaviour under more ecologically realistic conditions violated key assumptions, derived from genetic knock-out studies, about the causal relation between activity in particular neurons (eg, sensory, motor, integrative) and certain behaviours (forward and reverse locomotion, feeding). A single behaviour, it emerged, might be induced by several different neural circuits. Moreover, activation of the same circuit might result in different – even opposing – behaviours in different circumstances. In short, a single wiring diagram represented more potential behaviour than originally assumed.


Second, context and the organism’s internal state proved much more important to behaviour than initially thought. Context and internal state are believed to be signalled by molecules – neuromodulators and their smaller cousins, neuropeptides – although precisely how is unclear. These signalling molecules, many of which are produced by neurons themselves, can alter neural function from seconds to minutes to hours; interact with different targets (other neurons, muscle cells, glands); and activate or silence entire circuits. C elegans produces more than 100 such molecules.
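A toy sketch shows why this matters for reading wiring diagrams: give a fixed ‘circuit’ a modulatory state variable, and the same stimulus yields different behaviours – or none at all – depending on internal state. The mapping below is wholly invented; real neuromodulators act on receptors, ion channels and synapses in ways still being worked out.

```python
# A toy illustration of neuromodulation: one fixed circuit, one fixed
# stimulus, different behaviour depending on a modulatory state.
# The numbers and labels are invented for illustration only.

def circuit(stimulus: float, modulator: float) -> str:
    """The modulator rescales the circuit's effective gain."""
    drive = stimulus * (1.0 + modulator)
    if drive > 1.0:
        return "reversal"      # escape-like behaviour
    if drive > 0.5:
        return "forward run"   # approach-like behaviour
    return "quiescence"        # circuit effectively silenced

for state in (-0.5, 0.0, 1.0):   # e.g. sated, neutral, aroused
    print(f"modulatory state {state:+.1f}: {circuit(0.6, state)}")
```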

In bacteria and unicellular eukaryotes (cells with a defined nucleus, which bacteria lack), coordinated activity involving thousands of individuals – the equivalent of multicellular behaviour – is also facilitated by signalling molecules, a phenomenon called quorum sensing. Quorum sensing molecules have been compared to hormones because they alter behaviour by similar mechanisms. As hormones do in animals and plants – and the activity of neuromodulators and neuropeptides is not dissimilar – signalling molecules produced by microbial cells induce changes in behaviour in four ways: 1) in the producing cell; 2) in an immediate neighbour via cell-cell contact; 3) within cell neighbourhoods; and 4) in cells at longer distances. Many unicellular signalling molecules exist but far fewer than in multicellular organisms.
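
The threshold logic of quorum sensing is simple enough to sketch. In the toy model below, every cell secretes signal, the signal decays, and behaviour flips once the shared concentration crosses a threshold – the classic real-world case being bioluminescence in the bacterium Vibrio fischeri. All parameters are invented.

```python
# A toy quorum-sensing model: each cell secretes signal every time
# step, the signal decays, and every cell switches behaviour once the
# shared concentration crosses a threshold.

SECRETION_PER_CELL = 1.0   # signal added per cell per step
DECAY = 0.5                # fraction of signal lost per step
THRESHOLD = 40.0           # concentration that triggers the switch

def steady_signal(n_cells: int, steps: int = 50) -> float:
    signal = 0.0
    for _ in range(steps):
        signal = signal * (1 - DECAY) + n_cells * SECRETION_PER_CELL
    return signal

for n in (5, 15, 25, 35):
    signal = steady_signal(n)
    state = "switch behaviour" if signal >= THRESHOLD else "carry on alone"
    print(f"{n:2d} cells -> signal {signal:5.1f} -> {state}")
```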

The third challenge to neuroscience wasn’t a paradigm misfire; it came from left field. Cognitive processes were traditionally conceived as entirely reactive: something in the world affects the organism (input), resulting in a response (output). The input-output view is basic to cognitivism. The discovery in the late 1990s of spontaneous, ongoing brain activity in the absence of any external stimulus was thus initially dismissed as an artefact of imaging technology, then found deeply puzzling; today it is a major research field. The default mode network is defined as a set of functionally connected brain regions that are active during wakeful rest and inactive during task-oriented behaviour. Chimpanzees and mice exhibit default mode activity. In humans, disturbance of the default network is associated with psychiatric disorders, which suggests it is important to normal cognitive functioning.

The default mode network is far from the only spontaneous oscillation in the brain. The neuroscientist György Buzsáki, who has worked hard to draw attention to the ‘rhythms of the brain’, claims that this kind of brain activity is not system noise but ‘is actually the source of our cognitive abilities’ and might be the brain’s ‘fundamental organiser of neuronal information’. Spontaneous low-frequency oscillations have been detected not only in Hydra but in a wide variety of organisms, including plants, single-celled eukaryotes and bacteria, as well as in diverse animals. If such oscillations are central organisers of living activity – as hypothesised by Alison Hanson, a medical resident in Yuste’s lab – then clearly neurons aren’t necessary to generate them.
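
Spontaneous rhythm is cheap to produce on paper, too: any system coupling fast positive feedback to slow negative feedback can oscillate with no neurons in sight. The sketch below integrates the FitzHugh-Nagumo equations – a textbook caricature of excitable-membrane dynamics, using the standard textbook parameters rather than anything fitted to a real organism – and counts the cycles generated under constant input.

```python
# Spontaneous rhythm from simple feedback: the FitzHugh-Nagumo
# equations. Parameters are standard textbook values; the point is only
# that fast positive plus slow negative feedback oscillates on its own.

def fitzhugh_nagumo(steps=50_000, dt=0.01, drive=0.5):
    v, w = -1.0, 1.0                          # fast and slow variables
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + drive         # fast positive feedback
        dw = 0.08 * (v + 0.7 - 0.8 * w)       # slow negative feedback
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# Count upward zero-crossings of v: constant input, ongoing cycles.
cycles = sum(1 for a, b in zip(trace, trace[1:]) if a < 0 <= b)
print(f"{cycles} spontaneous cycles under constant input")
```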

Oscillations are produced by ion channels in cellular membranes found across the tree of life. Michael Levin’s lab has shown that ion channel-generated bioelectricity plays a key role in ‘pattern memory’ for regenerating animal bodies. Headless planaria regenerate brains, tadpoles regrow tails, adult frogs regrow functional hindlimbs (if induced), and electrical stimulation can make things grow where they shouldn’t – for example, a second worm head where a tail should be. For Levin, pragmatic acceptance that even cells in tissues inherit some of the decision-making capacities of their unicellular forebears – what he calls ‘the cognitive lens’ – could transform fields as different as developmental biology, immunology, neuroscience, bioengineering and artificial intelligence.

What is needed is a shift in perspective. In The Brain from Inside Out (2019), Buzsáki argues that many of the seemingly intractable problems that neuroscience faces arise entirely from ‘human-constructed ideas’ about how the mind/brain must work – ideas based on millennia of philosophical and scientific conjecture, which are then shoehorned onto observed brain activity. This is what he calls the ‘outside-in’ perspective: ‘the dominant framework of mainstream neuroscience, which suggests that the brain’s task is to perceive and represent the world, process information, and decide how to respond … in an “outside-in” manner’. It is what Maturana calls ‘observer dependence’: description from the observer’s point of view, not the observed system’s. The spontaneously active brain has its own logic, of which almost nothing is understood. Deciphering this logic from the perspective of the system generating the activity – from ‘inside-out’ – should be the primary goal of neuroscience, Buzsáki argues, not mapping human assumptions onto neuronal observations.

I made a similar distinction 15 years ago. I called the view of cognition grounded in ideas originating in human experience and reflection the anthropogenic (human-born) approach, what Buzsáki calls ‘outside-in’. Although cognitivism asserts that cognition can be realised in different physical forms (including robots), the approach remains anthropogenic because it derives from the human capacity to compute numbers. The contrast case is what I call the biogenic (life-born) approach, which privileges the biological mode of existence as the source of cognition and entails the ‘inside-out’ view.

If understanding human cognition is the goal, then a biogenic/inside-out approach is the most promising path to take us beyond this geriatric shuffle on a road to nowhere. Given the massive investment of public and private funds, to say nothing of human ingenuity, time and effort over the past 70 years, we should by now know so much more about what cognition is, what it’s for, and how it works – theories of these things, not simply data derived from brain activity. Think of how society has transformed since the 1950s. How many dogmas have crashed and burned? How much has been learned in so many fields?


The Human Genome Project was supposed to yield a genetic ‘blueprint’. Instead, when the first draft was published 21 years ago, we learned how much we didn’t know. For example, there is too much so-called ‘junk DNA’ for it actually to be junk; human beings share a surprising proportion of genes with plants; and our genome carries genes transferred by single-celled organisms. As more whole genomes were compared, we discovered that among the genes shared with plants were some involved in the central nervous systems of animals, including us. Also, a system among the most critical to human survival – immune defence mechanisms operating autonomously at the level of individual cells – was inherited from bacteria or their sister domain, archaea, possibly billions of years ago. What’s this got to do with cognition? Well, it turns out that normal functioning of memory and learning depends on the interaction of immune-stimulating elements (cytokines) with neurons. A. Real. Surprise. In the 1950s (and later), the brain was considered ‘immune privileged’: the immune system supposedly couldn’t operate there.

Yet we still don’t have a good grip on the fundamentals of cognition: how the senses work together to construct a world; how and where memories are stored long term, whether and how they remain stable, and how retrieval changes them; how decisions are made, and bodily action marshalled; and how valence is assessed.

Valence is the value an organism imputes to circumstances within itself and/or its surroundings as advantageous, threatening or neutral. The core role of valence in emotions is well established. Consensus is now forming that human emotions are fundamentally involved in the body’s regulation of its basic functioning. For nearly 50 years, we’ve known that bacteria migrate toward certain substances (advantage) and seek to evade other circumstances (harm). Could understanding the mechanisms of valenced bacterial behaviour shed any light on how emotions generate behaviour in more complex organisms? We’ll never know unless we look.

At the end of Origin, Darwin describes a ‘tangled bank’ where the laws of natural selection play out in the evolution and current behaviour of plants and animals that appear so different from one another as to be utterly unrelated, but are not, and which depend upon one another for life. At a deep level, Darwin suggests, all living things are related. We know that now in ways Darwin could only imagine, because we have incomparably more sophisticated tools and a far richer understanding of how evolution works that includes developmental plasticity, epigenetics and whole-genome change, which provides – in addition to mutations of single genes – heritable variation for natural selection to act upon.

‘There is grandeur in this view of life,’ Darwin writes, and he is correct. We can now see ourselves – with scientific justification and with no need for mystical overlay or anthropomorphism – in a daffodil, an earthworm, perhaps even a bacterium, as well as a chimpanzee. We share common origins. We share genes. We share many of the mechanisms by which we become familiar with and value the worlds that our senses make. We are all struggling for existence, each in our own way, dependent on one another, striving to survive, to thrive and (for some) to reproduce, on this planet we share – which is not the centre of the Universe, or even the solar system, but is the only home any one of us has.

Just as we have come to think of our bodies as evolved from simpler forms of body, it is time to embrace Darwin’s radical idea that our minds, too, are evolved from much simpler minds. Body and mind evolved together and will continue to do so.

This Essay was made possible through the support of a grant to Aeon+Psyche from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Foundation. Funders to Aeon+Psyche are not involved in editorial decision-making.