Living things are so impressive that they’ve earned their own branch of the natural sciences, called biology. From the perspective of a physicist, though, life isn’t different from non-life in any fundamental sense. Rocks and trees, cities and jungles, are all just collections of matter that move and change shape over time while exchanging energy with their surroundings. Does that mean physics has nothing to tell us about what life is and when it will appear? Or should we look forward to the day that an equation will finally leap off the page like a mathematical Frankenstein’s monster, and say, once and for all, that this is what it takes to make something live and breathe?
As a physicist, I prefer to chart a course between reductionism and defeatism by thinking about the probability of matter becoming more life-like. The starting point is to see that there are many separate behaviours that seem to distinguish living things. They harvest energy from their surroundings and use it as fuel to make copies of themselves, for example. They also sense, and even predict, things about the world they live in. Each of these behaviours is distinctive, yes, but also limited enough that we can conceive of a non-living thing accomplishing the same task. Although fire is not alive, it might be called a primitive self-replicator that ‘copies’ itself by spreading. Now the question becomes: can physics improve our understanding of these life-like behaviours? And, more intriguingly, can it tell us when and under what conditions we should expect them to emerge?
Increasingly, there’s reason to hope the answer might be yes. The theoretical research I do with my colleagues tries to comprehend a new aspect of life’s evolution by thinking of it in thermodynamic terms. When we conceive of an organism as just a bunch of molecules, which energy flows into, through and out of, we can use this information to build a probabilistic model of its behaviour. From this perspective, the extraordinary abilities of living things might turn out to be extreme outcomes of a much more widespread process going on all over the place, from turbulent fluids to vibrating crystals – a process by which dynamic, energy-consuming structures become fine-tuned or adapted to their environments. Far from being a freak event, finding something akin to evolving lifeforms might be quite likely in the kind of universe we inhabit – especially if we know how to look for it.
The understanding that life and heat are intertwined is very old knowledge. Moses, for one, was launched into his first encounter with the Creator of all life by the sight of a tree ablaze, burning with a marvellous fire that left the living organism unscathed.
In physics, heat is a form of energy, made up of the random movements and collisions of molecules as they bounce off each other at the nanoscale. Much of the world’s energy is tied up as heat. Although it sounds like something that just wobbles around in the background as other factors take centre stage, it actually plays a crucial role in making some of the most interesting kinds of behaviour possible. In particular, we’ll see that heat and time are bound together in an intricate dance, and the release of heat is what stops time going backwards.
Some things in the world seem reversible: I can kick a ball upward and it will rise, or I can drop a ball from a height, and it will fall. Putting it this way just seems like common sense, but it turns out that this pairing of dynamical trajectories, where one path looks like the time-reversed movie of the other, is a symmetry built into the basic mathematical structure of Newton’s laws. Anything that can go one way can go the other, if you just set it moving back the way it came. As a consequence, the most ‘normal’ thing in physics would be for events to be able to reverse themselves in time, just like the ball that goes up and then down.
We don’t immediately grasp the sweeping significance of time-reversal symmetry because a whole lot of what we see doesn’t seem to have this property. Little green shoots soak up the Sun and grow into mighty trees, but we never see a full-grown pine ‘ungrow’ itself into a cone buried in the dirt. Sandcastles disintegrate under the waves, but we never see them splash back together when the tide recedes. Countless examples of ordinary occurrences around us would look extraordinary if they happened in rewind. The ‘arrow of time’ seems to point in one direction, but there’s no obvious reason in principle to think it should. So what’s going on?
The short answer is that we’re not looking closely enough. When a piece of wood burns, an enormous amount of heat and chemical product is exchanged with the surrounding air. In order to run the tape backwards and spontaneously generate wood from ash and anti-flame, we’d have to somehow give every little molecule in the ash and atmosphere a backwards push to send it bouncing along the reverse track. That is not going to happen.
Many scientific commentators have noted the connection between heat and the arrow of time. However, only in the past 20 years or so have physicists developed a crisp, comprehensive formulation of the relationship. One of the most important contributions came from a theorist named Gavin Crooks, now at the Lawrence Berkeley National Lab in the United States. He asked the following question: given that I have a movie (say, of a piece of wood burning to ash or a plant growing) and the rewind version of that movie, how would I tell which one is more likely to happen?
By applying some basic assumptions, he was able to mathematically prove the following. If you have a system (a piece of wood or a plant, for example) surrounded by a ‘bath’ of randomly jiggling particles (say, the atmosphere), the more heat the system releases into its bath, the less likely it is to rewind itself. In a rigorous, quantitative sense, the dissipation of heat is the price we pay for the arrow of time.
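Written out (in one standard form of the result, with the notation here chosen for illustration), the statement is that for a system in contact with a bath at temperature $T$, the probability of a trajectory compared with that of its time-reversed twin grows exponentially with the heat $Q$ released into the bath along the way:

$$
\frac{P[\text{forward}]}{P[\text{reverse}]} = e^{\,Q/k_B T} = e^{\,\Delta S_{\text{bath}}/k_B},
$$

where $k_B$ is Boltzmann’s constant and $\Delta S_{\text{bath}} = Q/T$ is the entropy handed over to the bath. Dump a macroscopic amount of heat, and the reverse movie becomes astronomically improbable.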
Why? Another way of phrasing this insight is to note that the more a system increases the entropy of its surroundings, the more irreversible it becomes. Now, it must be said that in the grand contest for the most misunderstood idea in the history of physics, entropy is probably the winner. Even people who are normally averse to any mention of the natural sciences will sagely volunteer that entropy – read: messiness, dysfunction, chaos, disorder, who knows? – must increase, all the time. It’s the second law of thermodynamics, obviously. But this simple picture can’t be right. Living organisms, for one, seem to defy this misleading gloss on the second law. They take disorganised bits and pieces of matter, and put them together in fiendishly complex and refined ways.
Thankfully, the full story is substantially more nuanced. Connoisseurs use entropy in a technical, microscopic sense, as a statistical measure of the number of different ways the same kind of arrangement of matter can be constructed out of its constituent parts. For a room full of air, for example, it turns out there are just many, many more ways of spreading out the molecules uniformly than there are of squishing them into clumps. That’s why uniform air density wins the entropic game, and nature abhors a vacuum. The particles diffuse themselves evenly because that’s just the most likely thing to happen over time.
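This counting definition comes with a compact formula, the one carved on Boltzmann’s tombstone:

$$
S = k_B \ln \Omega,
$$

where $\Omega$ is the number of distinct microscopic arrangements that all add up to the same overall state, and $k_B$ is Boltzmann’s constant. More ways of arranging the parts means higher entropy, which is why the evenly spread-out air wins.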
The connection between entropy and heat is more subtle. Remember that heat is energy diffused randomly among the particles in a substance. The more energy, the more ways of sharing it around; and the more ways of sharing the energy around, the higher the entropy. Going back to Crooks’s example of a system in a bath, then: the more heat a system releases, the more it increases the entropy of its surroundings – and, as Crooks showed, the less likely it is that this sequence of events could rewind itself.
This is what the second law means: the reason a heat-producing movie is more likely than its heat-absorbing re-run has to do with the number of ways you can disperse that heat in the surrounding bath. The more heat you throw into the bath, the less hope you have of getting it back from a freak fluctuation, and the less likely it is that you will have the energy you need to retrace your steps once the movie has run forward. It’s like releasing a bagful of feathers into a gusting wind and hoping to catch them with a net. If you only release one feather, the gale might blow it back to you; but if you release hundreds or thousands, the chance of capturing them all is basically nil.
Now we can bring life into the picture. Living things clearly have energy to burn, and they get this energy from being worked on. Like heat, ‘work’ in thermodynamics involves units of energy. But instead of the uncoordinated wiggling of molecules, here it’s a measure of how much and how fast energy has been transferred to a system from its surroundings in a way that produces a change. There are a variety of versions, such as movement, volume change and chemical transformation. What unites these processes is that energy is being forced, pushed or driven into a system from the outside, in a way that modifies the system’s shape or location. When you hit a car, it might move, or you might dent it, or both. In any case, you’ve done work on it.
Life is superb at capturing energy through work. Growing a plant means doing work on it, no less than when we put shoulder to yoke and drag a cart up a hill. In these situations the conservation of energy required by Newton’s laws implies one of two things: either all the energy put in as work stays stored in the system, like the compressed spring in a Jack-in-the-box; or else it’s released into the surroundings as heat. Recall, too, what we said before about the release of heat and time-reversal symmetry. So the question of how much work gets done, and when, makes all the difference to which events are more or less likely in the movie we’re watching.
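In the bookkeeping of energy conservation (the first law of thermodynamics), that either/or reads

$$
W = \Delta E_{\text{stored}} + Q,
$$

that is, the work $W$ pushed into the system either raises its stored energy by $\Delta E_{\text{stored}}$ or leaves again as heat $Q$ dumped into the surroundings, or some mixture of the two.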
Now we know why mighty trees don’t ungrow themselves: because life produces heat. From a physics perspective, a tree harvests energy from its surroundings – work is done on it – and in the process, it dissipates energy to the surrounding air as heat. The differences in probability between forward and reverse in such cases are staggering. Even ungrowing a single photosynthetic bacterium is less likely than growing one by a factor of roughly 10^(50 billion)! Suffice it to say, once the work gets flowing (and dissipating), backwards movies usually cease to be worth even talking about.
With a few tricks of algebra, you can use Crooks’s equation to compare the likelihoods of two future events in a system that’s being pushed by external forces and surrounded by a bath of randomly jiggling molecules. That includes the plant growing in the air, and anything that’s alive, in fact. So, if I zap a chemical mixture with electric shocks, or mechanically vibrate the container of a viscous fluid – does thinking about work and heat help me to predict if something resembling life might eventually emerge, after some energy has been allowed to flux through the system? Perhaps, but with a twist.
To probe the implications of work for how life (and evolution) evolved, a more versatile analogy is required. Let’s imagine a battery-powered car, exploring a rugged mountain range. Mathematically, the car’s location can be thought of as corresponding to the full microscopic configuration of a system composed of many different particles. Every spot on the terrain that the car might be, we can think of as a unique and different way of arranging all the molecular building blocks of some larger object. Accordingly, we have to think of the car not as having four cardinal directions to drive in, but rather, 10^25 or more! And somewhere, out on that vast sierra, there’s a spot that represents a bacterium, a plant, a cat.
At any given moment, our car is furiously spinning its wheels, winding its way slowly up over a narrow pass, or bouncing rapidly down into another ravine. From time to time, the car randomly swerves and changes direction. This is a reasonable metaphor for a system that undergoes changes in energy, but doesn’t experience external drives that do work on it. Sometimes, the car goes uphill; this corresponds to our system absorbing heat and storing the energy, like the spring in a jack-in-the-box. Sometimes, the car goes downhill, which we’d liken to the clown popping out of the box as the spring is released.
So where does the exploring car end up? Both intuition and a more rigorous treatment of the physics tell us that two basic factors are going to affect what happens. First, the car is more likely to drive to places that are close to its starting point, and separated from that point by relatively flat terrain. Second, it will tend to go downhill more than it tends to go uphill. After a very long time, we might expect the car to wander so much that we’d have no idea where it was at the beginning – but its avoidance of hilltops and preference for valleys would probably remain.
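In the absence of any external drive, this long-run intuition is made quantitative by the Boltzmann distribution: after enough wandering, the probability of finding the system in a configuration with energy $E$ settles towards

$$
p(E) \propto e^{-E/k_B T},
$$

so deep valleys are exponentially favoured over hilltops, with the temperature of the bath setting how far uphill the random jostling can carry the car.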
To bring work into the picture, we just need to give the car a solar panel. This makes its wheels spin more vigorously when it’s positioned and angled so that the Sun is brightest. Now the rules of thumb for how the car explores are going to get dramatically more complicated. All things being equal, we’d still expect the car to stay close to home, go downhill, and avoid rugged terrain (at least until it gets stuck). In addition, we now have to think about the places and times that the car will get a power-boost from the Sun overhead. There are going to be cases where the car can more readily traverse a sunny hill than a shady plain, because of the extra help it gets by staying in the bright spots.
Given enough time, we can no longer be confident that we’ll find the car in some deep valley near home base; instead, we have to think about how far and how fast it might have travelled if it found a path on which the Sun kept shining. Described in this way, the vehicle’s dynamics are affected by a dizzying variety of factors, and there are many more possibilities for where the mountain-rover might go.
The solar-powered mountain-rover metaphor helps us to think about the evolution of a very diverse range of work-absorbing systems. Of course, the prospect of sifting through such a vast space of possibilities and landing on life at first seems hopeless. But things look different once we ask a simple question, namely: what determines which places are sunny, and which places aren’t?
At least part of the answer comes from the peculiarities of how a system’s structure allows it to connect with its surrounding energy source. Children often notice that a wineglass will ring at a different pitch depending on how much water is poured into it. A different, but related observation is that vessels made from the same amount of glass, and filled with the same amount of water, can ring at different pitches depending on their shape.
What this reveals is that the way matter is arranged can significantly affect how it tends to move and vibrate. Not only that, but the details of such an arrangement also change how matter absorbs work energy from its surroundings. Think of an opera singer who shatters a goblet with the perfect pitch of her song, due to a phenomenon known as resonance. Here, because the glass tends to vibrate at a frequency that is well-matched to the frequency of the sound, the oscillations in the glass produced by the energy in the sound waves are violent enough to break it.
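The driven, damped oscillator is the textbook version of this story. Push on an object with mass $m$, damping rate $\gamma$ and natural frequency $\omega_0$, using a force of amplitude $F_0$ that oscillates at frequency $\omega$, and the average power it soaks up is

$$
\langle P \rangle = \frac{F_0^2}{2m}\,\frac{\gamma\,\omega^2}{(\omega_0^2 - \omega^2)^2 + \gamma^2\omega^2},
$$

which (when damping is light) is sharply peaked where the drive frequency matches the natural frequency. The same amount of glass, blown into a different shape, has a different $\omega_0$, and so absorbs a very different amount of work from the same song.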
We encounter the work-absorbing peculiarities of how matter is arranged all around us: from the ways pigment molecules absorb and scatter light so that we perceive them as having colours, to the fact that we can digest and be nourished by the starch in a potato more than by the cellulose in a bale of hay. From the perspective of chemical physics, a human being’s inability to eat grass is just about how the atoms that comprise a person’s digestive system are arranged. If these same carbons, nitrogens, oxygens and so on were re-fashioned into a cow stomach, the chemical work stored in grass would be ours for the taking.
It’s when we take this idea back to our solar-powered rover that things get interesting. Suppose we start with a collection of chemical building blocks in a thrown-together, uninteresting structure. That corresponds to parachuting the car into a randomly chosen starting location in the mountain range. But now, suppose that we subject these chemical building blocks to a challenging external environment – to a collection of energy sources that are accessible in principle, but only available in practice when the chemicals are arranged in rare, specially-matched shapes that happen to solve the problem of how to absorb work. For the rover, which we have said has unimaginably many possible directions to drive in, the challenging environment manifests as a landscape that’s mostly not very sunny, except when you are driving in just the right direction, in the right place, at the right time.
Sure, it’s still not easy to tell where the rover will end up in general. But there are particular scenarios where matters become significantly clearer. We might think of a case in which the rover starts off in a sunny spot, spins its wheels furiously, and speeds to a new place in the shade, where its wheels grind mostly to a halt. Having been carried irreversibly to a new place by the absorption and dissipation of work, it then gets stuck in a shape that is bad at absorbing energy. That’s roughly equivalent to the opera singer shattering the goblet. At the beginning, the glass resonates and absorbs a lot of work from the song, which gets largely dissipated as heat when the glass shatters and settles into an inert heap of shards. Once in this state, the shards no longer resonate, and the rate of work absorption drops significantly.
We can also envision the opposite scenario. Suppose we have a single bacterium sitting in a big jar of food and oxygen. After 20 minutes or so, we should have two bacteria, and 20 minutes after that, four. What we expect to see, in the short term, is a process of exponential population growth. Individual bacteria harness the chemical work available in their surroundings, and pay the thermodynamic cost of making copies of themselves. Since the number of bacteria is growing, the rate of work absorption is also constantly increasing – at least until the food runs out and the party stops. We can liken this process to a rover that gets a bit of sunshine, which helps it edge its way a bit further out of the shade, so that its wheels speed up even more and carry it to an ever-sunnier location over time. The system in this case exhibits a sustained, self-reinforcing process that grows its ability to absorb work from the environment.
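In the simplest idealisation, a fixed doubling time of 20 minutes means the population $N$ obeys

$$
\frac{dN}{dt} = rN, \qquad N(t) = N_0\, 2^{\,t/(20\ \text{min})}, \qquad r = \frac{\ln 2}{20\ \text{min}}.
$$

The growth rate is proportional to the number of cells already present; while the food lasts, the colony’s total rate of work absorption and heat dissipation climbs right along with it.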
Note that there’s nothing in this thermodynamic description of reproduction that specifically picks out the notion of a discrete entity (such as a bacterium) reproducing itself. Rather, self-replication is just one example of a more general class of processes that exhibit what we call positive feedback. Positive feedback can happen whenever there’s a quantity in a system whose increase brings about a rise in its own rate of growth. In the case of self-replicating cells, the quantity in question is the number of cells itself: a larger number of cells can make more cells faster. However, one can also envision self-reinforcing behaviours that have to do with the shape or arrangement of a system as a whole; and in that case, the exploring rover story remains the same as ever. Looking at life this way allows us to recognise a similar feedback signature in cases where no self-copying self is apparent.
Just to recap where we’ve travelled. Living things manage not to fall apart as fast as they form because they constantly increase the entropy around them. They do this because their molecular structure lets them absorb energy as work and release it as heat. Under certain conditions, this ability to absorb work lets organisms (and other systems) refine their structure so as to absorb more work, and in the process, release more heat. It all adds up to a positive feedback loop that makes us appear to move forward in time, in accordance with the extended second law.
This process takes on a special significance in a setting like that of the vibrating glass. Here, the environmental energy source presents a particular challenge, such that the system (the glass) can only absorb energy if it adopts the right shapes. That’s equivalent to our rover finding that rare sliver of sunlight and managing to drive in just the right way to stay in the bright spots. If something about the system’s configuration lets it use the absorbed energy to power a feedback loop in a challenging scenario, you end up with a recipe for a system that evolves over time into more and more finely-tuned, specialised, energy-absorbing shapes. If you leave a lump of glass in the presence of a soprano for long enough, the shape it ultimately takes should depend on the precise pitch(es) she chooses to sing at.
In my research group’s first theoretical papers on this subject, we have referred to this mechanism of self-organisation as dissipative adaptation. Recently, we conducted two tests of the idea with computer simulations. In one study, we took a mixture of simple dots or points floating in a viscous fluid. To make the environment more challenging, we imposed a simple rule: each pair of points could be connected by a stretchy spring, which randomly hooked or unhooked when the points were close together. We then took one of the points in a group of 20 and pushed on it with an oscillating force of a single frequency.
What we saw next was intriguing. As the springs randomly hooked and unhooked, a specific network of tangled connections formed. These connections tended to vibrate at the frequency of the external force – hence they absorbed an exceptionally large amount of energy. Alternatively, when we engineered it so that the springs snapped more readily when stretched, we saw the opposite effect, like the opera singer’s shattered glass: a network formed that was attuned so as not to vibrate at that frequency. That is, the points adapted their arrangement to avoid absorbing energy.
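For readers who want to see the bare bones of this kind of simulation, here is a minimal, self-contained sketch in Python of the general setup: overdamped beads jiggling in a thermal bath, springs that stochastically hook and unhook between nearby beads, a single-frequency drive on one bead, and a running tally of the work absorbed from the drive. Every parameter value and implementation detail below is an illustrative stand-in, not something taken from the published studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (stand-ins, not the published values) ---
N = 20              # number of beads
gamma = 1.0         # drag coefficient (overdamped motion in a viscous fluid)
kT = 0.05           # thermal energy of the bath
k_spring = 5.0      # spring stiffness
L0 = 1.0            # spring rest length
r_hook = 1.5        # beads closer than this may hook together
p_on, p_off = 0.01, 0.005    # per-step hooking / unhooking probabilities
A, omega = 1.0, 2.0          # amplitude and angular frequency of the drive
dt, n_steps = 1e-3, 100_000

pos = rng.uniform(0.0, 4.0, size=(N, 2))   # random initial positions
bonded = np.zeros((N, N), dtype=bool)      # which pairs are currently hooked
work_absorbed = 0.0                        # running tally of work from the drive

for step in range(n_steps):
    t = step * dt

    # pairwise separations
    dr = pos[:, None, :] - pos[None, :, :]   # dr[i, j] = pos_i - pos_j
    dist = np.linalg.norm(dr, axis=-1)
    np.fill_diagonal(dist, np.inf)

    # springs stochastically hook between nearby beads and unhook at random
    hook = (dist < r_hook) & ~bonded & (rng.random((N, N)) < p_on)
    unhook = bonded & (rng.random((N, N)) < p_off)
    hook = np.triu(hook, 1) | np.triu(hook, 1).T       # keep the matrices symmetric
    unhook = np.triu(unhook, 1) | np.triu(unhook, 1).T
    bonded = (bonded | hook) & ~unhook

    # harmonic force on each bead from the springs it is hooked into
    stretch = np.where(bonded, dist - L0, 0.0)
    force = (-k_spring * stretch[:, :, None] * dr / dist[:, :, None]).sum(axis=1)

    # single-frequency oscillating drive on bead 0, along x
    f_drive = A * np.sin(omega * t)
    force[0, 0] += f_drive

    # overdamped Langevin (Euler-Maruyama) step in the thermal bath
    disp = force * dt / gamma + np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal((N, 2))
    pos += disp

    # work done by the drive on the system during this step
    work_absorbed += f_drive * disp[0, 0]

print(f"springs hooked: {int(bonded.sum()) // 2}, work absorbed from drive: {work_absorbed:.2f}")
```

Watching how the absorbed work grows as the network rewires itself, and comparing against runs in which the hooking rules are altered (for instance, springs that unhook far more readily when stretched), gives the flavour of the comparison described above.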
We got similar results in a second study. Here we put an initially randomly arranged collection of atoms in the presence of a rich but challenging source of energy that could only be accessed by a special combination of those atoms. After letting the atoms react for a long time, the composition of chemicals was biased to be either unusually bad or extremely good at extracting energy. In other words, the system exhibited a tendency to find and stay stuck in states that look adapted to their environment.
In both these cases, the point is not that all matter everywhere is trying to absorb and dissipate more energy all the time; nor is it that the second law of thermodynamics is magically guiding the discovery of organised structures that are better at increasing entropy. Rather, when particles interact under the challenging conditions created by an energy source, their resulting shapes tend to be fine-tuned to that energy source – even without the help of self-replication and natural selection.
As it happens, living things are both marvellously complex and breathtakingly good at meeting the challenges of their environments. We know this is because the life we see today has inherited many of the structural and behavioural adaptations that proved so useful to previous generations. In the biological context, ‘usefulness’ is that which enables survival and self-reproduction. But what’s beginning to emerge from some of this thermodynamic thinking – and what a few of us are eagerly exploring in simulation and experiment – is the possibility that some of the distinctively life-like specialness of how organisms are organised, and which allows them to eat and survive and reproduce, might be recognisable in a broader physical class of systems that do not contain self-copying selves. Instead, they are propelled towards strikingly special shapes by the thermodynamic laws governing positive feedback in the presence of a challenging energy source. This process might explain how evolution can get going in inert matter.
Whether this will ultimately make a big or small difference in how we understand living things at the microscopic level, we don’t know. There’s still more work to be done. But what our new vantage point on thermodynamics reveals is that a great many uncharted, and seemingly random, explorations of shape and form have a surprisingly good chance of ending up somewhere interesting – perhaps even the summit of the very distant mountaintop that we occupy on that unimaginably huge terrain, with a tiny flag reading ‘humanity’.