Photo: a lone bench facing turbulent ocean waves under a cloudy sky; the area is partially flooded by the sea.

The model of catastrophe

The immense complexity of the climate makes it impossible to model accurately. Instead we must use uncertainty to our advantage

by David Stainforth 


Today’s complex climate models aren’t equivalent to reality. In fact, computer models of Earth are very different to reality – particularly on regional, national and local scales. They don’t represent many aspects of the physical processes that we know are important for climate change, which means we can’t rely on them to provide detailed local predictions. This is a concern because human-induced climate change is all about our understanding of the future. This understanding empowers us. It enables us to make informed decisions by telling us about the consequences of our actions. It helps us consider what the future will be like if we act strongly to reduce greenhouse gas emissions, if we act only half-heartedly, or if we take no action at all. Such information enables us to assess the level of investment that we believe is worthwhile as individuals, communities and nations. It enables us to balance action on climate change against other demands on our finances such as health, education, security and culture.

For many of us, these issues are approached through the lens of personal experience and personal cares: we want to know what changes to expect where we live, in the places we know, and in the regions where we have our roots. We want local climate predictions – predictions conditioned on the choices that our societies make.

So, where do we get them? Well, nowadays most of these predictions originate from complicated computer models of the climate system – so-called Earth System Models (ESMs). These models are ubiquitous in climate change science. And for good reason. The increasing greenhouse gases in the atmosphere are driving the climate system into a never-before-seen state. That means the past cannot be a good guide to the future, and predictions based simply on historic observations can’t be reliable: the information isn’t in the observational data, so no amount of processing can extract it. Climate prediction is therefore about our understanding of the physical processes of climate, not about data-processing. And since there are so many physical processes involved – everything from the movement of heat and moisture around the atmosphere to the interaction of oceans with ice-sheets – this naturally leads to the use of computer models.

But there’s a problem: models aren’t equivalent to reality.

So, what can we do? One option is to make the models better. Make them more detailed and more complicated. That, though, raises an important question: when is a model sufficiently realistic to predict something as complex as climate change? When will the models be good enough? We don’t have an answer to this question. Indeed, scientists have hardly begun to study this problem, and some argue that these models might never be sufficiently accurate to make multi-decadal, local climate predictions.

Nevertheless, changing the way we use ESMs could provide a different and better way to generate the local climate information we seek. Doing so involves embracing uncertainty as a key part of our knowledge about climate change. It involves stepping back and accepting that what we want is not precise predictions but robust predictions, even if robustness involves accepting large uncertainties in what we can know about the future.

Before delving into the details of how we really can map local futures, I want to be clear about the wider context. This essay critically discusses issues relating to the robustness of current methods in climate change science. By doing so, it runs the risk of being interpreted as downplaying the reality or seriousness of human-induced climate change. This would be entirely the wrong conclusion. My understanding of the physics of the climate system tells me that the reality of human-induced climate change and its seriousness for our societies and cultures is beyond reasonable doubt. Indeed, in my book Predicting Our Climate Future (2023), I provide a short, 11-paragraph summary of why high-school-level science combined with some commonsense reflections on the societal consequences of a changing climate are enough to demonstrate the seriousness and scale of the threat – enough to justify transformative action. Accepting the reality and importance of the issue, however, does not mean that there aren’t many aspects that are open to question. Climate change science and social science are fields of ongoing research, so we should expect areas of debate and disagreement.

It’s easy to pick holes in computer models. As the statistician George Box said: ‘All models are wrong, but some are useful.’ The real question is whether these models are useful. And that depends on the problem you’re trying to solve. To see why, we need to delve a little into how these models work.

ESMs are fantastic achievements of modern computer-based science and tremendously useful tools for research. They break up the atmosphere and oceans into grid boxes – typically with sides around 20 to 100 km long – and, for each grid box, the computer code solves a set of well-founded and well-understood equations from classical physics that describe how fluids such as air and water behave. This is known as the ‘dynamical core’ of the model. It’s reasonable to say that this aspect of the models is based on physics.
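To make the idea concrete, here is a deliberately tiny illustration of what a dynamical core does in spirit, not code from any real ESM: a single temperature-like field is carried around a one-dimensional ring of grid boxes by a constant wind, stepped forward with a simple finite-difference scheme. The grid spacing, wind speed and numerical scheme are all illustrative assumptions of mine; real dynamical cores solve the full three-dimensional fluid equations.

```python
# A minimal sketch (not any real ESM's code) of the 'dynamical core' idea:
# split the fluid into grid boxes and step a classical-physics equation
# forward in time. Here: 1-D advection of a temperature-like field by a
# constant wind, using a first-order upwind finite-difference scheme.
import numpy as np

n_boxes = 100          # grid boxes around a 1-D 'latitude circle'
dx = 100e3             # grid spacing in metres (~100 km, like a coarse ESM)
u = 10.0               # constant eastward wind, m/s
dt = 0.5 * dx / u      # time step chosen to satisfy the CFL stability limit

# Initial condition: a warm anomaly in the middle of the domain
temperature = np.zeros(n_boxes)
temperature[40:60] = 1.0

def step(field):
    """Advance the field one time step: d(field)/dt = -u * d(field)/dx."""
    return field - u * dt / dx * (field - np.roll(field, 1))

for _ in range(200):
    temperature = step(temperature)

print("warm anomaly is now centred near box", int(np.argmax(temperature)))
```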

Just because they’re great research tools doesn’t mean they’re reliable engines of prediction

But the dynamical core isn’t enough to simulate Earth’s climate system. That’s because there are many critical processes that we don’t fully understand and for which we don’t have robust and well-tested mathematical representations, eg, the behaviour of tropical and temperate forests. There are also many processes, such as clouds, atmospheric convection, and ocean eddies, which take place on scales that are much smaller than the grid boxes and yet are tremendously important for the behaviour of Earth’s climate. These processes need to be included in the models if they are to simulate climate change. But we can’t represent what is actually going on, because we either don’t know how or it’s computationally infeasible – or both. So these processes are included in the models through pieces of code called ‘parameterisation schemes’. These schemes aren’t representations of the physics of what’s happening but statistical characterisations of how each process affects and interacts with the rest of the model. Clouds, ocean eddies and many land surface systems are examples of processes represented this way.
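By way of contrast with the dynamical core, here is a minimal sketch of what a parameterisation scheme looks like in spirit. The functional form and numbers are invented for this essay rather than taken from any real ESM, though operational cloud schemes have a broadly similar shape: cloud cover in a grid box is estimated from the box's average humidity via a tuned relation with an adjustable parameter.

```python
# A minimal sketch of a statistically fitted parameterisation scheme.
# The functional form and numbers are invented for illustration; they are
# not taken from any real ESM.

def cloud_fraction(relative_humidity, rh_crit=0.8):
    """Clouds form on scales far smaller than a grid box, so a box's cloud
    cover is estimated from its grid-box-mean humidity via a tuned,
    empirical relation rather than resolved physics. rh_crit is a tunable
    parameter: changing it gives a different, possibly equally credible,
    model version."""
    if relative_humidity <= rh_crit:
        return 0.0
    # Cloud cover rises smoothly from 0 at the critical humidity to 1 at saturation
    return min(1.0, ((relative_humidity - rh_crit) / (1.0 - rh_crit)) ** 2)

print(cloud_fraction(0.90))                # 0.25 with the default tuning
print(cloud_fraction(0.90, rh_crit=0.7))   # ~0.44: a different, tweaked 'model version'
```

The adjustable parameter, rh_crit in this sketch, is exactly the kind of assumption that can be 'perturbed' to create alternative model versions, an idea returned to below.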

This difference between the physics-based ‘dynamical core’ and the statistically fitted ‘parameterisations’ may seem a rather dry and uninteresting issue, best left to the specialists. In practice, this difference is central to debates over what climate science can tell us about the future.

Before getting to that though, it’s worth reflecting on what tremendous achievements these models are. They include everything from atmospheric dynamics to ocean circulations, sea ice and a range of land-surface processes. They are hugely valuable research tools for studying how different parts of the climate system interact. In George Box’s terms, they may be wrong, but for many areas of research they are certainly useful.

However, just because they’re great research tools doesn’t mean they’re reliable engines of prediction. To be useful research tools, they need only represent some key characteristics of the processes we’re studying. But to provide reliable, multi-decadal predictions of the local consequences of climate change, they need to represent all the diverse interacting processes that we think could be important on climate change timescales. And they need to do this in a realistic fashion. They need to represent what we actually think is going on. These models may not need to be perfect – all models are wrong – but they certainly need to be close to reality for all relevant aspects of the climate system.

There is no significant disagreement in the climate research community that the models have significant flaws

Unfortunately, on multi-decadal timescales, almost everything can influence almost everything else. Changes in Arctic sea ice could influence the Indian summer monsoon. Changes in rainfall in the North Atlantic could influence temperature patterns in central Africa. ‘All relevant aspects’ is an awful lot of things.

Furthermore, today’s Earth System Models are not ‘close to reality’. In many ways, they are very different from reality. When we use these models to simulate the past, their outputs are far from being consistent with what happened in the real world. For large regions such as central North America or central Europe, these models can be a few degrees warmer or colder than was actually the case. This means, for instance, that modelled vegetation will diverge substantially from what was observed, or if it does align well with observations then this has to be for the wrong reasons. This matters. It means that the model isn’t reflecting the processes that are actually taking place and therefore any predictions of change will be flawed. In practice, we know this is the case anyway because many processes are missing from the models, and those represented by parameterisation schemes often don’t reflect the underlying behaviour. The upshot is that these models are too different from reality to enable reliable, multi-decadal projections on regional and local scales.

This has led to a dissonance in the climate research community. For those wanting to study the regional consequences of climate change, and for those wanting to support society’s efforts to build climate-resilient systems, these models provide data that looks like the predictions they seek. That is why such simulations are widely used for these purposes. And yet the unreliability of these models for multi-decadal predictions is also widely acknowledged – there is no significant disagreement in the climate research community that the models have significant flaws. Where there is disagreement is on what we should do about it: on how we can get better information about the future of Earth’s climate.

There are two camps. One takes the perspective that today’s models are inadequate to the task, so we need to make them better. The other takes the view that making them better is of little value until we understand how much better they need to be. And, since we have no way of knowing when a computer model would be good enough to make reliable, multi-decadal predictions of local climate, this camp tends to focus on the need for cleverer ways of using both our models and our scientific understanding.

The first camp is well coordinated. At a summit in Berlin in July 2023, there was a call for a new international initiative on climate modelling. The initiative, named Earth Virtualisation Engines (EVE), would cost roughly $15 billion in its first 10 years. Most of this money would be focused on making a step change in the resolution of ESMs, reducing the size of the grid boxes down to roughly 1 km on each side.
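A rough back-of-envelope calculation shows why that step change is so expensive. Assuming, for illustration, that compute cost scales with the number of horizontal grid boxes multiplied by the number of time steps, and that the time step must shrink in proportion to the grid spacing (the standard stability constraint for such schemes), refining the grid from around 100 km to 1 km multiplies the cost by something like a million. The sketch below makes the arithmetic explicit; vertical resolution, data handling and memory are ignored, so treat the numbers as indicative only.

```python
# A rough back-of-envelope sketch of why a ~1 km global model is so costly.
# Assumptions (mine, for illustration): compute cost scales with the number
# of horizontal grid boxes times the number of time steps, and the time step
# must shrink in proportion to the grid spacing (the usual CFL constraint).

def relative_cost(dx_old_km, dx_new_km):
    refinement = dx_old_km / dx_new_km
    boxes_factor = refinement ** 2   # more boxes in each horizontal direction
    timestep_factor = refinement     # more, shorter time steps
    return boxes_factor * timestep_factor

print(relative_cost(100, 1))   # ~1,000,000x the compute of a 100 km model
print(relative_cost(20, 1))    # ~8,000x the compute of a 20 km model
```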

Such high-resolution models would provide information that is more ‘local’: they would be able to differentiate between different suburbs of modelled-Sydney, or between modelled-Calgary and modelled-Banff. But the justification for such high-resolution models is not principally about this extra detail. Rather, it is about reliability. Remember those parameterisation schemes? In the modelled atmosphere within the ESMs, these schemes include critical processes such as convection, which is extremely important for moving heat and moisture around the climate system. The argument of those who want to make models better by increasing their resolution is that processes currently represented by statistically fitted parameterisation schemes will instead arise out of the physics-based dynamical core. That is, processes such as convection will be ‘resolved’ so the model becomes a much better representation of scientific understanding. This is of fundamental importance because a model based on scientific understanding is more likely to be able to extrapolate: to represent the changing way the climate will behave in the future even if we have no observations of it behaving that way in the past.

Making climate predictions requires models that are close to reality in a way that weather forecasts are not

It’s a good argument. Higher resolution may indeed make it possible to remove some parameterisations. For other phenomena, such as clouds, increasing the level of detail might enable models to more closely represent our physical understanding of the underlying processes. These improvements will enable scientists to study additional aspects of the many complex interactions in the climate system. So, increasing the resolution of Earth System Models will create better, or at least additional, tools for some types of research. But will it produce more reliable climate forecasts?

The answer, in my opinion, is no. It won’t. This places me squarely in the opposing camp. The resolution may be higher in models with finer grids, but there will still be many processes missing – processes that are known to be important, such as aspects of atmospheric chemistry and ocean biogeochemistry. Despite the step change in resolution, these models will also still need statistically fitted parameterisation schemes when representing everything from land-surface effects such as vegetation to many processes in the atmosphere, ocean and cryosphere. Simply put, the models will still not be close to representing our understanding of reality.

As an aside, it is worth noting that making climate predictions requires models that are close to reality in a way that weather forecasts are not. That’s partly because multi-decadal forecasts require us to include almost all climatic processes as almost all of them could affect the outcome. By contrast, when making weather forecasts, many processes can be omitted because the associated components, for instance ocean temperatures or land cover, don’t vary much on the short-term weather timescales of days or weeks. The other important difference is that we can measure the reliability of weather forecasts through the regular cycle of predictions and observed outcomes. We can’t do that with climate forecasts so our trust in climate predictions relies primarily on the realism of the models – on our belief that they represent the current scientific understanding of how the Earth system operates.

So, if our models aren’t sufficiently close to reality and if higher resolution isn’t the answer, what’s the approach of the other camp? Who are they and what are they calling for?

Well, the other camp isn’t really a coherent group. They comprise a miscellaneous collection of academics from a variety of disciplines who see the problems described above and are looking to find different ways forward. They tend to think about the future by focusing on what could happen rather than what will happen.

According to this group there are two ways forward.

The first involves stories. A ‘storyline’ approach builds on our understanding of a specific part of the climate system: for example, the Indian summer monsoon, the drivers of tropical cyclones, or the weather patterns that lead to extreme floods in Northern Europe. This approach then involves describing how that part of the system could plausibly change in the future. Of course, the climate system is a massive collection of diverse components, and they all affect each other, so there is inevitably a need to bring in expertise on other parts of the system, but in storyline approaches this is done through the lens of the aspect of interest. For instance, questions about ocean circulation changes might be focused only on their characteristics in so far as they may specifically affect extreme rainfall events in Europe.

The real key to useful climate storylines, though, is that they should always come in packs

It’s important to recognise that not anything goes in this ‘storyline’ approach. It is not a matter of picking narrative possibilities out of thin air. Rather, the changes need to be consistent with our scientific understanding of the processes involved; we need to be able to describe, in a credible way, how different types of change might come about. So long as we can do that, storylines provide a route to exploring the range of future climatic behaviours that we should consider when making societal decisions about climate change. They can be designed to consider specific locations and issues, and they focus on leveraging scientific expertise and understanding rather than computer models.

The real key to useful climate storylines, though, is that they should always come in packs: collections of narratives that together reflect the uncertainty of what could happen in a specific region or for a particular type of climatic event. The researchers who build them must deeply reflect on the widest range of climatic behaviour that is credible in the particular aspect of interest.

One example of this approach is a study from 2018 on how the Indian summer monsoon might change with global warming, and how those changes might impact water resources in southern India. I worked on this study as part of a team led by Suraje Dessai, professor of climate change adaptation at Leeds University in the UK. The project illustrated how scientific reflection on potential interactions in the climate system provides a route to characterising uncertainty in the socially important consequences of climate change.

The project also brought home to me an interesting sociological barrier to the storyline approach. It’s a barrier that arises from the dominance of complex computer models in climate change research. Because of this dominance, we should expect expert opinion to be anchored to the behaviour seen in the models, rather than simply reflecting an expert’s understanding of the processes of the climate system. During our research, we had to work hard to persuade specialists that we wanted to learn from their expertise on the processes of the Indian summer monsoon, rather than simply their knowledge about how it responds in models.

Storyline approaches are not therefore a simple answer. They need to be designed very carefully, and they are expert-intensive rather than computer-intensive, so to scale them up would require substantial investment in the training and accreditation of a large cohort of individuals with the necessary expertise. Despite this, I am a fan because they so clearly build on what we understand.

Many alternative ‘model versions’ will be as credible as the original model but might show us different possible paths

There is, however, an alternative way to probe what could happen in the future, which would still utilise ESMs. Rather than continually making these models better – chasing the illusory prospect of a model that’s sufficiently realistic to make trustworthy climate forecasts – we could instead push them to respond differently to increased atmospheric greenhouse gases. Our aim should be diversity of response. We want lots of models that together represent just how differently the future could play out at the scales that matter to us. This approach means designing our collection of models to be useful as a whole, rather than focusing on individual models and examining the implications of each one in turn.

The idea here is that, while all ESMs are too different from reality to be relied upon to make climate predictions individually, they all nevertheless encapsulate key features and characteristics of the real-world climate system. That’s why they’re good research tools. If we tweak them by changing the assumptions we make in the parameterisation schemes, for instance, we can create alternative versions. And many of these alternative ‘model versions’ will be as credible as the original model but might show us different possible paths for how climate change could play out in the future, particularly at local scales. Furthermore, if we find that we can’t push our models to achieve certain types of responses, then we have a new type of information: a new line of evidence that suggests, perhaps, that Earth (or any world even vaguely like our own) can’t respond in a certain way.
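Here is a deliberately tiny illustration of that idea, using a zero-dimensional energy balance rather than a real ESM: equilibrium warming is forcing divided by a climate feedback parameter, and 'perturbing' that parameter across a plausible range generates an ensemble of 'model versions'. The feedback range used below is an illustrative assumption of mine, not a result from CPDN or any published ensemble; the point is the pattern of the output, a robust qualitative message alongside a wide quantitative spread.

```python
# A minimal sketch of the perturbed-physics idea on a deliberately tiny
# 'model': a zero-dimensional energy balance in which equilibrium warming
# equals forcing divided by the climate feedback parameter. The parameter
# range below is illustrative, not a quoted result from any real ensemble.
import numpy as np

rng = np.random.default_rng(0)

forcing_2xCO2 = 3.7                           # W/m^2, approximate forcing from doubled CO2
feedback = rng.uniform(0.6, 1.8, size=1000)   # W/m^2 per K: the 'perturbed' parameter

warming = forcing_2xCO2 / feedback            # one 'model version' per sampled value

# Every version warms (a robust message), but by very different amounts
# (the uncertainty the ensemble is designed to reveal).
print(f"all versions warm: {bool(np.all(warming > 0))}")
print(f"warming range: {warming.min():.1f} to {warming.max():.1f} degrees C")
```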

This second way forward isn’t just speculation. Collections of model versions have been generated by a number of projects in the past 25 years. They are known as ‘perturbed physics ensembles’ and the biggest by far is an experiment run by CPDN, the climate prediction project launched in 2003 out of Oxford University. That experiment was run as a ‘public-resource distributed-computing’ experiment: members of the public volunteered their computers to run a model version when they weren’t using it for other things. For full disclosure, the climatologist Myles Allen had the original idea for the project but I was a co-founder – so it’s unsurprising that I have a particular interest in, and fondness for, these experiments.

The CPDN experiment generated new model versions that responded differently to increasing greenhouse gases. And some clear messages have arisen from the collection of model versions it created. For instance, all 6,203 model versions that passed some basic consistency checks for relevance showed global warming – that’s to be expected, it’s basic physics – and every one showed increasing winter rainfall in northern Europe and decreasing summer rainfall in the Mediterranean basin.

What this demonstrates is that clear messages can arise from the process of generating new model versions. But, of course, the process also generates large uncertainties. For instance, in the case of the CPDN experiment, the increase in northern European winter rainfall ranged from less than 10 per cent to more than 50 per cent, and the associated temperature increase in that region/season ranged from below 2°C to more than 8°C. Don’t take these numbers too seriously though; this experiment used an old model with a highly idealised ocean and a deliberately unrealistic scenario for changes in atmospheric greenhouse gases – deliberately unrealistic because the model was designed to address particular research aims related to the sensitivity of the global climate as a whole. The point here is simply the wide range of possibilities it generated rather than the absolute numbers.

What’s the way forward for understanding our climate future? Existing experiments, such as the CPDN project, have demonstrated how to use our models to explore future possibilities but so far there have been no experiments explicitly designed to examine the range of future climatic behaviours that should be considered at local scales. There have been no experiments targeted at generating model versions to inform societal decisions. Indeed, to do so with current ESMs would require huge computing resources and be very costly. But the prospect of such investments should not be dismissed – after all, the high-resolution EVE initiative is seeking billions of dollars for ESM research. Developing model versions to address the same problems targeted by EVE would be a better way to use such funds, were they available.

Exploring a diversity of possible responses rather than improving the resolution of existing models also changes the underlying scientific question in model-based predictions. Previously, this question was something like: ‘When is a model sufficiently realistic to predict future climate?’ With a diversity perspective, we need to ask instead: ‘When is a model too unrealistic to be considered informative about future climate?’ This is a profound and important change, which climate change science and social science need to embrace.

Our knowledge of uncertainty is also part of what we know about climate change

But focusing on high-resolution modelling is dangerous not only because we have no answer to the question of when a model is sufficiently realistic. Investing in this approach also means we don’t have the capacity to explore the uncertainties, which inevitably encourages overconfidence in the predictions that models make. This is a particular concern because Earth System Models are increasingly being used to guide decisions and investments across our societies. Overconfidence in model-based predictions therefore risks encouraging bad decisions: decisions that are optimised for the futures in our models rather than what we understand about the range of possible futures for reality.

By contrast, perturbed physics ensembles and storyline approaches focus on exploring and describing our uncertainties. Placing uncertainty front and centre is important. When we make an investment or a gamble, we don’t just base it on what we think is the most likely result. We consider the range of outcomes that we think are possible – ideally these are characterised by probabilities, although this isn’t always achievable. It’s the same with climate change. We should not make plans based solely on our best estimate of what might happen. We should also consider the range of plausible outcomes we foresee. Our knowledge of uncertainty is also part of what we know about climate change. We should embrace this knowledge, expand it and use it.
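A toy illustration of that point, with numbers invented purely for this essay: choosing a flood-defence height by optimising for the best estimate alone can leave you badly exposed to outcomes that sit within the plausible range, whereas a plan judged against the whole range trades a higher up-front cost for a far better worst case.

```python
# A toy sketch of why the range matters, not just the best estimate.
# All numbers are invented for illustration.
import numpy as np

# Peak-flood levels (metres) that our understanding cannot rule out
plausible_floods = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
best_estimate = 1.5

def total_cost(defence_height, flood_level):
    build_cost = 10 * defence_height                     # cost grows with height
    damage = 100 if flood_level > defence_height else 0  # overtopping is very costly
    return build_cost + damage

plan_a = best_estimate            # optimised for the best estimate only
plan_b = plausible_floods.max()   # robust to the whole plausible range

for name, height in [("best-estimate plan", plan_a), ("range-aware plan", plan_b)]:
    worst = max(total_cost(height, level) for level in plausible_floods)
    print(f"{name}: defence height {height} m, worst-case cost {worst:.0f}")
```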

If we understand the uncertainties well, we can bring our values to bear on the risks we are willing to take. Uncertainty therefore needs to be at the core of adaptation planning while also being the lens through which we judge the value of climate policy and the energy transition. In my view, climate researchers and modellers wanting to support society should focus on understanding, characterising and quantifying uncertainty, and avoid the trap of seeking climate models that make reliable predictions. They may well never exist.