

Uncertain times

The pandemic is an unprecedented opportunity – seeing human society as a complex system opens a better future for us all

by Jessica Flack & Melanie Mitchell

The view towards Milano Centrale station down via Vittor Pisani during lockdown, 29 March 2020. Photo by Nicolò Campo/LightRocket/Getty

We’re at a unique moment in the 200,000 years or so that Homo sapiens have walked the Earth. For the first time in that long history, humans are capable of coordinating on a global scale, using fine-grained data on individual behaviour, to design robust and adaptable social systems. The pandemic of 2019-20 has brought home this potential. Never before has there been a collective, empirically informed response of the magnitude that COVID-19 has demanded. Yes, the response has been ambivalent, uneven and chaotic – we are fumbling in low light, but it’s the low light of dawn.

At this historical juncture, we should acknowledge and exploit the fact we live in a complex system – a system with many interacting agents, whose collective behaviour is usually hard to predict. Understanding the key properties of complex systems can help us clarify and deal with many new and existing global challenges, from pandemics to poverty and ecological collapse.

In complex systems, the last thing that happened is almost never informative about what’s coming next. The world is always changing – partly due to factors outside our control and partly due to our own interventions. In the final pages of his novel One Hundred Years of Solitude (1967), Gabriel García Márquez highlights the paradox of how human agency at once enables and interferes with our capacity to predict the future, when he describes one of the characters translating a significant manuscript:

Before reaching the final line, however, he had already understood that he would never leave that room, for it was foreseen that the city of mirrors (or mirages) would be wiped out by the wind and exiled from the memory of men at the precise moment when Aureliano Babilonia would finish deciphering the parchments.

Our world is not so different from the vertiginous fantasies of Márquez – and the linear thinking of simple cause-effect reasoning, to which the human mind can default, is not a good policy tool. Instead, living in a complex system requires us to embrace and even harness uncertainty. Instead of attempting to narrowly forecast and control outcomes, we need to design systems that are robust and adaptable enough to weather a wide range of possible futures.

Think of hundreds of fireflies flashing together on a summer’s evening. How does that happen? A firefly’s decision to flash is thought to depend on the flashing of its neighbours. Depending on the copying rule they’re using, this coordination causes the group to synchronise in either a ‘bursty’ or ‘snappy’ fashion. In her book Patterns of Culture (1934), the anthropologist Ruth Benedict argued that each part of a social system depends on its other parts in circuitous ways. Not only are such systems nonlinear – the whole is more than the sum of the parts – but the behaviour of the parts themselves depends on the behaviour of the whole.
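As a rough illustration of how such coupling produces synchrony, the sketch below simulates the classic Kuramoto model of coupled oscillators. It is not the specific copying rule fireflies are thought to use, and every parameter (the number of oscillators, the coupling strength, the spread of natural frequencies) is chosen only for illustration: each 'firefly' nudges its flashing phase towards the group rhythm, and above a certain coupling strength the swarm locks into near-perfect synchrony.

```python
import numpy as np

def kuramoto_order(n=200, coupling=1.5, steps=2000, dt=0.05, seed=0):
    """Simulate n phase oscillators ('fireflies') that nudge their phases
    towards the group rhythm (the mean-field Kuramoto model).  Returns the
    order parameter r: r near 0 means incoherent flashing, r near 1 means
    near-perfect synchrony."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)           # each firefly's natural frequency
    theta = rng.uniform(0.0, 2 * np.pi, n)    # initial flashing phases
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(mean_field), np.angle(mean_field)
        # each oscillator is pulled towards the collective phase psi,
        # more strongly the more coherent the group already is
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

print(kuramoto_order(coupling=0.2))   # weak coupling: stays incoherent (r small)
print(kuramoto_order(coupling=1.5))   # strong coupling: synchronises (r near 1)
```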

Like swarms of fireflies, all human societies are collective and coupled. Collective, meaning it is our combined behaviour that gives rise to society-wide effects. Coupled, in that our perceptions and behaviour depend on the perceptions and behaviour of others, and on the social and economic structures we collectively build. As consumers, we note a shortage of toilet paper at the supermarket, so we hoard it, and then milk, eggs and flour, too. We see our neighbours wearing masks, so put on a mask as well. Traders in markets panic upon perceiving a downward trend, follow the herd and, to echo Márquez, end up causing the precipitous drop they fear.

These examples capture how the collective results of our actions feed back, in both virtuous and vicious circles, to affect the system in its entirety – reinforcing or changing the patterns we initially perceived, often in nonobvious ways. For instance, some coronavirus contact-tracing apps can inform users of the locations of infected persons so they can be avoided. This kind of coupling between local behaviour and society-wide information is appealing because it seems to simplify decision-making for busy individuals. Yet we know from many years of work on swarming and synchronicity – think of the flashing fireflies – that the dynamics of coupled systems can be surprising.

A recent study in Nature Physics found that transitions to orderly states, such as schooling in fish (all fish swimming in the same direction), can be caused, paradoxically, by randomness, or ‘noise’, feeding back on itself. That is, a misalignment among the fish causes further misalignment, eventually inducing a transition to schooling. Most of us wouldn’t guess that noise can produce predictable behaviour. The result invites us to consider how technology such as contact-tracing apps, although informing us locally, might negatively affect our collective movement. If each of us changes our behaviour to avoid the infected, we might generate the very collective pattern we had aimed to avoid: higher levels of interaction between the infected and the susceptible, or high levels of interaction among the asymptomatic.

Complex systems also suffer from a special vulnerability to events that don’t follow a normal distribution or ‘bell curve’. When events are distributed normally, most outcomes are familiar and don’t seem particularly striking. Height is a good example: it’s pretty unusual for a man to be over 7 feet tall; most adults are between 5 and 6 feet, and there is no known person over 9 feet tall. But in collective settings where contagion shapes behaviour – a run on the banks, a scramble to buy toilet paper – the probability distributions for possible events are often heavy-tailed. There is a much higher probability of extreme events, such as a stock market crash or a massive surge in infections. These events are still unlikely, but they occur more frequently and are larger than would be expected under normal distributions.
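To see what ‘heavy-tailed’ means in practice, compare how often extreme values turn up under a bell curve and under a heavy-tailed distribution with the same spread. The sketch below uses a Student-t distribution purely as a stand-in for contagion-driven outcomes; the sample size and thresholds are illustrative, not a model of any particular market or epidemic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

normal = rng.normal(0.0, 1.0, n)       # thin-tailed 'bell curve'
t3 = rng.standard_t(df=3, size=n)
heavy = t3 / t3.std()                  # heavy-tailed, rescaled to the same std

for k in (3, 4, 5):
    p_normal = np.mean(np.abs(normal) > k)
    p_heavy = np.mean(np.abs(heavy) > k)
    print(f"more than {k} std devs out: bell curve {p_normal:.1e}, heavy tail {p_heavy:.1e}")
```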

Learning changes an agent’s behaviour. This in turn changes the behaviour of the system

What’s more, once a rare but hugely significant ‘tail’ event takes place, this raises the probability of further tail events. We might call them second-order tail events; they include stock market gyrations after a big fall, and earthquake aftershocks. The initial probability of second-order tail events is so tiny it’s almost impossible to calculate – but once a first-order tail event occurs, the rules change, and the probability of a second-order tail event increases.

The dynamics of tail events are complicated by the fact they result from cascades of other unlikely events. When COVID-19 first struck, the stock market suffered stunning losses followed by an equally stunning recovery. Some of these dynamics are potentially attributable to former sports bettors, with no sports to bet on, entering the market as speculators rather than investors. The arrival of these new players might have increased inefficiencies, and allowed savvy long-term investors to gain an edge over bettors with different goals. In a different context, we might eventually see the explosive growth of Black Lives Matter protests in 2020 as an example of a third-order tail event: a ‘black swan’, precipitated by the killing of George Floyd, but primed by a virus that disproportionately affected the Black community in the United States, a recession, a lockdown and widespread frustration with a void of political leadership. The statistician and former financier Nassim Nicholas Taleb has argued that black swans can have a disproportionate role in how history plays out – perhaps in part because of their magnitude, and in part because their improbability means we are rarely prepared to handle them.

One reason a first-order tail event can induce further tail events is that it changes the perceived costs of our actions and changes the rules that we play by. This game-change is an example of another key complex-systems concept: nonstationarity. A second, canonical example of nonstationarity is adaptation, as illustrated by the arms race involved in the coevolution of hosts and parasites. Like the Red Queen and Alice in Through the Looking-Glass, parasite and host each has to ‘run’ faster, just to keep up with the novel solutions the other presents as they battle it out in evolutionary time.

Learning changes an agent’s behaviour, which in turn changes the behaviour of the system. Take a firm that fudges its numbers on quarterly earnings reports, or a high-school student who spends all her time studying specifically for a college-entrance exam rather than developing the analytical skills the test is supposed to be measuring. In these examples, a metric is introduced as a proxy for ability. Individuals in the system come to associate performance on these metrics with shareholder happiness or getting into college. As this happens, the metric becomes a target to be gamed, and as such ceases to be an objective measure of what it is purporting to assess. This is known as Goodhart’s Law, summarised by the business adage: ‘The worst thing that can happen to you is to meet your targets.’
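A toy simulation captures the logic. In the hypothetical sketch below, each student splits a fixed budget of effort between building genuine ability and coaching aimed purely at the exam. While no one games the metric, the score tracks ability closely; once gaming becomes widespread, the correlation collapses. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

def score_ability_correlation(max_gaming):
    """Each student has a fixed budget of effort.  'gaming' is the share
    spent on test-specific coaching, which inflates the exam score but
    does nothing for the ability the exam is meant to measure.
    All numbers here are invented for illustration."""
    effort = rng.uniform(0.5, 1.5, n)
    gaming = rng.uniform(0.0, max_gaming, n)
    ability = effort * (1 - gaming) + rng.normal(0, 0.05, n)
    score = effort * (1 - gaming) + 2.0 * effort * gaming + rng.normal(0, 0.05, n)
    return np.corrcoef(score, ability)[0, 1]

print("metric not yet a target:", round(score_ability_correlation(0.0), 2))
print("metric gamed heavily:   ", round(score_ability_correlation(1.0), 2))
```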

Another type of nonstationarity relates to a concept we call information flux. The system might not be changing, but the amount of information we have about it is. While learning concerns the way we use the information available, information flux relates to the quality of the data we use to learn. At the beginning of the pandemic, for example, there was a dramatic range of estimates of the asymptomatic transmission rate. This variation partly came from learning how to make a good model of the COVID-19 contagion, but it was also due to information flux: because an epidemic grows from a small number of initial cases, early on only a few people have been infected. This makes for sparse data on the numbers of asymptomatic and symptomatic individuals, not to mention the number of people exposed. Early on, noise in the data tends to overwhelm the signal, making learning very difficult indeed.

These forms of nonstationarity mean biological and social systems will be ‘out of equilibrium’, as it’s known in the physics and complex systems literature. One of the biggest hazards of living in an out-of-equilibrium system is that even interventions informed by data and modelling can have unintended consequences. Consider government efforts to enforce social distancing to flatten the COVID-19 infection curve. Although social distancing has been crucial in slowing the infection rate and helping to avoid overwhelming hospitals, the strategy has created a slew of second- and third-order biological, sociological and economic effects. Among them are massive unemployment, lost profit, market instability, mental health issues, increase in domestic violence, social shaming, neglect of other urgent problems such as climate change and, perhaps most importantly, second-order interventions such as the reserve banks injecting liquidity into the markets, governments passing massive stimulus bills to shore up economies, and possible changes to privacy laws to accommodate the need to enforce social distancing and perform contact-tracing.

Do the properties of complex systems mean prediction and control are hopeless enterprises? They certainly make prediction hard, and favour scenario planning for multiple eventualities instead of forecasting the most likely ones. But an inability to predict the future doesn’t preclude the possibility of security and quality of life. Nature, after all, is full of collective, coupled systems with the same properties of nonlinearity and nonstationarity. We should therefore look to the way biological systems cope, adapt and even thrive under such conditions.

Before we turn to nature, a few remarks about human engineering. Our species has been attempting to engineer social and ecological outcomes since the onset of cultural history. That can work well when the engineering is iterative, ‘bottom up’ and takes place over a long time. But many such interventions have been impotent or, worse, disastrous, as discussed by the anthropologist Steve Lansing in his book Priests and Programmers: Technologies of Power in the Engineered Landscape of Bali (2007). In one section, Lansing compares the effective, 1,000-year-old local water distribution system in Bali with the one imposed by central government engineers during the 20th-century green revolution. This top-down approach disrupted the fragile island and its shoreline ecosystems, and undermined collective governance.

Fiascos happen when we use crude data to make qualitative decisions. Other reasons include facile understandings of cause and effect, and the assumption that the past contains the best information about the future. This kind of ‘backward looking’ prediction, with a narrow focus on the last ‘bad’ event, leaves us vulnerable to perceptual blindness. Take how the US responded to the terrorist attacks of 11 September 2001 by investing heavily in terrorism prevention, at the expense of other problems such as healthcare, education and global poverty. Likewise, during the COVID-19 crisis, a deluge of commentators has stressed investment in healthcare as the key issue. Healthcare is neglected and important, as the pandemic has made clear – but to put it at the centre of our efforts is to again be controlled by the past.

Fans of The Lord of the Rings might remember the character Aragorn’s plan to draw Sauron’s eye to the Black Gate, so that the protagonists Frodo and Sam could slip into Sauron’s realm via another route (the lair of a terrifying spider-like monster). The plan relied on Sauron’s fear of the past, when Aragorn’s ancestor cut the powerful ring at the centre of the story from Sauron’s finger. The point is that narrow, emotionally laden focus effectively prevents us from perceiving other problems even when they are developing right under our noses. In complex systems, it is critical to build safeguards against this tendency – which, on a light-hearted note, we name Sauron’s bias.

There are better ways to make consequential, society-wide decisions. As the mathematician John Allen Paulos remarked about complex systems: ‘Uncertainty is the only certainty there is. And knowing how to live with insecurity is the only security.’ Instead of prioritising outcomes based on the last bad thing that happened – applying laser focus to terrorism or inequality, or putting vast resources into healthcare – we might take inspiration from complex systems in nature and design processes that foster adaptability and robustness for a range of scenarios that could come to pass.

This approach has been called emergent engineering. It’s profoundly different from traditional engineering, which is dominated by forecasting, trying to control the behaviour of a system and designing it to achieve specific outcomes. By contrast, emergent engineering embraces uncertainty as a fact of life that’s potentially constructive.

When applied to society-wide challenges, emergent engineering yields a different kind of problem-solving. Under a policy of constructive uncertainty, for example, individuals might be guaranteed a high minimum quality of life, but wouldn’t be guaranteed social structures or institutions in any particular form. Instead, economic, social and other systems would be designed so that they can switch states fluidly, as context demands. This would require a careful balancing act between questions of what’s good and right on the one hand – fairness, equality, equal opportunity – and a commitment to robustness and adaptability on the other. It is a provocative proposal, and experimenting with it, even on a relatively small scale as in healthcare or financial market design, will require wading through a quagmire of philosophical, ethical and technical issues. Yet nature’s success suggests it has potential.

The human heart confers robustness by beating to a rhythm that’s neither chaotic nor periodic but fractal

Consider that the human body is remarkably functional given all that could go wrong with its approximately 30 trillion cells (and 38 trillion bacterial cells in the body’s microbiome). Nature keeps things working with two broad classes of strategy. The first ensures that a system will continue to function in the face of disturbances or ‘perturbations’; the second enables a system to reduce uncertainty but allow for change, by letting processes proceed at different timescales.

The first strategy relies on what are known as robustness mechanisms. They allow systems to continue to operate smoothly even when perturbations damage key components. For example, gene expression patterns are said to be robust if they do not vary in the face of environmental or genetic perturbations such as mutations. There are many mechanisms that make this invariance possible, and much debate about how they work, but we can simplify here to give the basic idea. One example is shadow enhancers: partially redundant DNA sequences that regulate genes and work together to keep gene expression stable when a mutation occurs. Another example is gene duplication in which genes have a backup copy with partial functional overlap. This redundancy can allow the duplicate to compensate if the original gene is damaged.
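The logic of redundancy can be sketched in a few lines. The toy below is not a model of real gene regulation; it simply shows how two partially overlapping ‘enhancers’ driving a saturating response keep output close to normal when either one is knocked out, while losing both breaks the system.

```python
def expression(enhancer_a=True, enhancer_b=True):
    """Toy gene-expression level driven by two partially redundant
    'enhancers'.  The response saturates, so one enhancer alone already
    gets most of the way to full output.  (Illustrative only, not a
    model of real regulation.)"""
    drive = (1.0 if enhancer_a else 0.0) + (1.0 if enhancer_b else 0.0)
    return drive / (0.3 + drive)

print("both enhancers intact:  ", round(expression(True, True), 2))   # ~0.87
print("enhancer A knocked out: ", round(expression(False, True), 2))  # ~0.77
print("both knocked out:       ", round(expression(False, False), 2)) # 0.0
```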

Robustness mechanisms can be challenging to build in both natural and engineered systems, because their utility isn’t obvious until something goes wrong. They require anticipating the character of rare but damaging perturbations. Nature nonetheless has discovered a rich repertoire of robustness mechanisms. Reconciliation – making up after fights and restoring relationships to a preconflict baseline – isn’t just a human invention. It’s common throughout the animal kingdom and has been observed in many different species. In a different context, the complex structure of the human heart is thought to confer robustness to perturbations at a wide range of scales by beating to a rhythm that is neither chaotic nor periodic but has a fractal structure. Robust design, in contrast to typical approaches in engineering, focuses on discovering mechanisms that maintain functionality under changing or uncertain environments.

Nature has another set of tricks up her sleeve. The timescales on which a system’s processes run have critical consequences for its ability to predict and adapt to the future. Prediction is easier when things change slowly – but if things change too slowly, it becomes hard to innovate and respond to change. To solve this paradox, nature builds systems that operate on multiple timescales. Genes change relatively slowly but gene expression is fast. The outcomes of fights in a monkey group change daily but their power structure takes months or years to change. Fast timescales – monkey fights – have more uncertainty, and consequently provide a mechanism for social mobility. Meanwhile, slow timescales – power structures – provide consistency and predictability, allowing individuals to figure out the regularities and develop appropriate strategies.

The degree of timescale separation between fast and slow dynamics matters too. If there’s a big separation and the power structure changes very slowly, no amount of fight-winning will get a young monkey to the top – even if that monkey, as it gained experience, became a really gifted fighter. A big separation means it will take a long time for ‘real’ information at the individual level – eg, that the young monkey has become a good fighter – to be reflected in the power structure. Hence, if the power structure changes too slowly, although it might guard against meaningless fluctuations at the individual level, it won’t be informative about the real regularities – about who can actually use force successfully – when things, such as the ability of our young monkey, really do change.
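A minimal sketch of this trade-off: treat the ‘power structure’ as a running average of daily fight outcomes, updated either slowly or quickly. The slow version filters out meaningless day-to-day noise but takes a long time to register that our young monkey has genuinely improved; the fast version registers the change promptly but jumps around with every fight. The update rates and probabilities below are invented for illustration.

```python
import numpy as np

def tracked_power(update_rate, days=400, change_day=200, seed=3):
    """Toy 'power score': a running average of daily fight outcomes
    (1 = win, 0 = loss).  The monkey's true ability jumps at change_day.
    A small update_rate means strong timescale separation between daily
    fights and the slowly moving power structure.  Numbers are illustrative."""
    rng = np.random.default_rng(seed)
    true_win_prob = np.where(np.arange(days) < change_day, 0.3, 0.8)
    outcomes = rng.random(days) < true_win_prob
    power, history = 0.3, []
    for won in outcomes:
        power += update_rate * (float(won) - power)  # exponential moving average
        history.append(power)
    return history

slow = tracked_power(update_rate=0.01)
fast = tracked_power(update_rate=0.3)
print("power score 50 days after the monkey improves:")
print("  slow-moving structure:", round(slow[249], 2))  # still lags the real change
print("  fast-moving structure:", round(fast[249], 2))  # tracks it, but is noisy day to day
```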

Furthermore, sometimes the environment requires the system as a whole to innovate, but sometimes it demands quiescence. That means there’s a benefit to being able to adjust the degree of timescale separation between the fast and slow processes, depending on whether it’s useful for a change at the ‘bottom’ to be felt at the ‘top’. These points circle us back to our earlier remarks about nonstationarity – the degree of timescale separation is a way of balancing trade-offs caused by different types of nonstationarity in the system.

The detailed mechanisms by which nature accomplishes timescale separation are still largely unknown and an active area of scientific investigation. However, humans can still take inspiration from the timescale-separation idea. When we design systems of the future, we could build in mechanisms that enable users – such as market engineers and policymakers – to tune the degree of timescale separation or coupling between individual behaviour on the one hand, and institutions or aggregate variables such as stock returns or time in elected office on the other. We have crude versions of this already. Financial markets are vulnerable to crashes because of an inherent lack of timescale separation between trading and stock market indices, such that it’s possible in periods of panic-selling for an index to lose substantial value in a matter of hours. In recognition of this property, market engineers introduced what’s called a ‘circuit breaker’ – a rule for pausing trading when signs of a massive drop are detected. The circuit breaker doesn’t really tune the separation between trades and index performance, though. It simply halts trading when a crash seems likely. A more explicit tuning approach would be to slow down trading during dangerous periods by limiting the magnitude or frequency of trades in a given window, and to allow trading to proceed at will when the environment is more predictable. There are many possible alternative tuning mechanisms; which is best suited to markets is ultimately an empirical question.
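The contrast can be made concrete with two toy rules, neither of which reflects any exchange’s actual regulations: a circuit breaker that simply halts trading once the intraday drop passes a threshold, and a throttle that shrinks permitted trade sizes smoothly as volatility rises, explicitly slowing the fast timescale relative to the slow one. The thresholds and sizes below are illustrative.

```python
def circuit_breaker(intraday_returns, halt_threshold=-0.07):
    """Halt trading for the rest of the day once the cumulative intraday
    loss passes the threshold.  (The threshold is illustrative, not any
    exchange's actual rule.)"""
    cumulative = 0.0
    for r in intraday_returns:
        cumulative += r
        if cumulative <= halt_threshold:
            return "halted"
    return "open all day"

def throttled_trade_size(recent_volatility, base_size=100,
                         calm_volatility=0.01, max_slowdown=10):
    """Alternative 'tuning' rule: rather than an on/off halt, shrink the
    permitted trade size smoothly as volatility rises, slowing the fast
    (trading) timescale relative to the slow (index) one."""
    slowdown = min(max_slowdown, max(1.0, recent_volatility / calm_volatility))
    return base_size / slowdown

print(circuit_breaker([-0.02, -0.03, -0.03]))        # cumulative -8%: halted
print(throttled_trade_size(recent_volatility=0.05))  # turbulent day: smaller trades
print(throttled_trade_size(recent_volatility=0.005)) # calm day: full size
```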

Stock market crashes are a bridge to another of nature’s fascinating properties: the presence of tipping points or critical points, as they’re called in physics. When a system ‘sits’ near a critical point, a small shock can cause a big shift. Sometimes, this means a shift into a new state – a group of fish shoaling (weakly aligned) detects a shark (the shock) and switches to a school formation (highly aligned), which is good for speedy swimming and confusing the predator. These tipping points are often presented in popular articles as something to avoid, for example, when it comes to climate change. But, in fact, as the shark example illustrates, sitting near a critical point can allow a system to adapt appropriately if the environment changes.

As with timescale separation, tipping points can be useful design features – if distance from them can be modulated. For example, in a recent study of a large, captive monkey society, it was found that the social system was near a critical point, such that a small rise in agitation – perhaps caused by a hot afternoon – could set off a cascade of aggression, nudging the group from a peaceful state into one in which everyone is fighting. In this group there happened to be powerful individuals who policed conflict, breaking up fights impartially. By increasing or decreasing their frequency of intervention, these individuals could be tuning the group’s sensitivity to perturbations – how far the aggression cascades travel – and thereby tuning distance from the critical point.
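A branching-process sketch shows how policing could tune distance from the critical point. Assume, purely for illustration, that each fight provokes a random number of further fights, and that policing scales down the average number provoked; when that average sits near one, the system is at its critical point and cascades blow up, while stronger policing keeps them small.

```python
import numpy as np

def average_cascade_size(provocation_rate, policing_strength,
                         trials=2000, max_fights=10_000, seed=4):
    """Branching-process sketch of aggression cascades.  Each fight provokes,
    on average, provocation_rate * (1 - policing_strength) further fights.
    A mean near 1 is the critical point, where cascades become enormous;
    policing tunes the group away from it.  (All parameters are illustrative.)"""
    rng = np.random.default_rng(seed)
    mean_offspring = provocation_rate * (1 - policing_strength)
    sizes = []
    for _ in range(trials):
        active, total = 1, 1
        while active and total < max_fights:
            new = rng.poisson(mean_offspring * active)
            total += new
            active = new
        sizes.append(total)
    return round(np.mean(sizes), 1)

print("little policing:", average_cascade_size(1.0, policing_strength=0.05))
print("active policing:", average_cascade_size(1.0, policing_strength=0.4))
```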

We still don’t know how widespread this sort of tuning is in biological systems. But like degree of timescale separation, it’s something we can build into human systems to make them more fluid and adaptive – and therefore better able to respond to volatility and shocks. In the case of healthcare, that might mean having the financial and technological capacity to build and dismantle temporary treatment facilities at a moment’s notice, perhaps using 3D-printed equipment and biodegradable or reusable materials. In the economy, market corrections that burst bubbles before they get too large serve this function to some extent – they dissipate energy that has built up within the system, but keep the cascade small enough that the market isn’t forced into a crash.

We are not perfect information processors. We make mistakes. The same is true of markets

Climate-change activists warning about tipping points are right to worry. Problems arise when the distance from the critical point can’t be tuned, when individuals make errors (such as incorrectly thinking a shark is present), and when there’s no resilience in the system – that is, no way back to an adaptive state after a system has been disturbed. Irreversible perturbations can lead to complete reconfigurations or total system failure. Reconfiguration might be necessary if the environment has changed, but it will likely involve a costly transition, in that the system will need time and resources to find satisfactory solutions to the new environment. When the world is moderately or very noisy – filled with random, uninformative events – sensitivity to perturbations is dangerous. But it’s useful when a strategic shift is warranted (eg, a predator appears) or when the environment is fundamentally changing and the old tactics simply won’t do.

One of the many challenges in designing systems that flourish under uncertainty is how to improve the quality of information available in the system. We are not perfect information processors. We make mistakes and have a partial, incomplete view of the world. The same is true of markets, as the investor Bill Miller has pointed out. This lack of individual omniscience can have positive and negative effects. From the system’s point of view, many windows on the world afford multiple independent (or semi-independent) assessments of the environment, which provide a form of ‘collective intelligence’. However, each individual would also like a complete view, and so is motivated to copy, share and steal information from others. Copying and observation can facilitate individual learning, but at the same time they tend to reduce the independence and diversity that are valuable for the group as a whole. A commonly cited example is the so-called herd mentality of traders who, seeing others sell, panic and sell their own shares.

For emergent engineering to succeed, we need to develop a better understanding of what makes a group intelligent. What we do know is that there seem to be two phases, or parts, to the process: an accumulation phase, in which individuals collect information about how the world works, and an aggregation phase, in which that information is pooled. We also know that if individuals are bad at collecting good information – if they misinterpret data because of their own biases or are overconfident in their assessments – an aggregation mechanism can compensate.
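A toy ‘wisdom of crowds’ simulation illustrates both phases, and why copying is corrosive. In the hypothetical sketch below, each person forms a noisy, personally biased estimate of some quantity; taking the median across independent estimators beats any individual, but once everyone anchors on one loud voice, the aggregate is barely better than that individual. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
true_value = 100.0
n_people = 50
trials = 2000

def estimation_errors(copying):
    """Each person forms a noisy, personally biased estimate of some quantity.
    If copying is True, everyone anchors heavily on one loud individual,
    destroying the independence that aggregation relies on.
    (All numbers are invented for illustration.)"""
    individual, aggregated = [], []
    for _ in range(trials):
        biases = rng.normal(0, 5, n_people)       # diverse personal biases
        noise = rng.normal(0, 10, n_people)       # accumulation errors
        estimates = true_value + biases + noise
        if copying:
            estimates = 0.2 * estimates + 0.8 * estimates[0]
        individual.append(abs(estimates[0] - true_value))
        aggregated.append(abs(np.median(estimates) - true_value))
    return round(np.mean(individual), 1), round(np.mean(aggregated), 1)

print("independent estimates (one person, group median):", estimation_errors(False))
print("herded estimates      (one person, group median):", estimation_errors(True))
```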

One example of an aggregation mechanism is the PageRank algorithm used early on in Google searches. PageRank worked by giving more weight to pages with many incoming links, especially links from pages that were themselves highly ranked. Another kind of aggregation mechanism might discount the votes of individuals who are prone to reach the same conclusion because they use the same reasoning process, thereby undermining diversity. Or take the US electoral college, which was originally conceived to ‘correct’ the popular vote so that population-dense areas didn’t entirely control election outcomes. If, on the other hand, implementing or identifying good aggregation mechanisms is hard – there are, for example, many good arguments against the electoral college – it might be possible to compensate by investing in improving the information-accumulation capacity of individuals, so that common cognitive biases such as overconfidence, anchoring and loss-aversion are less likely to arise in the first place. That said, in thinking through how to design aggregation algorithms that optimise for collective intelligence, ethical issues concerning privacy and fairness also present themselves.
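For the curious, here is a minimal version of the idea behind PageRank, mentioned above, written as a standard power iteration on a small, made-up link graph. It is a sketch of the published algorithm’s core, not Google’s production system; the damping factor and the example links are illustrative.

```python
import numpy as np

def pagerank(links, damping=0.85, iterations=100):
    """Minimal PageRank by power iteration.  links[i] lists the pages that
    page i links to.  Pages with many incoming links, especially from pages
    that are themselves well connected, end up with more weight."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        new_rank = np.full(n, (1.0 - damping) / n)
        for page, outgoing in enumerate(links):
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:                      # a dangling page spreads its weight evenly
                new_rank += damping * rank[page] / n
        rank = new_rank
    return rank

# A made-up four-page web: pages 1, 2 and 3 all point at page 0.
links = [[1], [0], [0], [0, 1]]
print(pagerank(links).round(2))        # page 2, with no incoming links, ranks lowest
```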

Rather than attempt to precisely predict the future, we have tried to make the case for designing systems that favour robustness and adaptability – systems that can be creative and responsive when faced with an array of possible scenarios. The COVID-19 pandemic provides an unprecedented opportunity to begin to think through how we might harness collective behaviour and uncertainty to shape a better future for us all. The most important term in this essay is not ‘chaotic’, ‘complex’, ‘black swan’, ‘nonequilibrium’ or ‘second-order effect’. It’s: ‘dawn’.

Published in association with the Santa Fe Institute, an Aeon Strategic Partner.