Many of us will recall Petri dishes from our first biology class – those shallow glass vessels containing a nutrient gel into which a microbe sample is introduced. In this sea of nutrients, the cells grow and multiply, allowing the colony to flourish, its cells dividing again and again. But just as interesting is how these cells die. Cell death in a colony occurs in two ways, essentially. One is through an active process of programmed elimination; in this so-called ‘apoptotic’ death, cells die across the colony, ‘sacrificing’ themselves in an apparent attempt to keep the colony going. Though the mechanisms underlying apoptotic death are not well understood, it’s clear that some cells benefit from the local nutrient deposits of dying cells in their midst, while others seek nutrition at the colony’s edges. The other kind of colony cell death is the result of nutrient depletion – a death induced by the impact of decreased resources on the structure of the waning colony.
Both kinds of cell death have social parallels in the human world, but the second type is less often studied, both because any colony’s focus is on sustainable development, and because a colony in crisis is disarmed by suddenly having to focus on hoarding resources. At such times, the cells in a colony huddle together at the centre to preserve energy (they even develop protective spores to conserve heat). While individual cells at the centre slow down, become less mobile and eventually die – not from any outside threat, but from their own dynamic decline – life at the edges of such colonies remains, by contrast, dynamic. Are such peripheral cells seeking nourishment, or perhaps, in desperation, an alternative means to live?
But how far can we really push this metaphor: are human societies the same? As they age under confinement, do they become less resilient? Do they slow down as resources dwindle, and develop their own kinds of protective ‘spores’? And do these patterns of dying occur because we’ve built our social networks – like cells growing together with sufficient nutrients – on the naive notion that resources are guaranteed and infinite? Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they tighten their ‘inclusion criteria’: they value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice.
Viewed this way, the comparison seems compelling. In crisis, the colony introverts, collapsing inwards as inequalities escalate and there’s not enough to go around. As we’ve seen during the COVID-19 pandemic, people in crisis define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’.
Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists.
In recent years, there’s been growing attention to what drives group size – and what the implications are for how we build alliances, how we see ourselves and others, and who ‘belongs’ and who doesn’t. Of course, with the advent of social media, our understanding of what a group is has fundamentally changed.
The British anthropologist Robin Dunbar popularised the question of group size in his book How Many Friends Does One Person Need? (2010). In that study, he took on the challenge of relating the question of group size to our understanding of social relationships. His interest was based on his early studies of group behaviour in nonhuman primates, and his comparison of group sizes among tribal clans. Dunbar realised that clans of more than 150 people tend to split. Averaging the sizes of some 20 clan groups, he arrived at 153 members as their generalised limit.
However, as we all know, ‘sympathy groups’ (those built on meaningful relationships and emotional connections) are much smaller. Studies of grieving, for example, show that our number of deep relationships (as measured by extended grieving following the death of a sympathy-group member) reaches its upper limit at around 15 people, though some researchers put that number even lower, at 10, while others still focus on close support groups that average around five people.
For Dunbar, 150 is the optimal size of a personal network (even if Facebook thinks we have more like 500 ‘friends’), while management specialists think that this number represents the upper limit of cooperation. In tribal contexts, where agrarian or hunting skills might be distributed across a small population, the limiting number is taken to indicate the point after which hierarchy and specialisation emerge. Indeed, military units, small egalitarian companies and innovative think-tanks seem to top out somewhere between 150 and 200 people, depending on the strength of shared conventional understandings.
Though it’s tempting to think that 150 represents both the limits of what our brains can accommodate in assuring common purpose, and the place where complexity emerges, the truth is different; for the actual size of a group successfully working together is, it turns out, less important than our being aware of what those around us are doing. In other words, 150 might be an artefact of social agreement and trust, rather than a biologically determined structural management goal, as Dunbar and so many others think. We know this because it’s the limit after which hierarchy develops in already well-ordered contexts. But we also know this because of the way that group size shrinks radically in the absence of social trust. When people aren’t confident about what proximate others are mutually engaged in, the relevant question quickly turns from numbers of people in a functioning network to numbers of potential relationships in a group. So, while 153 people might constitute a maximum ideal clan size, based on brain capacity, 153 relationships exist in a much smaller group: exactly 153 pairwise relationships exist among just 18 people.
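To see the arithmetic behind that last claim (a simple combinatorial check, spelled out here rather than in Dunbar’s own text): the number of distinct pairwise relationships among $n$ people is

\[
\binom{n}{2} = \frac{n(n-1)}{2}, \qquad \text{and} \qquad \binom{18}{2} = \frac{18 \times 17}{2} = 153.
\]

A clan of 153 people, by contrast, contains $\binom{153}{2} = 11{,}628$ possible relationships, far more than any member could individually keep track of.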
Smaller college size facilitates growing trust among strangers, making for better educational experiences
Dunbar’s number should actually be 18, since, under stress, the quality of your relationships matters much more than the number of people in your network. The real question is not how many friends a person can have, but how many people with unknown ideas can be put together and still manage to create a common purpose, bolstered by social rules or cultures of practice (such as the need to live or work together). Once considered this way, anyone can understand why certain small elite groups devoted to creative thinking are sized so similarly.
Take small North American colleges. Increasingly, they vie with big-name universities such as Harvard and Stanford not only because worried parents consider them safer environments, but because their smaller size facilitates growing trust among strangers, making for better educational experiences. It’s no accident that the best of these colleges average about 150 teaching staff (Dunbar’s number), or that (as any teacher will know) a seminar in which you expect everyone to talk tops out at around 18 people.
But what do we learn from these facts? Well, we can learn quite a bit. While charismatic speakers can wow a crowd, even the most gifted seminar leader will tell you that his or her ability to involve everyone starts to come undone as you approach 20 people. And if any of those people require special attention (or can’t tolerate ideological uncertainty), that number will quickly shrink.
In the end, therefore, what matters much more than group size is social integration and social trust. As for Facebook’s or Dunbar’s question of how many ‘friends’ we can manage, the real question ought to be: how healthy is the Petri dish? To determine this, we need to assess not how strong the dish’s bastions are (an indicator of what it fears) but its ability, as with the small North American college, to engage productively and creatively in extroverted risk. And that’s a question that some other cultures have embraced much better than even North American colleges.
On the Indonesian island of Bali, a village isn’t a community unless it has three temples: one for the dead ancestors and things past (pura dalem); a community temple that manages social life (pura desa); and a temple of origin (pura puseh). This last temple is what literally ties an individual self to a particular place. For the word puseh means ‘navel’.
To this last temple every Balinese is connected by a spiritual umbilicus, and every 210 days (that is, every Balinese year) a person thus tied is obliged to return physically to honour that connectedness, becoming again a metaphorical stem cell: returning to their place of origin, examining their patterns of growth, and using their ‘stem’ in the interests of restructuring a healthier future. The stem cell, of course, is the recursive place where embryologists gather cells to regrow us more healthily; and, in Bali, extroversion is health-enhancing only once we bring back what we learn to where we began. Neglecting this originary connection can cause grave harm, and being far removed, or abroad for an extended period, risks snapping that cord if stretched too far, severing the very lifeline to one’s own past, present and future.
But why stretch your umbilicus at all if potential outcomes might be dire? Because boundary exploration helps us define who we are; because the unfamiliar makes us conscious of what’s central; because we need to approach things that are unusual if we’re to diversify and grow. It’s the idea behind the avant-garde (literally, the advance guard) – the original French term referred to a small group of soldiers dispatched to explore the terrain ahead so as to test the enemy. You could stay put and remain ignorant, or go too far and get killed. Alternatively, you might go just far enough to learn something and come back to describe what you’d witnessed. It’s a simple idea, part of every vision quest, and filled with deep uncertainty.
Indeed, the very uncertainty of exploration is critical to adaptation and growth. Our shared values (the ‘cultures’ we think we know at the centre of the Petri dish) are always explicitly defined at the peripheries, where we become more aware of our assumptions. And if there’s no wall or Petri dish to contain us, we need to have that umbilicus: because we need a device to measure how far is too far. This being the case, it follows that curiosity is critical to rethinking what we take for granted. It can make us better informed, but it can also get us into trouble. When will the umbilicus snap? How far is too far? These are good questions that once again might be illuminated by a biological example. The human immune system is the best one I know.
For a long time, science told us that immunity was about defending ourselves from foreign invaders. This model explains the way we resist becoming host to lots of foreign things that could destroy us – it’s how the body resists becoming a toxic dump site. It also animates the way we teach schoolchildren about washing hands and, today, donning masks and remaining socially distant.
Viruses are not living invaders. They’re just information that can sit around like books in our genetic library
Setting aside its inherent xenophobia (keep out all things foreign), the defence model works well enough. But there’s a big problem with this simple idea: we need knowledge of the foreign landscape and its inhabitants in order to adapt. Indeed, we build immunity on the back of dendritic (presentation) cells that, like the military advance guard, bring back to our bodies specific information that we assess and respond to.
While it’s true that, in this sense, we’re reacting ‘defensively’ when we adapt, that’s pretty much where the utility of the military metaphor ends – and where modern immunity begins to challenge what immunologists have defined for decades as the ‘recognition and elimination of nonself’. The metaphor fails because viruses are not living invaders. They are just information that can sit around like books in our genetic library until someone reads them, revising what they mean through some editorial updating, and then bringing the information they offer to life once again, in a new form.
Moreover, like books in a lending library, some viruses remain unread, while others are widely used. Some are dusty, some dog-eared. That’s because viruses proliferate only when people congregate in reading groups and animate them; where what those groups attend to is socially, not biologically, driven. Like those books, viruses are just bits of data that our bodies interpret and share with others, for better or worse. This is a process that happens every day, and mostly for the better, especially when viral intelligence helps us to adapt, and prevents us (like isolated tribes) from dying of the common cold every time cruise ships or truckers from abroad show up at our ferries and ports.
But there’s another reason that invasive images fail to explain the science. In 1994, the immunologist Polly Matzinger introduced an immune system model in which our antibodies don’t respond solely as a matter of defence. They respond, in her view, because antigen-presenting (dendritic) cells stimulate immunologic responses. Although the immune system remains defensive in this view, Matzinger’s argument shifted the debate ever so slightly from levels of self-preservation to information-presentation – from excluding outsiders to understanding them.
The idea was radical in immunologic science, but mundane in anthropology. Countless anthropological arguments saying much the same thing about the self and its awareness of ‘the other’ had been around for more than a century (and obvious to other cultures for millennia), but the assault on self-preservation through extroverted risk finally entered bench science with Matzinger, appearing not only as ‘new’, but in a form familiar enough to bench scientists to sound plausible.
The immune system is your biological intelligence. It needs the ‘infection’ of foreign bodies to help you survive
Now, if belatedly, immunology was poised to question both Darwinian preservation and selfishness in one go, as well as its own otherwise unexamined assumptions about the social and biological exclusion of ‘nonself’. Matzinger’s idea got traction, its shift from defence to curiosity calling attention to the immune system’s role in assessing the unknown (as opposed to shunning the outside).
Still, the argument would in any case be revised by three key realities. The first, which didn’t take root among theoretical immunologists until regenerative medicine emerged at the end of the 1990s, is that viruses are less invaders than informants. I’d picked up this idea from the Balinese whom I worked with during the AIDS crisis in the 1980s. But it wasn’t limited to them. Other, less ‘Cartesian’ Indigenous groups, such as the Navajo, share this understanding. The second truth, which came from the same cross-cultural experience, was that immunology was stuck in self-interest: it couldn’t fathom why a self would reach out in an extroverted and potentially dangerous manner instead of only selfishly defending its identity.
The third reality is one that scientists were slowly awakening to, though it is well known in many non-Darwinian settings: namely, that externality (extroversion) matters. So does reciprocity – as anthropologists well know. External information has to resonate with ‘self’ – in this case, with cells that your body already makes – in order to bind, transcribe and replicate. That’s the key function of our immune cells, which are made mostly in the thymus (T cells) and bone marrow (B cells). Our bodies make millions of novel cells in these mutation factories, so many in fact that we can’t even count them. Like experimental radio beams sent into outer space, these cells send out signals, functioning as much as search engines as systems of defence.
The point here is that thinking of the immune system only as a defensive fortress-builder seriously misses what it’s actually doing. Because the immune system is also, and quite literally, your biological intelligence. It needs the ‘infection’ of foreign bodies to help you develop and survive. This same need also explains how vaccines protect us from biological meltdown. Extroversion is therefore not only needed as a defence strategy, as Matzinger would have it, but as a means of engaging with and also creating environmental adaptations, even if these encounters prove life-threatening for some. We see this need manifest itself graphically in the present COVID-19 crisis – less by what is happening scientifically, than by what is happening socially.
A recent report on wellbeing and mental health by the Brookings Institution attempts to deconstruct the apparent paradox of reported feelings of hope among otherwise disadvantaged and openly disenfranchised populations in the United States during the pandemic. ‘Predominantly Black counties have COVID-19 infection rates that are nearly three times higher than that of predominantly white counties,’ the report says, ‘and are 3.5 times more likely to die from the disease compared to white populations.’ Yet those same communities also express much higher levels of optimism and hope.
The authors list various potential explanations for these higher rates of infection and death: ‘overrepresentation in “essential” jobs in the health sector and in transportation sectors where social distancing is impossible’; ‘underrepresentation in access to good health care, and their higher probability of being poor’; ‘longer-term systemic barriers in housing, opportunity, and other realms’; and being ‘more likely to have pre-existing health conditions [risk factors] such as asthma, diabetes, and cardiovascular diseases’.
Given such disadvantage, and the inability to practise social distancing, the authors understandably presume that these socially disadvantaged groups should ‘demonstrate the highest losses in terms of mental health and other dimensions of wellbeing’. However, what they discovered is the exact opposite. Not only do African Americans remain the most optimistic of all the cohorts studied, when data is controlled for race and income, they also report ‘better mental health than whites, with the most significant differences between low-income Blacks and whites’. Indeed, low-income African Americans are 50 per cent less likely to report experiencing stress than low-income whites, and (along with Hispanics) are far less likely to involve themselves in deaths born of despair than whites.
There are, of course, many complex reasons involved, including such things as community resilience and extended family ties, a belief in the merits of higher education, and a history of overcoming social inequality – some of which (like the merits of education) have declined among low-income whites. According to the authors of the Brookings Institution report, ‘the same traits that drive minority resilience in general are also protective of wellbeing and mental health in the context of the pandemic’.
Now, these factors fit well with the literature on so-called ‘post-traumatic growth’ (where overcoming threatening hurdles can be strengthening). They also conform with what has been written about ‘resilient kids’ – those children who make good on challenging backgrounds to become considerate and sometimes successful human beings. Such findings, though, can be dangerous if the only take-home message is that adversity produces resilience. Recall Herbert Spencer, the 19th-century father of Social Darwinism, who believed that stress was strengthening, and that charity only delayed what biology, in eliminating the weak, would take care of on its own. For Spencer, stress defined resilience.
Every time we look one another in the eye and nod affirmatively, we create an informal contract
And that’s the problem. Because the simple act of translating a biological story into a social one exposes a critical fallacy in the biology itself – this being that our otherwise inert genes possess the animated capacity for ‘selfishness’, even though they’re just bits of information to which our cells clearly bring life. Here, the supposedly scientific argument about determinism emerges as animated fantasy – a tendentious determinism bordering on religious fundamentalism; or a moral lesson, as E O Wilson thought of sociobiology, in which stress emerges as morally and allegorically conditional. The only problem is, well, that’s just not what’s happening.
Stress, to be clear, is neither good nor bad. It is amoral – or rather, its moral content is something we make together – socially, not biologically. For social engagement is itself a form of extroversion – an act of accommodation, a belief in the value of difference – in short, an anti-fundamentalist, anti-determinist view of the merits of navigating uncertainty together. But resilience can look Darwinian – both because the disadvantaged African Americans who respond to Brookings Institution surveys have already transcended significant challenges; and because the uneven playing field on which they’ve lived has long since silenced, ruined or completely destroyed those lacking survival networks. Such a story might even be corroborated by the unhappy fact that African Americans (and men in particular) don’t live as long as their counterparts in other groups; and, when they do live longer, they’re more likely to spend time in prison if what stress teaches them is antisocial.
Research on minority resilience must, therefore, be read differently. For it is social exchange – our very sociality, the ‘moral economy’ – that produces hope. Here, everything depends on social context. So, those who engage and exchange socially (by choice with families, or by default or of necessity in healthcare and service jobs) are better equipped to deal with the uncertainty of COVID-19 – and remain hopeful. It’s the engagement part – by choice or necessity – that nourishes hope. Every time we look one another in the eye and nod affirmatively in a social setting, we create an informal contract with another person. Dozens, sometimes hundreds, of times a day, we affirm our trust in others by this simple act, masked or not. We do this as an act of extroversion, hoping that we can survive and grow through creative engagement with what we learn on the edges of our community, and, if not, that our resilience can be nourished by those with whom we share common purpose.
Black people in America might be dying at more than three times the rate of whites in the pandemic, but they’re also less socially isolated via their higher representation in public-facing jobs in which they have to engage with others. Like the military advance guard, or those cells at the edge of the Petri dish colony, they’re more likely to learn more from extroverted risk, and to adjust their expectations accordingly, emerging as more resilient in themselves and less vulnerable to mistrusting others. That’s not only why deaths born of despair are less common among them, but why isolation itself is a major driver of COVID-19 fatigue for all of us.
It’s the engagement that matters. The so-called ‘healthy migrant effect’ offers a clear example. Migrant struggles are well documented, but migrants who enter into new communities often have health outcomes as good as, or even better than, those of native populations. Thus, second-generation Asian-American migrants are more likely to excel in secondary school, have much higher test scores, attend elite colleges and receive high-income professional degrees (in business, medicine and the like). The point is that it’s not only the extroverted risk of migrating that matters: it’s whether that risk results in a sense of meaningful exchange within a social context. It’s exchange itself, it turns out, that’s important.
What’s more, the more moral its content, the better the odds that such exchange will enhance resilience. Most of the time, risks don’t work out as expected. And when they don’t work out, we all need a parent’s couch to sleep on and a shared meal to increase our sense of belonging and hope. It’s what the French sociologist Marcel Mauss observed almost a century ago about the value of reciprocity in his essay The Gift (1925): that the giver gives a part of him or herself, and that the thing given implies a return. Which is to say that it’s the exchange relationship that makes an economy ‘moral’ in the first place.
By contrast, being alone undermines wellbeing. We know this from studying the impact of social isolation on mortality and morbidity. There’s lots of evidence here, and not just from studies of suicide: experiencing social isolation is a key reason why children who are wards of the state, for example, often elect to return to families that are dangerous for them. In fact, being socially engaged even trumps being equal to others when it comes to what we all need.
Again, evidence falls readily to hand. Some recent work on isolation and healthcare in China, carried out by members of the Cities Changing Diabetes global academic network that I lead, shows just how much of a risk factor social isolation is. Asked if equality of access to healthcare contributed directly to an inability to manage disease, about one-third of the several hundred people we interviewed said ‘yes, equality matters’. But asked how much the absence of family networks (a proxy for social isolation) affected their experience of illness, nearly everyone (93 per cent) said it did. And that’s in a country known to provide next-to-no care, let alone equal care, for economic migrants, who must go home to be treated. This finding is startling, because equality is the gold standard for engagement in any democracy. Yet even it fades in importance when the moral economy is measured.
For hope to proliferate, we need much more than endurance in the heroic, Darwinian sense
The same holds true of refugees from violence. In another project (one in which I’ve been personally involved), funded by the University of Applied Sciences in Bochum, Germany, we systematically documented the health vulnerabilities of recent migrants. Asked whether they were receiving good healthcare, Syrian refugees resettled in communities often answered that they were receiving excellent care, even though German-born citizens publicly stated that those migrants were getting less. That’s not just because welfare in Germany looks pretty good when compared with Aleppo. It’s because extroverted hope, when paired with the altruism it generates socially, mediates a person’s ability to believe in the future – even if that hoped-for future is still somewhere far in the distance.
There’s an important conclusion here: equality is only a first step towards alleviating human suffering and promoting wellbeing within a moral economy. The bigger part concerns how people learn to hope about more than getting through the day. To put it another way, being hopeful requires a belief in the future, a long-term view.
But being hopeful also requires more than that. It requires a sense of deep time and an enduring willingness – a desire – to engage. For hope to proliferate, we need much more than endurance in the heroic, Darwinian sense. We need a willingness to accept the natural place of everyday uncertainty, and we need diversity – even redundancy – to make that possible. The idea isn’t hard to grasp. The American inventor Thomas Edison once said that, in order to create, inventors need ‘a good imagination and a pile of junk’. The implication is that the hope required to convert junk into something useful sustains your extended contemplation of a pile of rubbish (what looks irrelevant now) over the deep time required to reshape it. But there’s another lesson: if you eliminate (recycle) what in the moment seems redundant or useless, without giving it a fair chance at invention, you also eliminate the possibility of making something new. Growth depends on merging two unlike things in the interest of making something greater.
Redundancy and diversity form the basis of every moral economy, which is why neoliberal economies – those that take what look like redundancies and eliminate them in the interest of ‘efficiency’ – fail miserably in assisting population wellbeing. I have yet to see, for example, how profit manages itself in places where state welfare is almost entirely absent (eg, Nigeria). Neoliberalism succeeds only when it emerges within otherwise generous societies that have welfare stockpiles that can be selfishly mined. On that point, Ayn-Rand-style economics fails, and will forever fail, by favouring self-interest and efficiency over diversity, generosity and altruism. Observe what short-term self-interest has done to challenged economies, and a picture of what my fellow anthropologist Jonathan Benthall in 1991 called ‘market fundamentalism’ is easily painted.
The social parallels here almost need no stating: what seems irrelevant to any one of us today, including the peculiar views of others, might in the end provide the very thing necessary to make us resilient to a future challenge – just as hope in the future mediates the uncertainties of COVID-19 through social engagement.