
Collaborators in creation

Our world is a system, in which physical and social technologies co-evolve. How can we shape a process we don’t control?

by Doyne Farmer, Fotini Markopoulou, Eric Beinhocker & Steen Rasmussen

An aerial view shows a typically busy Wuhan, in China’s central Hubei province, deserted amid the deadly coronavirus outbreak that originated in the city. 27 January 2020. Photo by Hector Retamel/AFP/Getty

This is a disorienting time. Disagreements are deep, factions stubborn, the common reality crumbling. Technology is changing who we are and the society we live in at a blinding pace. How can we make sense of these changes? How can we forge new tools to guide our future? What is our new identity in this changing world?

Social upheavals caused by new technologies have occurred throughout history. When the Spaniards arrived in the New World in 1492, some of their horses escaped and headed north. In the Great Plains, Native Americans began to tame the expanding horse population. Horses made it easier to hunt buffalo, and buffalo hunting required a new way of life. Before horses, the Cheyenne farmed and lived in earth lodges in large, permanent villages organised into matrilineal clans. After they adopted horses, they became nomadic hunters, living in small, isolated family groups in the winter and huge encampments in the summer. The shift to summer encampments led to police societies, groups of unrelated men who kept order, and enormous Sun Dance festivals. Older forms of social order withered away. The culture of the Great Plains tribes also changed: horses are easier to steal than land or grain, and so the tribes became warlike, developing a culture of honour that valued physical toughness and bravery.

The domesticated horse is a technology, just like engines, trains and cars. This is a common story: who we are and how we live drives the technologies we develop, and our technologies change who we are and how we live. Opposable thumbs originally evolved for gripping tree branches but were also useful in making tools, and tool use then changed the shape of our thumbs, enabling more nimble hands to make new and better tools. Socially evolved brains enabled that knowledge to be transmitted and built upon, and our tools and knowledge enabled new ways of living – and so on, over a multimillion-year journey. This co-evolutionary dance between technologies, physiology, culture and institutions has been going on for millennia.

Cultural institutions are also a kind of technology – a social technology. Just as physical technologies – agriculture, the wheel or computers – are tools for transforming matter, energy or information in pursuit of our goals, social technologies are tools for organising people in pursuit of our goals. Laws, moral values and money are social technologies, as are ways of organising an army, a religion, a government or a retail business.

While we are fascinated and sometimes frightened by the pace of evolution of physical technologies, we experience the evolution of social technologies differently. Our values, laws and political organisations define and shape our identities. We often regard those who use different social technologies – people from different cultures, regions, nations, religions or those with different values and beliefs – as ‘others’. When social technologies change too quickly, we experience a loss of identity, a collective confusion about who we are and how we distinguish ourselves from others. But when social technologies change too slowly, this can create tensions too – for example, when political institutions fail to keep pace with wider changes in society.

Physical and social technologies co-evolve all the time, pushing and pulling on each other in both directions. They are so entangled that it can be hard to separate them.

What drives technological change? In many popular narratives, invention is an act performed by heroes such as Thomas Edison and Tim Berners-Lee. In reality, technological change comes about through an incremental process that involves a great deal of trial and error, and networks of people working in ecosystems of innovation. Technological change is an evolutionary process, much as biological change is.

Both physical and social technologies follow evolutionary processes and co-evolve together as physical change spurs social change, which spurs physical change, and so on in an infinite loop. Pierre Teilhard de Chardin (1881-1955), a French philosopher and Jesuit priest, was an early visionary who realised that the theory of evolution applies equally well to physical and social technologies. Writing in the 1930s, he saw the future as shaped by the evolution of three interacting spheres: the biosphere, the physical technology sphere, and the social technology sphere. Teilhard called this combination ‘the noosphere’. He stressed that each of these spheres evolves under similar rules, and that in the future they will interact more deeply, so that eventually it will become difficult to separate them.

In biological evolution, variation arises through random genetic mutations, which natural selection then filters. With physical and social technologies, variation is a product of the human imagination and the desire to improve things. We constantly tinker with our physical and social technologies, trying to make them better. When the smartphone was invented, it was a novel combination of existing computer, communications, GPS, display, sensor, software and other technologies. Likewise, when the modern joint-stock company was invented in the mid-19th century, it was a novel combination of laws, accounting conventions, property rights, business practices and so on. As in biology, the modularity of physical and social technology design means there is an infinite unfolding space of possible designs. Each generation can build on what came before. As the economist W Brian Arthur has noted, occasionally physical principles are discovered or harnessed (eg, the use of fire, electricity, the laser) that introduce new modules or building blocks, opening up new spaces of possibility. A similar phenomenon occurs in social technologies. Many social technology innovations find new ways to harness regularities in human behaviour; for example, markets harness self-interest, while political parties harness the desire for identity and for communities of shared values.

We can begin to manipulate our own genomes as we are already manipulating the genomes of other species

Even though variation of physical and social technologies is intentional, and thus directed by human agency (unlike the random variation of biology), it is nonetheless Darwinian. It has to be. That’s because we can’t know which physical and social technologies will succeed in the future when we create them. Imagine you see a friend chipping away at some stone, making an arrowhead. You look at it and say: ‘I can do better.’ Perhaps by making it a different shape, pointier, sharper, bigger, smaller or using a different stone. You then make your variant. You have a hypothesis about arrowhead improvement, but until you actually try hunting with it, you can’t really know if it’s better than your friend’s design. It will likely take a whole population of hunters over a long period of time to determine that, because the answer to ‘What is a better arrowhead?’ depends on too many variables – aerodynamics, the game being hunted, arrow and bow design, the hunter’s skills – to be figured out purely in anyone’s head. There is no ‘optimal’ arrowhead design, there are merely better and worse ones given the environment, materials and technologies available at the time. All of those variables are changing over time, and the only feasible course is to make variants and then try them out, and let selection from evolutionary competition take its course. If one tribe is more successful with a particular arrowhead design, that design can spread as the tribe increases its population, as other tribes see that success and imitate it, as that tribe conquers others in war, or as successful arrowheads get adopted through trade.

The same process of goal-directed variation, feedback from the environment, selection and replication happens with social technologies. And as the example of the Great Plains tribes showed, physical and social technologies co-evolve together. The history of both physical and social technologies is thus a history of intentional (and sometimes accidental) variation, and then an evolutionary competition that selects and replicates designs that are relatively more ‘fit’ than their competitors.
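To make the selection logic concrete, here is a minimal sketch (in Python) of the variation-and-selection loop just described. Everything in it is an illustrative assumption rather than anything the arrowhead makers, or the authors, actually compute: a 'design' is a single number, its payoff is a noisy function of a slowly drifting environment, and each generation copies and tinkers with the designs that tested best.

```python
import random

def field_test(design, environment):
    """Noisy payoff of a design: nobody can compute this in their head;
    it can only be discovered by trying the design out in practice."""
    return -abs(design - environment) + random.gauss(0, 0.5)

def evolve(generations=200, population_size=30):
    # Start with a spread of guesses about what a good design might be.
    population = [random.uniform(0, 10) for _ in range(population_size)]
    for generation in range(generations):
        environment = 5 + 2 * generation / generations  # conditions slowly change
        # Feedback from the environment: every variant gets field-tested.
        ranked = sorted(population, key=lambda d: field_test(d, environment), reverse=True)
        survivors = ranked[: population_size // 2]  # selection keeps what worked
        # Goal-directed variation: copy a successful design and tinker with it.
        population = survivors + [d + random.gauss(0, 0.3) for d in survivors]
    return population

if __name__ == "__main__":
    designs = evolve()
    print(f"surviving designs cluster near the moving optimum: mean ~ {sum(designs) / len(designs):.2f}")
```

No single agent in this toy loop knows the 'best' design; it emerges from testing, copying and tinkering as the environment shifts.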

The timescale of biological evolution is much slower than that of physical and social technologies – at least until now. New physical technologies make it possible to manipulate our genome, blurring the boundary between physical technologies and biology. For example, CRISPR offers the ability to directly edit the germ line of our genome, and gives us the capacity to literally change our human hardware. This would make biological variation intentional and directed in the same sense as societal and technological evolution. We can begin to manipulate our own genomes as we are already manipulating the genomes of other species (millennia of selective breeding of animals by humans have already dramatically changed the biosphere). Even without altering genes, brain implants and neurotechnology can restore missing senses and limbs, and augment brain function, enhancing who we are and how we feel and perform. Furthermore, we are on the brink of being able to create life from nonliving materials and thus enable the design and evolution of novel living technologies.

With all three spheres of the noosphere evolving rapidly and in tandem, we are likely to be the first species (at least on Earth) to design our own evolutionary progeny, whether composed of silicon or carbon – or, more likely, a combination of both. The result might be frightening, but it might also be beautiful. At present, however, we have more pressing problems. For now, information technology is the principal force driving rapid evolutionary change in both our physical and social technologies.

Rapid improvements in physical technologies, leading to computing and the internet, precipitated the transition of our society into the Information Age. These changes dramatically enhance our ability to communicate, coordinate and control, which are the underpinning drivers of both our physical and social technologies. The same innovations are now blurring the boundaries between social and physical technologies, what it means to be human and what it means to be machine.

Revolutions in physical technologies involve major changes in how we use matter, energy or information. The Stone, Bronze and Iron Ages were revolutions in our use of materials. The Agricultural and Industrial Revolutions were, in essence, energy revolutions. Each of these revolutions was transformative in its own way, but information technology revolutions arguably have a more direct impact on our social technologies than revolutions in matter or energy. This is because our social orders are ultimately products of human imagination; they are 'imagined orders', as the historian Yuval Noah Harari calls them in Sapiens: A Brief History of Humankind (2011). Social orders are built on ideas, on knowledge and information, and depend on our abilities to store, process and transmit that information. The evolution of language and, tens of thousands of years later, the development of writing were transformative information revolutions that shaped social orders in profound and unpredictable ways. Who could have predicted that Johannes Gutenberg’s printing press would catalyse the Protestant Reformation, the invention of science, the Enlightenment, the creation of mass culture, and the development of democratic politics? Information revolutions change our cognition, our emotions and psychology, our moral values, our identities, how we interact with each other, and how we organise our societies.

Today, we are living through another major Information Revolution. Computers amplify our uniquely human abilities, making it easier to solve problems. The web and search engines extend our knowledge and memory almost without limit, and make it easy and cheap for anyone to draw on the collective knowledge of the whole world. The internet allows us to communicate instantly with almost anyone, anytime. Connected sensing devices give us eyes and ears everywhere, making us all but omnipresent and omniscient. Cloud computing puts vast quantities of computing power at everyone’s fingertips.

This Information Revolution brings with it a fundamental change to our relationship with tools. Among animals, Homo sapiens has unique capabilities to use information to build tools and alter its environment. However, our species is now losing this monopoly as AI and machine-learning algorithms are starting to act as decision-makers, for example in healthcare, job recruitment, stock trading and driving vehicles. Algorithms are the decision-makers when Facebook chooses what information to feed us, changing our opinions and shaping our governments. Robots help to perform surgery and make our automobiles. We are increasingly sharing our spot as the apex information-processing and tool-building species on our planet.

Teilhard was a religious mystic, who imagined that the noosphere is evolving toward an ‘Omega Point’, where all will be in harmony. This seems unlikely. Social and physical technologies don’t evolve with the purpose of making us happy or promoting human harmony – rather, they are selected based on the evolutionary imperative of reproduction. Darwin’s key insight was that things that are good at reproducing outcompete things that are not, and thus become dominant in a population. Technologies might not (yet) reproduce themselves – they still need our help – but the net effect is the same.

Technologies propagate themselves in any way they can. They can become widespread because they make people happy, or simply because they can (computer viruses are a simple but prevalent example). Technologies also scale by concentrating power, harnessing physical force or exploiting people’s weaknesses and addictions. Technology is value-agnostic – one can use a pencil to write a beautiful poem or to poke someone’s eye out. The unique information-processing capabilities of Homo sapiens allowed us to alter and dominate our planet to an unprecedented extent, but this power has overrun Earth’s carrying capacity, tipping us into a sixth great mass extinction event. At present, our information technologies are not saving us from that future. They are merely enabling us to reach the precipice more quickly.

There is a natural human drive to solve problems and improve our circumstances. We create new technologies to solve problems and, because human knowledge accumulates, history can be seen as an accumulation of new and better solutions to human problems over time. But solving problems doesn’t necessarily make us happier. One reason is a psychological phenomenon known as the ‘hedonic treadmill’ – initially, a new solution to a problem might make us happier, but after some time the effect wears off. If you live in a hot place and get air conditioning, it will likely make you happier at first. Then over time you get used to it, expect it, look for happiness in other things, and notice it only if it breaks. In addition, solving one problem creates other problems. Air conditioning solves your problem of being hot today but contributes to climate change, which will make you even hotter in the future.

When enabled by social media, our innate capacity for moral outrage leads to dysfunctional polarisation

New information technologies have changed society’s scale, allowing us to interact with the whole world. This has provided better solutions to numerous problems and also created a host of new problems, a dynamic that often works in unexpected ways. Smartphones are a veritable Swiss army knife of solutions to a variety of problems, from communicating to getting information, finding directions and entertaining oneself. When they were first invented, few would have predicted that they would also contribute to social isolation or undermine democracy. Social media transformed our social and political selves, altered what it means to be a friend, and changed the way we elect leaders. Facebook’s mission is to ‘give people the power to build community and bring the world closer together’. It has certainly connected us, but with unintended consequences. This is no surprise – Facebook depends on revenue from advertising, which is determined by the number of users, screen engagement and access to our private data. This is what drives its algorithms – not our happiness. In fact, studies show that using Facebook has the opposite effect – it is an addictive drug that grabs our attention while making us feel inadequate. Likewise, Twitter’s mission is ‘to give everyone the power to create and share ideas and information instantly, without barriers’. But its speed also discourages deliberation and sorts people into self-reinforcing information echo chambers.

Global-scale interaction clashes with human biology. We evolved as members of small tribes, in hunter-gatherer societies of fewer than a thousand people. Our methods of communicating and self-organising into social groups evolved for interactions on this scale. We smile and respond to smiles, and we are experts at reading each other’s facial expressions. We follow social norms and, when others don’t follow those norms, we exert pressure on them to conform by being morally outraged at their transgressions. We are motivated to show our moral outrage because it signals our own virtue to the tribe, which raises our value in it. These mechanisms are beneficial for tribal social cohesion, and encourage cooperation in a small tribe. When enabled by social media that reaches billions of people, our innate capacity for moral outrage leads to dysfunctional polarisation. With the face-to-face experience removed, feeling the pain of the accused no longer checks the aggression of our outrage. The large audience of our social-media outrage amplifies our psychological motivation to signal our own moral value. The result, online moral outrage, has had negative consequences for social cohesion and politics. A biological trait that was beneficial in one situation became maladaptive in another.

Economic and social dominance enables the reproduction of technologies, and information technologies reproduce more easily and faster than previous technologies because of their dramatic increasing returns to scale. Copying a bit is nearly free, so the costs of creating a platform for 1 billion people are not much higher than those of creating the same platform for 1,000. There are also strong network effects – who wants to be on a social media platform by themselves, or risk buying goods online with no reviews?
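A back-of-the-envelope sketch shows why these two forces favour giants. The numbers below are purely illustrative assumptions (a large fixed cost, a near-zero marginal cost per user, and a Metcalfe-style value that grows with the number of possible user pairs), not figures for any real platform:

```python
# Illustrative assumptions only, not data about any real platform.
FIXED_COST = 100_000_000        # building and running the platform
MARGINAL_COST_PER_USER = 0.01   # copying bits is nearly free
VALUE_PER_CONNECTION = 1e-6     # value of each potential user-to-user link

for users in (1_000, 1_000_000, 1_000_000_000):
    cost_per_user = (FIXED_COST + MARGINAL_COST_PER_USER * users) / users
    value_per_user = VALUE_PER_CONNECTION * (users - 1)  # each user can reach every other
    print(f"{users:>13,} users: cost per user ~ ${cost_per_user:,.2f}, "
          f"network value per user ~ ${value_per_user:,.2f}")
```

Under these assumptions, the cost per user collapses from about $100,000 to roughly 11 cents as the platform grows from a thousand to a billion users, while the value of joining rises with every new member – which is why scale begets more scale.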

These effects concentrate ever more power in the hands of Information-Age titans, who are also the vectors through which new physical and social technologies propagate. The corporations of the Information Age are more powerful than the monopolies of the Industrial Age. The inherent advantages of Facebook, Google, Amazon, Tencent and Alibaba are enormous. They observe and shape our social relations and our political choices, and they control our access to information and what products we choose to buy. In the mid-19th century, Friedrich Engels observed how the Industrial Revolution dehumanised people, turning them into cogs in a vast profit-seeking machine. The Information Revolution has now turned us from cogs into bits.

Economics drives the selection process – hunting buffalo on horseback had a high payoff for the Native Americans on the Great Plains and encouraged them to rapidly adopt horses from the Spaniards. The economy is our collective metabolism. Just as the metabolism of a biological organism breaks down food to provide energy and building materials, the economy is the process through which we turn resources from our environment into goods and services that we need (or don’t). And just as the metabolism of a higher organism is a complex assembly of cells, organs and processes, the economy is a complex assembly of social technologies that help us coordinate our efforts – if we all had to act independently, we would starve.

As the world becomes more complex, we increasingly rely on science to help us make sense of it and organise our actions. Science plays an essential role in forming our worldview and informing our decision-making. We think of science as an objective process that inexorably leads to better knowledge and a deeper understanding, but science is a social technology that doesn’t always evolve in a straight line. The discipline of economics is a case in point. Financial crises, climate change and inequality all exemplify how economics has evolved in unproductive directions and is not providing the tools we need to understand what is happening and how to act for our survival and wellbeing. What went wrong with economics?

Neoclassical economics, which dominated the discipline for most of the 20th century, posits that we are economic agents who make decisions that selfishly maximise our own utility. Utility is a way to capture our goals, and in neoclassical economics our goals are boiled down to a simple assumption that we maximise our individual pleasure from consumption. The second core assumption is that, if we all pursue our individual pleasure, we will, through the magic of markets, collectively reach an equilibrium in which no one can gain more pleasure without reducing someone else’s.

It’s one thing to design and predict the behaviour of a laptop; it’s another to predict the behaviour of the internet

It is true that people have goals and make decisions in order to achieve them. The above assumptions are not entirely wrong but they are only part of the story. Much research shows that we have multiple, sometimes conflicting goals, and pleasure from consumption is typically only one of many goals, and often not the most important or motivating. We have many priorities other than material self-interest, including social and psychological needs such as freedom, love, dignity, meaning and social connections. People are inconsistent in how they make decisions and pursue goals. We often make decisions with imperfect or incorrect information. Corporations are probably better utility maximisers than individuals, but their utility has little to do with making us happy. Corporate profits might come at the expense of collective utility and social wellbeing. Corporations are social technologies acting for their own purposes, which sometimes coincide with the interests of society, and sometimes not.

Neoclassical economics, with its emphasis on utility maximisation and equilibrium, is an impediment to understanding key aspects of social behaviour. Its mechanistic approach was developed during the Industrial Revolution and cannot guide us through the evolutionary upheavals of the Information Age. Fortunately, there are new ways to understand the modern economy.

Complexity economics, an interdisciplinary science, grounds our understanding of the economy in behavioural facts and data, and in a view of the economy as a system, not just a collection of individuals. Behavioural psychology tells us that people make decisions based on simple rules of thumb and myopic reasoning, and that their goals are poorly described by utility. Modelling complex humans requires simplification but, using these data, we can make our simplifications much more realistic. Likewise, ‘Big Data’, agent-based simulations, machine learning and AI give us powerful tools to model the economy as the evolving, networked, disequilibrium system it really is.
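As a flavour of what ‘agent-based’ means here, the sketch below is a deliberately minimal toy market, not any published model: a few hundred traders follow a myopic rule of thumb (react to the last price move), and the price that emerges from their interactions wanders rather than settling into a neat equilibrium. All parameters are illustrative assumptions.

```python
import random

class Trader:
    """An agent with a simple rule of thumb, not a utility maximiser."""
    def __init__(self):
        # Positive weight: trend-follower; negative weight: contrarian.
        self.trend_weight = random.uniform(-1.0, 1.0)

    def demand(self, recent_return):
        # Myopic rule: react to the most recent price move, plus idiosyncratic noise.
        return self.trend_weight * recent_return + random.gauss(0, 0.01)

def simulate(num_traders=500, steps=1000):
    traders = [Trader() for _ in range(num_traders)]
    prices = [1.0, 1.0]
    for _ in range(steps):
        recent_return = (prices[-1] - prices[-2]) / prices[-2]
        # The market price emerges from the aggregate of individual rules of thumb.
        excess_demand = sum(t.demand(recent_return) for t in traders) / num_traders
        prices.append(prices[-1] * (1 + 0.5 * excess_demand))
    return prices

if __name__ == "__main__":
    prices = simulate()
    print(f"final price: {prices[-1]:.3f} (the series wanders; it does not settle at an equilibrium)")
```

Even a toy like this has no representative agent and no closed-form equilibrium; its behaviour has to be discovered by simulation, which is the point.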

Just as a scientific understanding of biology revolutionised medicine, a scientific understanding of the economy as a complex adaptive system has the potential to revolutionise policymaking. By their very nature, complex systems are not fully under anyone’s control. It is one thing to design and predict the behaviour of a laptop; it is another to design and predict the behaviour of the internet. Complex systems have emergent properties that are caused by the interactions of their components yet are qualitatively different from those of the components themselves. As in gardening, in complex-systems engineering we have only limited control over the outcome. To anticipate where policies lead us and how to mobilise action to take us where we need to go, we need to understand the interaction between technology, economics, institutions, politics, psychology and sociology. This requires breaking down disciplinary silos, and the siloed world of academia is not rising to the challenge of giving us the guidance we need.

The Information Age poses challenges to governments and institutions unlike any before. Can existing modes of governance do the job? Can regulatory structures adapt for our collective benefit?

New narratives and models of social organisation are emerging. One of the most striking is the Chinese model of the technology-powered authoritarian superorganism. WeChat, China’s ‘everything app’, is Facebook, banking, Uber, eBay and food delivery rolled into one, with more than 1 billion monthly active users. Subsidised by the Chinese government, it lets officials monitor and censor users, and everyone knows this. It is now about to become China’s electronic ID system. China’s facial recognition database already includes most of its citizens, making it possible for the government to track almost everything its citizens do in Orwellian detail. In the words of the Chinese government, such oversight is needed because ‘trust-keeping is insufficiently rewarded and the costs of breaking trust tend to be low’. So the Chinese government, in cooperation with technology companies, has developed an early version of a social-credit system that monitors good and trustworthy behaviour and automatically rewards or penalises it accordingly.

Westerners recoil in horror at the prospect of a social-credit system run by an authoritarian government. Many Chinese, in contrast, so far seem comfortable with (or at least accepting of) having details of their lives monitored and controlled, as long as economic prosperity increases. One can make an analogy with the evolutionary transition from single-celled to multi-celled organisms c700 million years ago. Multi-celled organisms evolved when single-celled organisms gave up their autonomy in favour of the ‘economic benefits’ of multicellularity. Maybe the Chinese are at the vanguard of a similar transition for human societies? A hierarchically managed superorganism has obvious advantages: it can bring enormous cooperative capabilities to bear on problems in a focused manner. If the whole world followed the Chinese model, perhaps it would be easier to coordinate rapidly and make the changes needed to address the climate emergency. Of course, the loss of individual freedom and autonomy is deeply problematic, as is the potential for ‘cancerous’ actors or ideas to gain control and drive the superorganism to some disastrous end. It is also not hard to imagine these technologies, in particular state-run AIs, enabling a kind of permanent totalitarianism from which humans can never escape – a dystopian, Orwellian Omega Point in contrast to Teilhard’s hopeful vision.

Ever-improving physical information technologies can create social technologies that are a form of superorganism, a community of human and artificial intelligences. Our new physical information technologies allow collective action on an unprecedented scale. Together, these possibilities create enormous opportunities for social technologies to evolve, and exert evolutionary pressure for rapid change. From this perspective, the tensions in the current sociopolitical situation are a natural outcome of a global evolutionary experiment to find the best way to manage an evolving superorganism.

Liberal democracy evolved in the pre-Information Age. Until recently, it seemed on track to become the dominant mode of governance, but it is now being challenged. New hybrid forms of democracy and authoritarianism are emerging, with varying degrees of monitoring and control of individual behaviour. This raises the key question: can democracy adapt to function better in the new environment of the hyperconnected Information Age or will it be replaced in the evolutionary competition of social technologies?

It is no accident that the explosion of innovation emerged largely in open, free market societies

This question is a challenge for our era: suppose we take the emergence of a superorganism for granted. Is the Chinese authoritarian model the only possible way to manage it? Or are there more democratic ways to do this? How can we use information technologies to enhance democracy rather than drive it into polarisation and alienation? Can we use our understanding of social evolution and complex systems to make democracy more responsive, diminishing the influence of vested interests and enforcing fairness?

One of the great strengths of open, democratic societies is that they enable and encourage the exploration of new ideas. Free markets are highly successful at creating evolutionary collaborations and competitions among ideas and technologies, and at rapidly scaling up those that are economically successful. It is no accident that the explosion of innovation in physical technologies over the past two centuries emerged largely in open, free market societies. The social technologies of those societies accelerated the evolution of physical technologies. The process has fed on itself – each new technology creates the possibility of other new technologies, and the space of possibilities for innovation opens up in a combinatorial explosion, driving the hyperexponential acceleration of physical technologies.
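A toy calculation gives a sense of that combinatorial explosion. Treating each existing technology as a building block and each new technology as some combination of blocks (a deliberately crude assumption), the number of candidate combinations grows far faster than the number of blocks themselves:

```python
from math import comb

# Crude illustration: count the ways of combining existing building blocks.
for blocks in (10, 20, 40, 80):
    pairs = comb(blocks, 2)
    triples = comb(blocks, 3)
    all_combos = 2 ** blocks - blocks - 1  # every combination of two or more blocks
    print(f"{blocks:>3} building blocks: {pairs:>5} pairs, {triples:>6} triples, "
          f"{all_combos:.2e} combinations in all")
```

Most of those combinations are useless, of course, but the point stands: each new block multiplies the space that evolutionary search can explore.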

But the social technologies have not kept up – the impact of social media and ‘fake news’ on politics is but one recent example. Our social technologies are rapidly becoming less well-adapted to the reality of our physical technologies. As the gap between our physical and social technologies widens, the strains on society grow, and the space for anger, populism and authoritarian solutions increases. This leaves us with only two solutions: slow the pace of physical technology evolution, which is probably impossible; or break through the gridlock and engage in a major bout of institutional, regulatory and even constitutional reform to get the democratic system evolving again. Without overwhelming political pressure to do that, we might find that the gap widens to a terminal crisis for democracy.

Possible solutions exist. Immediate practical steps include breaking up the giant platform monopolies, regulating them as the media companies they are, and creating individual digital property rights. Such changes would force these companies to shift their business models away from profiting from moral outrage, hacked psychology, fake news, addictive behaviours, harvested personal data and environmentally damaging consumption, and towards more socially constructive uses. Likewise, we can adopt new models for democratic engagement, such as ‘citizen juries’ to inform important policy debates. We can implement democratic governance of our increasingly integrated critical infrastructures. We can build smart transportation infrastructure to mitigate worker displacement due to automation. We can reorganise our educational system to foster flexibility. We can develop a global consciousness of our key challenges, such as climate, using the same mechanisms that were once used so successfully to develop nationalism. We can redefine the measure of economic success by replacing GDP with new metrics that focus on solving human problems and promoting human wellbeing.

There is no Omega Point – the current noosphere has not reached the pinnacle of what is possible. It never will. Evolution is an open-ended process of continuous and infinite change. There is no optimum, no resting point, no ultimate direction or goal. If we are to create a socioeconomic-technological system that serves the broad interests of humanity and the other species we share our planet with, it will only be because we have sufficiently understood the complex system we live in to harness the power of evolution and shape it in that direction.

We need to understand that, while we can shape evolution to give us ‘better’, there is no utopia. All we can hope for is what the philosopher Karl Popper in 1945 called ‘piecemeal social engineering’ – we can help to steer the system in a positive direction, one evolutionary step at a time. There is strong evidence that such directed changes have enabled societies to become more harmonious and less violent. But, by its very nature, evolutionary selection is a competitive process, and it is unlikely that conflict will disappear. The emergence of cyanobacteria some 3 billion years ago introduced oxygen into the atmosphere, killing most of the rest of life on the planet but enabling all modern forms of life. And multicellular organisms, and ultimately humans, were made possible by collaborations between two kinds of microorganism, bacteria and archaea. The balance and interplay between competition and collaboration form an essential part of evolution, and its outcomes are often unpredictable – ‘bad’ things often lead to ‘good’ things, and vice versa.

One thing we can count on is that we are not going to stop the noosphere from evolving. Vast changes lie ahead of us. The evolutionary view gives us insight into what causes our problems and sets us on track to address them. The more we understand the forces driving these changes, the more we can agree on and act effectively to shape a future that resonates with our personal and collective vision of the world we want for our children.

If the rate of change of physical technologies continues to accelerate, can our social technologies keep up? Will we be left in an eternal state of extreme disequilibrium, with dysfunctional social institutions that are out of touch with the rapidly changing physical substrate of the information world? Are we heading toward becoming part of an authoritarian, hierarchical superorganism? Can we do anything to stop this from happening?

Yes, we can. Social evolution is a process that we all shape, even if we don’t control it. The world is a complex system, and the evolution of society and technology is beyond any individual’s control. What we have been missing is an understanding of how our individual actions, and our interactions, create the emergent macro patterns and evolutionary trajectories of our society. Teilhard presented his vision in the 1950s. The scientific agenda around understanding society as a complex, evolving system has developed only in recent decades. Its insights are slowly making their way into the mainstream, particularly in economics. We are in a race – a potentially existential race – between our ability to understand, shape and direct the complex and evolving social, economic and technological system we live in, and the potential of that system to destroy freedom, democracy and even human life on Earth. As Teilhard said: ‘Our duty, as men and women, is to proceed as if limits to our ability did not exist. We are collaborators in creation.’

This article grew out of two workshops held at the Santa Fe Institute in August 2017 and 2018. The authors share their thoughts on the evolution of technology with filmmaker Pernille Rose Grønkjær here.
We would like to thank the other participants; for more information see https://www.eudemonicproject.org.