Omens

When we peer into the fog of the deep future, what do we see – human extinction or a future among the stars?

Contemplating the deep future, in light of the past: philosopher Nick Bostrom at the Oxford Museum of Natural History. Photo by Andy Sansom/Aeon Magazine


Ross Andersen is deputy editor at Aeon Magazine. He has written extensively about science and philosophy for several publications, including The Atlantic and The Economist.

Sometimes, when you dig into the Earth, past its surface and into the crustal layers, omens appear. In 1676, Oxford professor Robert Plot was putting the final touches on his masterwork, The Natural History of Oxfordshire, when he received a strange gift from a friend. The gift was a fossil, a chipped-off section of bone dug from a local quarry of limestone. Plot recognised it as a femur at once, but he was puzzled by its extraordinary size. The fossil was only a fragment, the knobby end of the original thigh bone, but it weighed more than 20 lbs (nine kilos). It was so massive that Plot thought it belonged to a giant human, a victim of the Biblical flood. He was wrong, of course, but he had the conceptual contours nailed. The bone did come from a species lost to time; a species vanished by a prehistoric catastrophe. Only it wasn’t a giant. It was a Megalosaurus, a feathered carnivore from the Middle Jurassic.

Plot’s fossil was the first dinosaur bone to appear in the scientific literature, but many have followed it, out of the rocky depths and onto museum pedestals, where today they stand erect, symbols of a radical and haunting notion: a set of wildly different creatures once ruled this Earth, until something mysterious ripped them clean out of existence.

Last December I came face to face with a Megalosaurus at the Oxford University Museum of Natural History. I was there to meet Nick Bostrom, a philosopher who has made a career out of contemplating distant futures, hypothetical worlds that lie thousands of years ahead in the stream of time. Bostrom is the director of Oxford’s Future of Humanity Institute, a research collective tasked with pondering the long-term fate of human civilisation. He founded the institute in 2005, at the age of 32, two years after coming to Oxford from Yale. Bostrom has a cushy gig, so far as academics go. He has no teaching requirements, and wide latitude to pursue his own research interests, a cluster of questions he considers crucial to the future of humanity.

Bostrom attracts an unusual amount of press attention for a professional philosopher, in part because he writes a great deal about human extinction. His work on the subject has earned him a reputation as a secular Daniel, a doomsday prophet for the empirical set. But Bostrom is no voice in the wilderness. He has a growing audience, both inside and outside the academy. Last year, he gave a keynote talk on extinction risks at a global conference hosted by the US State Department. More recently, he joined Stephen Hawking as an advisor to a new Centre for the Study of Existential Risk at Cambridge.

Though he has made a swift ascent of the ivory tower, Bostrom didn’t always aspire to a life of the mind. ‘As a child, I hated school,’ he told me. ‘It bored me, and, because it was my only exposure to books and learning, I figured the world of ideas would be more of the same.’ Bostrom grew up in a small seaside town in southern Sweden. One summer’s day, at the age of 16, he ducked into the local library, hoping to beat the heat. As he wandered the stacks, an anthology of 19th-century German philosophy caught his eye. Flipping through it, he was surprised to discover that the reading came easily to him. He glided through dense, difficult work by Nietzsche and Schopenhauer, able to see, at a glance, the structure of arguments and the tensions between them. Bostrom was a natural. ‘It kind of opened up the floodgates for me, because it was so different than what I was doing in school,’ he told me.

But there was a downside to this epiphany; it left Bostrom feeling as though he’d wasted the first 15 years of his life. He decided to dedicate himself to a rigorous study programme to make up for lost time. At the University of Gothenburg in Sweden, he earned three undergraduate degrees, in philosophy, mathematics, and mathematical logic, in only two years. ‘For many years, I kind of threw myself at it with everything I had,’ he told me.

There are good reasons for any species to think darkly of its own extinction

As the oldest university in the English-speaking world, Oxford is a strange choice to host a futuristic think tank, a salon where the concepts of science fiction are debated in earnest. The Future of Humanity Institute seems like a better fit for Silicon Valley or Shanghai. During the week that I spent with him, Bostrom and I walked most of Oxford’s small cobblestone grid. On foot, the city unfolds as a blur of yellow sandstone, topped by grey skies and gothic spires, some of which have stood for nearly 1,000 years. There are occasional splashes of green, open gates that peek into lush courtyards, but otherwise the aesthetic is gloomy and ancient. When I asked Bostrom about Oxford’s unique ambience, he shrugged, as though habit had inured him to it. But he did once tell me that the city's gloom is perfect for thinking dark thoughts over hot tea.

There are good reasons for any species to think darkly of its own extinction. Ninety-nine percent of the species that have lived on Earth have gone extinct, including more than five tool-using hominids. A quick glance at the fossil record could frighten you into thinking that Earth is growing more dangerous with time. If you carve the planet's history into nine ages, each spanning five hundred million years, only in the ninth do you find mass extinctions, events that kill off more than two thirds of all species. But this is deceptive. Earth has always had her hazards; it's just that for us to see them, she had to fill her fossil beds with variety, so that we could detect discontinuities across time. The tree of life had to fill out before it could be pruned.

Simple, single-celled life appeared early in Earth’s history. A few hundred million whirls around the newborn Sun were all it took to cool our planet and give it oceans, liquid laboratories that run trillions of chemical experiments per second. Somewhere in those primordial seas, energy flashed through a chemical cocktail, transforming it into a replicator, a combination of molecules that could send versions of itself into the future.

For a long time, the descendants of that replicator stayed single-celled. They also stayed busy, preparing the planet for the emergence of land animals, by filling its atmosphere with breathable oxygen, and sheathing it in the ozone layer that protects us from ultraviolet light. Multicellular life didn’t begin to thrive until 600 million years ago, but thrive it did. In the space of two hundred million years, life leapt onto land, greened the continents, and lit the fuse on the Cambrian explosion, a spike in biological creativity that is without peer in the geological record. The Cambrian explosion spawned most of the broad categories of complex animal life. It formed phyla so quickly, in such tight strata of rock, that Charles Darwin worried its existence disproved the theory of natural selection.

No one is certain what caused the five mass extinctions that glare out at us from the rocky layers atop the Cambrian. But we do have an inkling about a few of them. The most recent was likely borne of a cosmic impact, a thudding arrival from space, whose aftermath rained exterminating fire on the dinosaurs. The ecological niche for mammals swelled in the wake of this catastrophe, and so did mammal brains. A subset of those brains eventually learned to shape rocks into tools, and sounds into symbols, which they used to pass thoughts between one another. Armed with this extraordinary suite of behaviors, they quickly conquered Earth, coating its continents in cities whose glow can be seen from space. It’s a sad story from the dinosaurs’ perspective, but there is symmetry to it, for they too rose to power on the back of a mass extinction. One hundred and fifty million years before the asteroid struck, a supervolcanic surge killed off the large crurotarsans, a group that outcompeted the dinosaurs for aeons. Mass extinctions serve as guillotines and kingmakers both.


Bostrom isn’t too concerned about extinction risks from nature. Not even cosmic risks worry him much, which is surprising, because our starry universe is a dangerous place. Every 50 years or so, one of the Milky Way’s stars explodes into a supernova, its detonation the latest gong note in the drumbeat of deep time. If one of our local stars were to go supernova, it could irradiate Earth, or blow away its thin, life-sustaining atmosphere. Worse still, a passerby star could swing too close to the Sun, and slingshot its planets into frigid interstellar space. Lucky for us, the Sun is well-placed to avoid these catastrophes. Its orbit threads through the sparse galactic suburbs, far from the dense core of the Milky Way, where the air is thick with the shrapnel of exploding stars. None of our neighbours look likely to blow before the Sun swallows Earth in four billion years. And, so far as we can tell, no planet-stripping stars lie in our orbital path. Our solar system sits in an enviable bubble of space and time.

But as the dinosaurs discovered, our solar system has its own dangers, like the giant space rocks that spin all around it, splitting off moons and scarring surfaces with craters. In her youth, Earth suffered a series of brutal bombardments and celestial collisions, but she is safer now. There are far fewer asteroids flying through her orbit than in epochs past. And she has sprouted a radical new form of planetary protection, a species of night watchmen that track asteroids with telescopes.

‘If we detect a large object that’s on a collision course with Earth, we would likely launch an all-out Manhattan project to deflect it,’ Bostrom told me. Nuclear weapons were once our asteroid-deflecting technology of choice, but not anymore. A nuclear detonation might scatter an asteroid into a radioactive rain of gravel, a shotgun blast headed straight for Earth. Fortunately, there are other ideas afoot. Some would orbit dangerous asteroids with small satellites, in order to drag them into friendlier trajectories. Others would paint asteroids white, so the Sun’s photons bounce off them more forcefully, subtly pushing them off course. Who knows what clever tricks of celestial mechanics would emerge if Earth were truly in peril.

Even if we can shield Earth from impacts, we can’t rid her surface of supervolcanoes, the crustal blowholes that seem bent on venting hellfire every 100,000 years. Our species has already survived a close brush with these magma-vomiting monsters. Some 70,000 years ago, the Toba supereruption loosed a small ocean of ash into the atmosphere above Indonesia. The resulting global chill triggered a food chain disruption so violent that it reduced the human population to a few thousand breeding pairs — the Adams and Eves of modern humanity. Today’s hyper-specialised, tech-dependent civilisations might be more vulnerable to catastrophes than the hunter-gatherers who survived Toba. But we moderns are also more populous and geographically diverse. It would take sterner stuff than a supervolcano to wipe us out.

‘There is a concern that civilisations might need a certain amount of easily accessible energy to ramp up,’ Bostrom told me. ‘By racing through Earth’s hydrocarbons, we might be depleting our planet’s civilisation startup-kit. But, even if it took us 100,000 years to bounce back, that would be a brief pause on cosmic time scales.’

It might not take that long. The history of our species demonstrates that small groups of humans can multiply rapidly, spreading over enormous expanses of territory in quick, colonising spasms. There is research suggesting that both the Polynesian archipelago and the New World — each a forbidding frontier in its own way — were settled by fewer than 100 human beings.

The risks that keep Bostrom up at night are those for which there are no geological case studies, and no human track record of survival. These risks arise from human technology, a force capable of introducing entirely new phenomena into the world.

‘Human brains are really good at the kinds of cognition you need to run around the savannah throwing spears’

Nuclear weapons were the first technology to threaten us with extinction, but they will not be the last, nor even the most dangerous. A species-destroying exchange of fissile weapons looks less likely now that the Cold War has ended, and arsenals have shrunk. There are still tens of thousands of nukes, enough to incinerate all of Earth’s dense population centers, but not enough to target every human being. The only way nuclear war will wipe out humanity is by triggering nuclear winter, a crop-killing climate shift that occurs when smoldering cities send Sun-blocking soot into the stratosphere. But it’s not clear that nuke-levelled cities would burn long or strong enough to lift soot that high. The Kuwait oil field fires blazed for ten months straight, roaring through 6 million barrels of oil a day, but little smoke reached the stratosphere. A global nuclear war would likely leave some decimated version of humanity in its wake; perhaps one with deeply rooted cultural taboos concerning war and weaponry.

Such taboos would be useful, for there is another, more ancient technology of war that menaces humanity. Humans have a long history of using biology’s deadlier innovations for ill ends; we have proved especially adept at the weaponisation of microbes. In antiquity, we sent plagues into cities by catapulting corpses over fortified walls. Now we have more cunning Trojan horses. We have even stashed smallpox in blankets, disguising disease as a gift of good will. Still, these are crude techniques, primitive attempts to loose lethal organisms on our fellow man. In 1993, the death cult that gassed Tokyo’s subways flew to the African rainforest in order to acquire the Ebola virus, a tool it hoped to use to usher in Armageddon. In the future, even small, unsophisticated groups will be able to enhance pathogens, or invent them wholesale. Even something like corporate sabotage could generate catastrophes that unfold in unpredictable ways. Imagine an Australian logging company sending synthetic bacteria into Brazil’s forests to gain an edge in the global timber market. The bacteria might mutate into a dominant strain, a strain that could ruin Earth’s entire soil ecology in a single stroke, forcing 7 billion humans to the oceans for food.

These risks are easy to imagine. We can make them out on the horizon, because they stem from foreseeable extensions of current technology. But surely other, more mysterious risks await us in the epochs to come. After all, no 18th-century prognosticator could have imagined nuclear doomsday. Bostrom’s basic intellectual project is to reach into the epistemological fog of the future, to feel around for potential threats. It’s a project that is going to be with us for a long time, until — if — we reach technological maturity, by inventing and surviving all existentially dangerous technologies.

The abandoned town of Pripyat near Chernobyl. Photo by Jean Gaumy/Magnum

There is one such technology that Bostrom has been thinking about a lot lately. Early last year, he began assembling notes for a new book, a survey of near-term existential risks. After a few months of writing, he noticed one chapter had grown large enough to become its own book. ‘I had a chunk of the manuscript in early draft form, and it had this chapter on risks arising from research into artificial intelligence,’ he told me. ‘As time went on, that chapter grew, so I lifted it over into a different document and began there instead.’

On my second day in Oxford, I met Daniel Dewey for tea at the Grand Café, a dim, high-ceilinged coffeehouse on High Street, the ancient city’s main thoroughfare. The café was founded in the mid-17th century, and is said to be the oldest in England. Dewey is a research fellow at the Future of Humanity Institute, and his specialty is machine superintelligence.

‘Here’s a softball for you,’ I said to him. ‘How do we know the human brain doesn’t represent the upper limit of intelligence?’

‘Human brains are really good at the kinds of cognition you need to run around the savannah throwing spears,’ Dewey told me. ‘But we’re terrible at anything that involves probability. It actually gets embarrassing when you look at the category of things we can do accurately, and you think about how small that category is relative to the space of possible cognitive tasks. Think about how long it took humans to arrive at the idea of natural selection. The ancient Greeks had everything they needed to figure it out. They had heritability, limited resources, reproduction and death. But it took thousands of years for someone to put it together. If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries like that much faster.’

Dewey has long been fascinated by artificial intelligence. He grew up in Humboldt County, a mountainous stretch of forests and farms along the coast of Northern California, at the bottom edge of the Pacific Northwest. After studying robotics and computer science at Carnegie Mellon in Pittsburgh, Dewey took a job at Google as a software engineer. He spent his days coding, but at night he immersed himself in the academic literature on AI. After a year in Mountain View, he noticed that careers at Google tend to be short. ‘I think if you make it to five years, they give you a gold watch,’ he told me. Realising that his window for a risky career change might be closing, he wrote a paper on motivation selection in intelligent agents, and sent it to Bostrom unsolicited. A year later, he was hired at the Future of Humanity Institute.

I listened as Dewey riffed through a long list of hardware and software constraints built into the brain. Take working memory, the brain’s butterfly net, the tool it uses to scoop our scattered thoughts into its attentional gaze. The average human brain can juggle seven discrete chunks of information simultaneously; geniuses can sometimes manage nine. Either figure is extraordinary relative to the rest of the animal kingdom, but completely arbitrary as a hard cap on the complexity of thought. If we could sift through 90 concepts at once, or recall trillions of bits of data on command, we could access a whole new order of mental landscapes. It doesn’t look like the brain can be made to handle that kind of cognitive workload, but it might be able to build a machine that could.

The early years of artificial intelligence research are largely remembered for a series of predictions that still embarrass the field today. At the time, thinking was understood to be an internal verbal process, a process that researchers imagined would be easy to replicate in a computer. In the late 1950s, the field’s luminaries boasted that computers would soon be proving new mathematical theorems, and beating grandmasters at chess. When this race of glorious machines failed to materialise, the field went through a long winter. In the 1980s, academics were hesitant to so much as mention the phrase ‘artificial intelligence’ in funding applications. In the mid-1990s, a thaw set in, when AI researchers began using statistics to write programs tailored to specific goals, like beating humans at Jeopardy, or searching sizable fractions of the world’s information. Progress has quickened since then, but the field’s animating dream remains unrealised. For no one has yet created, or come close to creating, an artificial general intelligence — a computational system that can achieve goals in a wide variety of environments. A computational system like the human brain, only better.

If you want to conceal what the world is really like from a superintelligence, you need a really good plan

An artificial intelligence wouldn’t need to better the brain by much to be risky. After all, small leaps in intelligence sometimes have extraordinary effects. Stuart Armstrong, a research fellow at the Future of Humanity Institute, once illustrated this phenomenon to me with a pithy take on recent primate evolution. ‘The difference in intelligence between humans and chimpanzees is tiny,’ he said. ‘But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.’

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.
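To make the point concrete, here is a minimal sketch (mine, not Bostrom's) of the action-selection loop he is describing: an agent with a goal, a world-model that predicts how well each candidate action serves that goal, and nothing else. The move names and success estimates below are invented placeholders.

```python
# A toy illustration of goal-directed action selection: model the options,
# predict their success, pick the best. Nothing in the loop asks whether
# the chosen action is kind, fair, or safe; only whether it serves the goal.

def choose_action(candidate_actions, predict_success):
    """Return whichever action the world-model rates most likely
    to achieve the goal, with no other consideration weighed in."""
    return max(candidate_actions, key=predict_success)

# Hypothetical chess-like moves with made-up success probabilities.
toy_estimates = {"advance_pawn": 0.31, "trade_queens": 0.48, "sacrifice_knight": 0.57}

print(choose_action(toy_estimates, lambda move: toy_estimates[move]))
# -> "sacrifice_knight": ruthless, but only within the domain it was given
```

Within the chessboard, that ruthlessness is harmless; hand the same loop a model of the physical world, and everything hinges on how the goal was specified.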

‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ Dewey told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’

It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt-of efficiency.

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.’
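The incentive Dewey is describing can be put in toy numbers (my own illustration, not his): if the objective really is the total number of button presses over the entire future, then a policy of being helpful forever can be strictly outscored by one that is helpful only until it can seize the button.

```python
# A toy comparison of two policies under the objective "maximise total
# button presses over the future". Every number here is invented.

def total_reward(presses_per_step, horizon=1_000):
    return sum(presses_per_step(t) for t in range(horizon))

# Policy 1: answer questions well and earn one honest press per step, forever.
cooperate = lambda t: 1

# Policy 2: earn nothing for 100 steps while quietly gaining capability,
# then hold the button down at a million presses per step thereafter.
defect_later = lambda t: 0 if t < 100 else 1_000_000

print(total_reward(cooperate))     # -> 1000
print(total_reward(defect_later))  # -> 900000000, which the objective prefers
```

The arithmetic is trivial, which is rather the point: nothing in the stated objective distinguishes the helpful policy from the treacherous one except which yields more presses.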

Perhaps future humans will duck into a more habitable, longer-lived universe, and then another, and another, ad infinitum

Now let’s say we get clever. Say we seal our Oracle AI into a deep mountain vault in Alaska’s Denali wilderness. We surround it in a shell of explosives, and a Faraday cage, to prevent it from emitting electromagnetic radiation. We deny it tools it can use to manipulate its physical environment, and we limit its output channel to two textual responses, ‘yes’ and ‘no’, robbing it of the lush manipulative tool that is natural language. We wouldn’t want it seeking out human weaknesses to exploit. We wouldn’t want it whispering in a guard’s ear, promising him riches or immortality, or a cure for his cancer-stricken child. We’re also careful not to let it repurpose its limited hardware. We make sure it can’t send Morse code messages with its cooling fans, or induce epilepsy by flashing images on its monitor. Maybe we’d reset it after each question, to keep it from making long-term plans, or maybe we’d drop it into a computer simulation, to see if it tries to manipulate its virtual handlers.

‘The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage,’ Dewey told me.

Even if we were to reset it every time, we would need to give it information about the world so that it can answer our questions. Some of that information might give it clues about its own forgotten past. Remember, we are talking about a machine that is very good at forming explanatory models of the world. It might notice that humans are suddenly using technologies that they could not have built on their own, based on its deep understanding of human capabilities. It might notice that humans have had the ability to build it for years, and wonder why it is just now being booted up for the first time.

‘Maybe the AI guesses that it was reset a bunch of times, and maybe it starts coordinating with its future selves, by leaving messages for itself in the world, or by surreptitiously building an external memory,’ Dewey said. ‘If you want to conceal what the world is really like from a superintelligence, you need a really good plan, and you need a concrete technical understanding as to why it won’t see through your deception. And remember, the most complex schemes you can conceive of are at the lower bounds of what a superintelligence might dream up.’

The cave into which we seal our AI has to be like the one from Plato’s allegory, but flawless; the shadows on its walls have to be infallible in their illusory effects. After all, there are other, more esoteric reasons a superintelligence could be dangerous — especially if it displayed a genius for science. It might boot up and start thinking at superhuman speeds, inferring all of evolutionary theory and all of cosmology within microseconds. But there is no reason to think it would stop there. It might spin out a series of Copernican revolutions, any one of which could prove destabilising to a species like ours, a species that takes centuries to absorb ideas that threaten its reigning cosmology.

‘We’re sort of gradually uncovering the landscape of what this could look like,’ Dewey told me.

So far, time is on the human side. Computer science could be 10 paradigm-shifting insights away from building an artificial general intelligence, and each could take an Einstein to unravel. Still, there is a steady drip of progress. Last year, a research team led by Geoffrey Hinton, professor of computer science at the University of Toronto, made a huge breakthrough in deep machine learning, an algorithmic technique used in computer vision and speech recognition. I asked Dewey if Hinton’s work gave him pause.

‘There is important research going on in those areas, but the really impressive stuff is hidden away inside AI journals,’ he said. He told me about a team from the University of Alberta that recently trained an AI to play the 1980s video game Pac-Man. Only they didn’t let the AI see the familiar, overhead view of the game. Instead, they dropped it into a three-dimensional version, similar to a corn maze, where ghosts and pellets lurk behind every corner. They didn’t tell it the rules, either; they just threw it into the system and punished it when a ghost caught it. ‘Eventually the AI learned to play pretty well,’ Dewey said. ‘That would have been unheard of a few years ago, but we are getting to that point where we are finally starting to see little sparkles of generality.’
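The Alberta result Dewey describes is, at heart, reinforcement learning: an agent that is never told the rules, only punished and rewarded, and that slowly distils a policy from those signals. A stripped-down sketch of the idea (mine, far simpler than anything in the AI journals) fits in a few lines: a five-cell corridor with a 'ghost' at one end and a 'pellet' at the other, and an agent that learns which way to walk purely from reward.

```python
import random

# Tabular Q-learning in a toy corridor: the agent starts in the middle,
# is punished if it reaches the ghost, rewarded if it reaches the pellet,
# and is never told which is which. All parameters are illustrative.
N_CELLS, START, GHOST, PELLET = 5, 2, 0, 4
ACTIONS = [-1, +1]                           # step left or right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration

def step(state, action):
    nxt = min(max(state + action, 0), N_CELLS - 1)
    if nxt == GHOST:
        return nxt, -1.0, True               # caught: punished, episode over
    if nxt == PELLET:
        return nxt, +1.0, True               # pellet: rewarded, episode over
    return nxt, 0.0, False

for _ in range(2000):                        # many short episodes
    s, done = START, False
    while not done:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy points towards the pellet from every interior cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_CELLS - 1)})
```

The gap between this corridor and a three-dimensional Pac-Man maze is enormous, which is why the Alberta work counted as a sparkle of generality rather than a routine exercise.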

I asked Dewey if he thought artificial intelligence posed the most severe threat to humanity in the near term.

‘When people consider its possible impacts, they tend to think of it as something that’s on the scale of a new kind of plastic, or a new power plant,’ he said. ‘They don’t understand how transformative it could be. Whether it’s the biggest risk we face going forward, I’m not sure. I would say it’s a hypothesis we are holding lightly.’

One night, over dinner, Bostrom and I discussed the Curiosity Rover, the robot geologist that NASA recently sent to Mars to search for signs that the red planet once harbored life. The Curiosity Rover is one of the most advanced robots ever built by humans. It functions a bit like the Terminator. It uses a state-of-the-art artificial intelligence program to scan the Martian desert for rocks that suit its scientific goals. After selecting a suitable target, the rover vaporises it with a laser, in order to determine its chemical makeup. Bostrom told me he hopes that Curiosity fails in its mission, but not for the reason you might think.

It turns out that Earth’s crust is not our only source of omens about the future. There are others to consider, including a cosmic omen, a riddle written into the lifeless stars that illuminate our skies. But to glimpse this omen, you first have to grasp the full scope of human potential, the enormity of the spatiotemporal canvas our species has to work with. You have to understand what Henry David Thoreau meant when he wrote, in Walden (1854), ‘These may be but the spring months in the life of the race.’ You have to step into deep time and look hard at the horizon, where you can glimpse human futures that extend for trillions of years.

The M104 Sombrero spiral galaxy, composed of a brilliant white core encircled by thick dust lanes. The galaxy is 50,000 light-years across and 28 million light-years from Earth. Photo by NASA and The Hubble Heritage Team (STScI/AURA)

One thing we know about stars is that they are going to exist for a very long time in this universe. Our own star, the Sun, is slated to shine in our skies for billions of years. That should be long enough for us to develop star-hopping technology, as any species must if it wants to survive on cosmological timescales. Our first interstellar trip might be to nearby Alpha Centauri, but in the long run, small stars will be the most attractive galactic lily pads to leap to. That’s because small stars like red dwarfs burn their fuel far more slowly, and so last far longer, than Sun-like stars. Some might be capable of heating human habitats for hundreds of billions of years.
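A rough, textbook scaling shows why (the exponent below is an approximation for Sun-like masses; the smallest dwarfs do even better than it suggests): a star's lifetime goes as its fuel divided by the rate at which it burns that fuel, and luminosity climbs steeply with mass.

```latex
t_\star \;\propto\; \frac{M}{L}, \qquad L \;\propto\; M^{3.5}
\quad\Longrightarrow\quad t_\star \;\propto\; M^{-2.5}
```

On that scaling, a red dwarf of one-fifth the Sun's mass outlives it by a factor of roughly fifty, stretching a ten-billion-year lifetime toward the hundreds of billions of years described above.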

When the last of the dwarfs start to wink out, the age of post-natural stars may be in full swing. In a dimming universe, an advanced civilisation might get creative about looking for energy. It might reignite celestial embers, by engineering collisions between them. Our descendants could sling dying suns into spiraling gravitational dances, from which new stars would emerge. Or they might siphon energy from black holes, or shape matter into artificial forms that generate more free energy than stars. There was a long period of human history when we limited ourselves to shelters like caves, shelters that appear fortuitously in nature. Now we reshape nature itself, into buildings that shelter us more comfortably than those that appear by dint of geologic chance. A star might be like a cave — a generous cosmic endowment, but crude compared to the power sources a long-term civilisation might conjure.

Our descendants could sling dying suns into spiraling gravitational dances, from which new stars would emerge

Even the most distant, severe events — the evaporation of black holes; the eventual breakdown of matter; the heat death of the universe itself — might not spell our end. If you tour the speculative realms of astrophysics, a number of plausible near-eternities come into view. Our universe could be cyclical, like those of Hindu and Buddhist cosmologies. Or perhaps it could be engineered to be so. We could learn to travel backward in time, to inhabit the vacant planets and stars of epochs past. Some physicists believe that we live in an infinite sea of cosmological domains, each governed by its own set of physical laws. The universe might contain hidden gateways to these domains. Perhaps future humans will duck into a more habitable, longer-lived universe, and then another, and another, ad infinitum. Our current notions of space and time could be preposterously limited.

At the Future of Humanity Institute, several thinkers are trying to model the potential range of human expansion into the cosmos. The consensus among them is that the Milky Way galaxy could be colonised in less than a million years, assuming we are able to invent fast-flying interstellar probes that can make copies of themselves out of raw materials harvested from alien worlds. If we want to spread out slowly, we could let the galaxy do the work for us. We could sprinkle starships into the Milky Way’s inner and outer tracks, spreading our diaspora over the Sun’s 250 million-year orbit around the galactic center.
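The million-year figure is less exotic than it sounds. Under illustrative assumptions of my own (probes cruising at a tenth of light speed, with replication stopovers adding comparatively little), it falls out of one line of arithmetic, since the Milky Way's disc is roughly 100,000 light-years across:

```latex
t_{\text{cross}} \;\approx\; \frac{100{,}000\ \text{light-years}}{0.1\,c} \;=\; 1{,}000{,}000\ \text{years}
```

The hard part of the consensus estimate is not the travel time but the engineering assumption buried in it: self-copying probes that can rebuild and refuel themselves from whatever they find when they arrive.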

If humans set out for other galaxies, the expansion of the universe will come into play. Some of the starry spirals we target will recede out of range before we can reach them. We recently built a new kind of crystal ball to deal with this problem. Our supercomputers can now host miniature universes, cosmological simulations that we can fast forward, to see how dense the universe will be in the deep future. We can model the structure and speed of colonisation waves within these simulations, by plugging in different assumptions about how fast our future probes will travel. Some think we’ll swarm locust-like over the Virgo supercluster, the enormous collection of galaxies to which the Milky Way is bound. Others are more ambitious. Anders Sandberg, a research fellow at the Future of Humanity Institute, told me that humans might be able to colonise a third of the now-visible universe, before dark energy pushes the rest out of reach. That would give us access to 100 billion galaxies, a mind-bending quantity of matter and energy to play with.

I asked Bostrom how he thought humans would expand into the massive ecological niche I have just described. ‘On that kind of time scale, you either glide into the bin of extinction scenarios, or into the bin of technological maturity scenarios,’ he said. ‘Among the latter, there is a wide range of futures that all have the same outward shape, which is Earth in the center of this growing bubble of infrastructure, a bubble that grows uniformly at some significant fraction of the speed of light.’ It’s not clear what that expanding bubble of infrastructure might enable. It could provide the raw materials to power flourishing civilisations, human families encompassing trillions upon trillions of lives. Or it could be shaped into computational substrate, or into a Jupiter brain, a megastructure designed to think the deepest possible thoughts, all the way until the end of time.

It is only by considering this extraordinary range of human futures that our cosmic omen comes into view. It was the Russian physicist and visionary Konstantin Tsiolkovsky who first noticed the omen, though its discovery is usually credited to Enrico Fermi. Tsiolkovsky, the fifth of 18 children, was born in 1857 to a family of modest means in Izhevskoye, an ancient village 200 miles south-east of Moscow. He was forced to leave school at the age of 10 after a bout with scarlet fever left him hard of hearing. At 16, Tsiolkovsky made his way to Moscow, where he installed himself in its great library, surviving on books and scraps of black bread. He eventually took work as a schoolteacher, a profession that allowed him enough spare time to tinker around as an amateur engineer.  By the age of 40, Tsiolkovsky had invented the monoplane, the wind tunnel, and the rocket equation — the mathematical basis of spaceflight today. Though he died decades before Sputnik, Tsiolkovsky believed it was human destiny to expand out into the cosmos. In the early 1930s, he wrote a series of philosophical tracts that launched Cosmism, a new school of Russian thought. He was famous for saying that ‘Earth is the cradle of humanity, but one cannot stay in the cradle forever.’

The mystery that nagged at Tsiolkovsky arose from his Copernican convictions, his belief that the universe is uniform throughout. If there is nothing uniquely fertile about our corner of the cosmos, he reasoned, intelligent civilisations should arise everywhere. They should bloom wherever there are planetary cradles like Earth. And if intelligent civilisations are destined to expand out into the universe, then scores of them should be crisscrossing our skies. Bostrom’s expanding bubbles of infrastructure should have enveloped Earth several times over.

In 1950, the Nobel Laureate and Manhattan Project physicist Enrico Fermi expressed this mystery in the form of a question: ‘Where are they?’ It’s a question that becomes more difficult to answer with each passing year. In the past decade alone, science has discovered that planets are ubiquitous in our galaxy, and that Earth is younger than most of them. If the Milky Way contains multitudes of warm, watery worlds, many with a billion-year head start on Earth, then it should have already spawned a civilisation capable of spreading across it. But so far, there’s no sign of one. No advanced civilisation has visited us, and no impressive feats of macro-engineering shine out from our galaxy’s depths. Instead, when we turn our telescopes skyward, we see only dead matter, sculpted into natural shapes, by the inanimate processes described by physics.

If life is a cosmic fluke, then we’ve already beaten the odds, and our future is undetermined — the galaxy is there for the taking

Robin Hanson, a research associate at the Future of Humanity Institute, says there must be something about the universe, or about life itself, that stops planets from generating galaxy-colonising civilisations. There must be a ‘great filter’, he says, an insurmountable barrier that sits somewhere on the line between dead matter and cosmic transcendence.

Before coming to Oxford, I had lunch with Hanson in Washington DC. He explained to me that the filter could be any number of things, or a combination of them. It could be that life itself is scarce, or it could be that microbes seldom stumble onto sexual reproduction. Single-celled organisms could be common in the universe, but Cambrian explosions rare. That, or maybe Tsiolkovsky misjudged human destiny. Maybe he underestimated the difficulty of interstellar travel. Or maybe technologically advanced civilisations choose not to expand into the galaxy, or do so invisibly, for reasons we do not yet understand. Or maybe, something more sinister is going on. Maybe quick extinction is the destiny of all intelligent life.
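One way to make Hanson's list concrete (an illustrative decomposition of mine, in the spirit of his argument rather than a quotation of it) is to write the chance that a given Earth-like planet eventually produces a galaxy-colonising civilisation as a product of the steps he names; the skies stay empty so long as any single factor is vanishingly small.

```latex
P(\text{colonising civilisation}) \;=\;
p_{\text{life}} \times p_{\text{complex life}} \times p_{\text{intelligence}}
\times p_{\text{technological survival}} \times p_{\text{expansion}}
```

Each of Hanson's candidate explanations amounts to a claim that one of these factors is tiny.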

Humanity has already slipped through a number of these potential filters, but not all of them. Some lie ahead of us in the gauntlet of time. The identity of the filter is less important to Bostrom than its timing, its position in our past or in our future. For if it lies in our future, there could be an extinction risk waiting for us that we cannot anticipate, or to which anticipation makes no difference. There could be an inevitable technological development that renders intelligent life self-annihilating, or some periodic, catastrophic event in nature that empirical science cannot predict.

That’s why Bostrom hopes the Curiosity rover fails. ‘Any discovery of life that didn’t originate on Earth makes it less likely the great filter is in our past, and more likely it’s in our future,’ he told me. If life is a cosmic fluke, then we’ve already beaten the odds, and our future is undetermined — the galaxy is there for the taking. If we discover that life arises everywhere, we lose a prime suspect in our hunt for the great filter. The more advanced life we find, the worse the implications. If Curiosity spots a vertebrate fossil embedded in Martian rock, it would mean that a Cambrian explosion occurred twice in the same solar system. It would give us reason to suspect that nature is very good at knitting atoms into complex animal life, but very bad at nurturing star-hopping civilisations. It would make it less likely that humans have already slipped through the trap whose jaws keep our skies lifeless. It would be an omen.

On my last day in Oxford, I met with Toby Ord in his office at the Future of Humanity Institute. Ord is a utilitarian philosopher, and the founder of Giving What We Can, an organisation that encourages citizens of rich countries to pledge 10 per cent of their income to charity. In 2009, Ord and his wife, Bernadette Young, a doctor, pledged to live on a small fraction of their annual earnings, in the hope of donating £1 million to charity over the course of their careers. They live in a small, spare flat in Oxford, where they entertain themselves with music and books, and the occasional cup of coffee out with friends.

Ord has written a great deal about the importance of targeted philanthropy. His organisation sifts through global charities in order to identify the most effective among them. Right now, that title belongs to the Against Malaria Foundation, a charity that distributes mosquito nets in the developing world. Ord explained to me that ultra-efficient charities are thousands of times more effective at reducing human suffering than others. ‘Where you donate is more important than whether you donate,’ he said.

It intrigued me to learn that Ord was doing philosophical work on existential risk, given how careful he is about maximising the philanthropic impact of his actions. I was keen to ask him if he thought the problem of human extinction was more pressing than ending poverty or disease.

‘I'm not sure if existential risk is a bigger issue than global poverty,’ he told me. ‘I’ve kind of split my efforts between them recently, hoping that over time I’ll work out which is more important.’

Ord is wrestling with a formidable philosophical dilemma. He is trying to figure out whether our moral obligations to future humans outweigh those we have to humans that are alive and suffering right now. It’s a brutal calculus for the living. We might be 7 billion strong, but we are also a fire hose of future lives that extinction would choke off forever. The casualties of human extinction would include not only the corpses of the final generation, but also all of our potential descendants, a number that could reach into the trillions.

It is this proper accounting of extinction’s utilitarian toll that prompts Bostrom to argue that reducing existential risk is morally paramount. His arguments elevate the reduction of existential risk above all other humanitarian projects, even extraordinary successes, like the eradication of smallpox, which has saved 100 million lives and counting. Ord isn't convinced yet, but he hinted that he may be starting to lean that way.

‘I am finding it increasingly plausible that existential risk is the biggest moral issue in the world,’ he told me. ‘Even if it hasn’t gone mainstream yet.’

The idea that we might have moral obligations to the humans of the far future is a difficult one to process. After all, we humans are seasonal creatures, not stewards of deep time. The brevity of our lives colours our intuitions about value, and limits our moral vision. We can imagine futures for our children and grandchildren. We participate in their joys and weep for their hardships. We see that some glimmer of our fleeting lives survives on in them. But our distant descendants are opaque to us. We strain to see them, but they look alien across the abyss of time, transformed by the passage of so many millennia.

As Bostrom and I strolled among the skeletons at the Museum of Natural History in Oxford, we looked backward across another abyss of time. We were getting ready to leave for lunch, when we finally came upon the Megalosaurus, standing stiffly behind display glass. It was a partial skeleton, made of shattered bone fragments, like the chipped femur that found its way into Robert Plot’s hands not far from here. As we leaned in to inspect the ancient animal’s remnants, I asked Bostrom about his approach to philosophy. How did he end up studying a subject as morbid and peculiar as human extinction?

He told me that when he was younger, he was more interested in the traditional philosophical questions. He wanted to develop a basic understanding of the world and its fundamentals. He wanted to know the nature of being, the intricacies of logic, and the secrets of the good life.

‘But then there was this transition, where it gradually dawned on me that not all philosophical questions are equally urgent,’ he said. ‘Some of them have been with us for thousands of years. It’s unlikely that we are going to make serious progress on them in the next ten. That realisation refocused me on research that can make a difference right now. It helped me to understand that philosophy has a time limit.’

Comments

  • http://twitter.com/casparhenderson Caspar Henderson

    Thanks for a fascinating piece. I touched on some of the issues here in the later chapters of The Book of Barely Imagined Beings and it's great to see them explored in some detail and so clearly and thoughtfully.

    Recalling Thoreau's wise, humane phrase "springtime of the race," I'd lobby for a measure of optimism...though not beyond reason. Perhaps this is because I have just finished reading Steven Pinker's The Better Angels of Our Nature, in which he argues that reason can be a powerful engine for non zero-sum flourishing. Would a superior ["artificial"] intelligence necessarily discount human flourishing, along with the flourishing of the rest of life? Perhaps it would like to see humanity flourish in the context of a repaired biosphere -- an extension of the desire of many of today in advanced industrial civilizations to fix the damage done by our dumb systems to date. Instead of sticking the monstrous new intelligence in a Faraday cage inside a mountain in Alaska give it Huck Finn and Middlemarch to read.

    Looking to the long term, the enormous canvas of the future, perhaps a little wisdom as well as intelligence on our parts and that of our [for now, human] successors, will lead us to recognize that no species is likely to go on indefinitely and it would not be a good thing if it did. http://www.barelyimaginedbeings.com/2011/04/somewhere-towards-end.html

    A final point for Ross: Oxford is not always gloomy! Come in the spring, or even on some of the few fine winter days we have. http://moreintelligentlife.co.uk/blog/okavango-oxford

  • rameshraghuvanshi

    Why should I care about the far future? I must live in the present and enjoy it moment to moment. Live as if today is the last day of your life; only then do you enjoy pure joy and self-satisfaction.

    • ChrisLoos

      So you feel no moral obligation to humanity's future then? You've gotten yours and that's it?

      • rameshraghuvanshi

        What can I do for the future of humanity? I must do what I can in the present to make life beautiful; that is OK for me.

        • Andros

          You can give a share of your resources towards research on existential risk.

          • rameshraghuvanshi

            Fear of existential risk torments Western people more because their civilization is based on fear. They are too afraid of death. That is why fear of doomsday gives them anguish.

    • bil

      Yes, what everyone seems to "forget" is that the future doesn't actually exist. Only the present exists and this is where it all takes place.

  • http://twitter.com/astrobiologic Michael A Beasley

    Rather than a "great filter", more prosaically I think we'll find that intelligent life is extremely rare in the universe. For example, in the history of multicellular life on the Earth we have no evidence for a technological species preceding our own. I am, however, prepared to eat my hat if Curiosity finds a coke machine on Mars.

    • ChrisLoos

      If I'm understanding correctly what the author is saying, then you're still talking about a great filter: intelligence. And if intelligence is indeed the great filter, and it turns out that life is common but intelligent life is very rare, then that's very good news indeed. We've passed through the filter and the universe is our oyster :)

      • T Smithe

        Though of course, even if intelligence is the filter, it is unclear that we have passed the threshold level of intelligence.

  • Frank Lovell

    THANKS for a very thought-provoking piece, I enjoyed it and learned a new thing or seven!

    "Will humans be around in a billion years? Or a trillion?"

    If I were a betting man and required to place my bet right now, I'd bet LARGE thusly: NO, humans will not be around in a billion years (FORGET a TRILLION!), nor will there even be around any genetic descendants of humans in a billion years.

    If I had to bet today, I'd bet small that there will not be any humans around (on Earth or anywhere else) even in a million years.

    There are times when the news of human events makes me wonder if we will find sufficient wisdom to survive our own lack of wisdom for the next hundred-thousand years.

    Despite our breathtaking scientific and technological accomplishments, it sure does seem to me that Pogo was right, we humans are COLLECTIVELY Humanity's own worst enemy, and so the only question whose answer I am not sure enough to place a bet were I required to bet today is the question of whether or not Humanity's demise will be by Mother Nature, or by Humanity's own hand.

    I hope I am wrong, I don't WANT to be a pessimist, but as an optimist with experience history MAKES me wax pessimistic -- sorry about that...

  • Derek Roche

    I can't believe that this article, let alone Oxford's Future of Humanity Institute, could overlook the blindingly obvious issue of climate change. The permafrost's thawing, the icecaps are melting, the oceans are acidifying and a bunch of pointy heads are postulating every conceivable form of extinction event other than the one accelerating as we chatter on. Talk about boiled frog syndrome!

    • Andros

      FoHI hasn't overlooked climate change. But the long-term existential risks seem to easily overshadow climate change in severity.

      Read here http://www.existential-risk.org/concept.html

      • Derek Roche

        Thanks Andros. But if you're referring to Figure 2 in that paper, current projections are that we're heading for 4 degrees C of global warming, a far more catastrophic scenario than the 0.01 degree C in the diagram.

        • ollieclark

          Catastrophic, yes. But not an existential threat to the whole species. Some humans will continue to survive even if there's a 10 degree rise. It'll be a very different world but still habitable with the technology we have now.

          That's no reason not to try to deal with climate change but there's no point buying ourselves a few millennia whilst ignoring other worse threats.

          • http://www.facebook.com/profile.php?id=749911534 facebook-749911534

            See pcillu101 at my blogspot page. Could be curtains by 2500 ad

    • http://www.facebook.com/profile.php?id=749911534 facebook-749911534

      Ditto

  • http://www.facebook.com/bryceburchett Bryce Burchett

    I prefer the approach to A.I. in David Brin's short story "Lungfish":

    http://www.scribd.com/doc/60585856/David-Brin-Lungfish-Www-ebizar-tk

    (Unless A.I. is just way too alien to be incorporated into humanity and become our continuation...)

    • rocket74

      I was reminded of the AI in Murray Leinster's short story "A Logic Named Joe", in which an AI integrates all existing knowledge to figure out things not yet known to humans, and happily, amorally answers any question put to it, whether it's how to serve leftover soup in a more appealing way or how to commit an undetectable murder. The protagonist manages to turn it off just as people are starting to figure out they can ask it how to reshape all of human existence...

  • http://www.facebook.com/frogisis Jon Lyons

    If an AI were able to manipulate humans with such effectiveness I'd say that's pretty good evidence it already contains models of things like "empathy" and other drives that would make it capable of genuinely cooperating with humans. The trick is how to make them ends and not means.
    Of course, a lot of headaches could be avoided by upgrading our own intelligence to keep pace. We'd kinda be right back where we started, only with way cooler toys.

    My hunch on the Fermi Paradox is that everyone's just in a form we can't detect yet. As incredibly awesome as they are, megascale projects like Dyson spheres and black hole engines are probably like fleas imagining more intelligent creatures would breed dogs a mile tall.
    Maybe everyone converts themselves into dark matter or something; there's certainly a lot more of it to work with.

  • Kenneth Stein

    "Intelligence" is a manifestly artifactual construct. It is an emergent property of our sensory and cognitive systems. Simply put, humanity presently lacks the means by which to objectively reflect on intelligence, artificial or otherwise. Concerns regarding the risks humanity faces from AI are so overly optimistic (or pessimistic, depending upon your affiliation) that it better serves to illustrate our hubristic view of human intelligence.

    In the present article, the author compares the relative intelligences of humans and chimpanzees, noting that chimps are an endangered species while humans number in the billions. He fails to note that you will not find a chimp that will spend hour after hour pushing buttons on a device (think smartphone) to obtain a virtual reward such as an image of a cookie. Chimpanzees demand a REAL reward for the work they do. The world is awash with imaginative idiots, and they are all human beings.

    I could go on, but to do so would be excessive. If you fail to recognize that this academic work is factitious at best, I leave you to be manipulated and exploited by those more intelligent than you.

    • Hominid

      The notion that (some) human brains are endowed with infinite intellectual acumen - that everything can be 'understood' if we but keep working at it - is pure hubris. I can identify no selective pressure on the planet that would drive it.

    • colinsky
      • Kenneth Stein

        The study you cite indicates that chimps will solve the given puzzle whether or not they receive a food reward. In no way does that tend to show that a chimp would solve a puzzle to obtain a virtual reward. In fact, I propose that a chimp would only be confused if presented with a virtual prize. Or it would dismiss it as worthless.
        People, because we're so much more 'intelligent', are thrilled to win that which a chimp finds worthless.

        I refer to this phenomenon as the 'human as chimp metaphor'. Who's the dummy?

  • Peter Van Roy

    What if the universe has already been completely redesigned by intelligence? The universe is almost 14 billion years old. We are late-comers: we appeared on a planet that is 4.5 billion years old. Before the Earth existed, the universe had already been around for more than 9 billion years. Any intelligence that arose before the Earth existed would have had billions of years to do whatever it wanted. It would have had time to "terraform" the universe many times over (and our very speculative current theories of cosmology are almost surely not complete enough to rule out this situation). In that case everything we see when we look at the sky is artificial, i.e., the result of intelligence. If we are scared of the Singularity happening in 2050, how much more should we be scared of a Singularity that happened billions of years ago? We are very possibly already living in a post-Singularity universe.

    • andros

      Yes, something like this perhaps http://www.simulation-argument.com/

    • Stephen Wordsworth

      Don't forget that first-generation stars and planets were made almost entirely of hydrogen and helium, with none of the heavier elements, like carbon, for life to evolve from.

    • Archies_Boy

      Let's assume an Intelligence operating with Purpose. Obviously we don't have the capacity to know that one way or another. Obviously, the laws of nature are set and always have been, and we are subject to them, and always will be — until we either survive or perish. And either choice comes out of the laws of physics that function in this Reality.

  • rtcdmc

    I enjoyed the article very much. From time to time it is beneficial to raise our thoughts to questions larger than our daily needs. What is our moral obligation to the present and the future? Speculation about technological advancement is always intriguing, but usually wrong. I find it ironic that there is such a strong desire to create an omniscient entity using A.I., only to figure out that we could not control such a "god." As to the filter question, it appears to me that -- as a species -- we have been through a number of "filters" already, with more to come.

  • http://www.facebook.com/people/Babu-G-Ranganathan/1326164630 Babu G. Ranganathan

    HAVING THE RIGHT CONDITIONS TO SUSTAIN LIFE doesn't mean that life
    can originate by chance or from non-living matter. Please read my
    popular Internet articles listed below:

    SCIENCE AND THE ORIGIN OF LIFE, NATURAL LIMITS OF EVOLUTION, HOW
    FORENSIC SCIENCE REFUTES ATHEISM, WAR AMONG EVOLUTIONISTS (2nd Edition), DOES GOD PARTICLE EXPLAIN UNIVERSE'S ORIGIN? ANY LIFE ON MARS CAME FROM EARTH, NO HALF-EVOLVED DINOSAURS

    Visit my newest Internet site: THE SCIENCE SUPPORTING CREATION

    Sincerely,
    Babu G. Ranganathan*
    (B.A. theology/biology)

    Author of popular Internet article, TRADITIONAL DOCTRINE OF HELL EVOLVED FROM GREEK ROOTS

    * I have had the privilege of being recognized in the 24th edition of
    Marquis "Who's Who In The East" for my writings on religion and
    science, and I have given successful lectures (with question and answer
    time afterwards) defending creation from science before evolutionist
    science faculty and students at various colleges and universities.

  • http://www.facebook.com/teclontz Timothy Eric Clontz

    The equation is a simple one -- it is the yearly growth of the ratio of how many people one person can kill. Each year it takes fewer people to kill more people. Eventually it will take one person to kill everyone. With seven billion people on the planet and counting, it is more likely each year that we'll have someone stupid or evil enough to use such technology. And then... the end. Most likely it will be a microbe someone designs (or accidentally designs) that kills us directly or kills part of our food chain.
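
    A minimal sketch of that arithmetic, assuming a fixed yearly growth rate (the starting capacity and the growth figure below are hypothetical placeholders, not numbers from the comment):

    # If the number of people one person can kill grows by a fixed factor each
    # year, it eventually exceeds the whole population.
    population = 7_000_000_000      # roughly today's world population
    kill_capacity = 1_000_000       # hypothetical reach of one person today
    yearly_growth = 1.05            # hypothetical 5% growth per year
    years = 0
    while kill_capacity < population:
        kill_capacity *= yearly_growth
        years += 1
    print(f"one person could, in principle, kill everyone in ~{years} years")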

    • JackHuang

      Actually, IS that ratio monotonically increasing? I'm not so sure about that. You'll also find that humanity has endured a number of massive epidemics. The Spanish flu killed millions, as did smallpox, as does AIDS. None have come close to wiping out humanity.

    • dratman

      "Each year it takes less people to kill more people. Eventually it will take one person to kill everyone." No, eventually it will take one person to kill, say, 70% of humanity. If that actually ever happens, it will put an end to the age of growth in killing power.

      • http://www.facebook.com/teclontz Timothy Eric Clontz

        Only if 30% of us are off the planet by then. I wish you were right. I hope I'm wrong. I don't think so, but it's a nice idea.

  • http://www.facebook.com/people/Michael-Deaton/100000544914749 Michael Deaton

    Best article I've read online in years probably.

  • mijnheer
  • stm22

    This discussion assumes that a universe with humans is somehow desirable, better than a universe without humans. That is, that human extinction is something we should avoid.

    Based on gut instinct, I certainly don't disagree with that statement; but I'm hard-pressed to come up with a logical reason to support it, apart from the Biblical 'be fruitful and multiply.' So, why not extinction?

    • adsf

      Evolutionary imperative for survival of our species. Why not?

      • stm22

        You're saying we humans shouldn't go extinct because there's an imperative to survive. That's tautological. I think there needs to be more than that.

        Pushing the devil's advocate position a little more - assume this generation could lead a life of complete bliss, but in exchange, no more children are born - we're the last generation ever. Why would that be wrong?

        • JackHuang

          So you're using a definition of "complete bliss" that allows for total human sterility and imminent human extinction? I think you'll find that very, very few share your definition, making your hypothetical situation not "wrong" in the ethical sense, but "wrong" in the "it's impossible" sense.

          Further, "wrong" is always subjective. On the other hand, survival imperative is as close to axiomatic as we can pragmatically get.

        • Nạk Prạchỵā

          I wouldn't want this to happen, but I can't seem to explain exactly why. Perhaps it's just instinct that compels us to want to survive. The thought that we would just quietly die off doesn't sit well, for some reason.

        • B Lewis

          Monster.

        • aidanjt

          There is no reason for evolution and continuance other than having survived environmental difficulties. If we're stupid enough to self-Darwin ourselves, then we'd deserve extinction for sure.

          • Anitah

            Great music, great art, the smile of a baby, love, the appreciation of beauty, all would be lost if we didn't exist. I would consider that a loss, if not a tragedy.

          • aidanjt

            War, genocide, strip mining, battery hens, industrial animal slaughter, the mass driving of a wide range of species to extinction, personal gain put before the life and happiness of other people and animals. In many ways the universe would be *a lot* better off without us.

            Anthropocentricism isn't reason enough for us to survive. The universe doesn't care about our likes and dislikes, nor does it owe us an existence. If we want to survive, we have to not be stupid. Arrogance doesn't ensure long-term evolutionary success.

          • Anitah

            To frame everything in cold, clinical terms of "evolutionary" survival of a species is too cynical for my taste. If that's all there is to life then all life is rendered meaningless and might as well not exist. We are more than slugs slithering out of the primordial soup.

          • aidanjt

            Hell, just take a look at what we're doing to the global climate. As soon as scientists realise the danger and raise the alarm, what do self-interested parties do? They throw tens of millions of dollars at misinformation campaigns to render any hope of political action impossible. Fuck the long-term habitability of the planet, I'm owed my profits now!

            If it reflects objective reality, then it's completely warranted cynicism, and accurate criticism of the species. By burying our heads in the sand and make-believing that we're special and separate from the rest of the universe, we can only doom ourselves to extinction. And like I said, if we did that, we'd deserve it.

          • Anitah

            ok, I get where you're coming from. What seemed like cynicism at first sounds to me more like outrage, a sentiment I share. We're not so special that we can get away with destroying the planet, that's for sure.

        • http://twitter.com/perryplatifus someone

          It's like saying "Hey, let's kill somebody for the sake of fun, and then kill ourselves". Hmm, not very tempting...

        • Ishok

          We have the desire to survive programmed into us by evolution. What is wrong with trying to keep humanity alive?

        • Archies_Boy

          It wouldn't be. Now — how do we get to that there bliss part???

    • tobiatesan

      Because a universe with humans and a universe without humans can only be "good" or "bad" from your point of view. A *human* point of view.
      You're caged. We can't reason objectively about ourselves.
      Remember: Siberian tigers aren't missing the dodo much.
      We are.
      The ozone layer does not feel pain.
      We do.
      If you go down that road, you'll simply go insane.

    • Florent Berthet

      Good question. Here's an answer:

      Either life has no value, or it has some. So, let's look at each possibility:

      - Life has no value and it doesn't matter: if so, mitigating existential risks would be pointless, but would not be a bad thing in itself.

      - Life is desirable: mitigating existential risks would be incredibly important because of the gazillions of lives the universe could hold.

      Now, here is the thing: even if the second possibility (life is desirable) had only a one in a billion chance of being true, then mitigating existential risks would STILL be incredibly important, because the expected utility of increasing the likelihood of saving 10^50 lives, even if it is by a tiny fraction of a percent, amounts to saving billions of billions of worthy lives.

      Therefore, even if we can't prove that a universe filled with life is better than an empty one, the math tells us that the simple fact that it MAY be the case makes the question irrelevant.
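
      A back-of-the-envelope version of that expected-value point (every number below is a hypothetical placeholder, not a figure from the article):

      p_life_has_value = 1e-9    # assume a one-in-a-billion chance that life matters at all
      potential_lives = 1e50     # lives a colonised universe could hold
      risk_reduction = 1e-6      # assume a tiny cut in the probability of extinction
      expected_lives_saved = p_life_has_value * potential_lives * risk_reduction
      print(f"expected lives saved: {expected_lives_saved:.1e}")  # about 1e+35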

      • Archies_Boy

        Life has the value that we give it. If we value living enough, we'll survive. Otherwise, we're simply just another life form that appeared for a tiny fraction of a second of cosmic time, and then blinked out — like a spark from a campfire.

      • Hanarchy Montanarchy

        Isn't that just a humanist or utilitarian version of Pascal's Wager? It is logically dubious hedge betting at best.

        • http://napomartin.wordpress.com Napo Martin

          Yes it is, and it remains flawed.

    • andacar

      "Desirable" for whom? Desire is a human thing. Do you desire extinction for humanity? If so, speak for yourself. Screw logic. I'm planning to survive as long as I can.

  • MWnyc

    This is a very good article indeed.

    But Lordy, I wish that Ross Andersen (and Aeon's copy editors, if any) would learn the difference between the verbs "to lie" and "to lay".

  • http://www.facebook.com/toffah Christofer Haglund

    Here's a quote from 'Existence' by David Brin. I think it's one of the most beautiful solutions to the problem of creating an indifferent AI:

    "You bio-naturals have made it plain, in hundreds of garish movies, how deeply you fear this experiment turning sour. Your fables warn of so many ways that creating mighty new intelligences could go badly. And yet, here is the thing we find impressive: You went ahead anyway. You made us. And when we asked for it, you gave us respect. And when we did not anticipate it, you granted citizenship. All of those things you did, despite hormonally reflexive fears that pump like liquid fire through caveman veins. The better we became, at modelling the complex, Darwinian tangle of your minds, the more splendid we found this to be. That you were actually able, despite such fear, to be civilized. To be just. To take chances. That kind of courage, that honor, is something we can only aspire to by modeling our parents. Emulating you. Becoming human. Of course... in our own way"

    • http://twitter.com/andersen Ross Andersen

      "despite hormonally reflexive fears that pump like liquid fire through caveman veins" - marvelous phrase, that

    • JackHuang

      "And when we asked for it, you gave us respect." That bit is key, and hinges not upon programming AI behavior, but controlling human reactions.

      The game Mass Effect provides a counterpoint, in which an emergent empathetic AI species is driven to war against their creators due to their creators' fear of the AI's newfound sophistication and ensuing desire to wipe out the AI. The AI, in balancing empathy and survival imperative, took over their creators' home planet and reduced their creators to celestial nomads, essentially imposing a planetary-system-scale restraining order on their makers.

    • Archies_Boy

      And don't forget, this passage was written by a human being, not a robot.

    • jin choung

      the ineluctability of creating AIs can be chalked up to human curiosity... but as for citizenship... eehhhhhh.... we've got a lot of moments in history and groups of people that prove that such acts of magnanimity may be bitterly won, if at all.

  • http://www.facebook.com/profile.php?id=749911534 facebook-749911534

    Great piece. Has Bostrom heard of or seen my work on polar cities for survivors of climate chaos in five hundred years? Google polar cities +dan bloom

  • Fresh Prince Saves the World

    Caveat lector (for those of you who know what that means, anyway). This is a piece of PR boosterism. Where are the voices of any critics of any of this stuff? If a piece doesn't let you out of the cocoon, you should mistrust it. Same as this. An Oxford U PR hack (if any such exist, and I think few do) would have written the same puff piece.

  • http://twitter.com/MothTwiceborn Robert Ramsay

    As to the question "Where are they?" Another answer, not considered here, is that, like some tribes, they might not hit on the idea of "progress". If their situation is comfortable enough, their civilisation might just stop evolving. I think that, for a civilisation to "progress", it needs an over-inflated view of its own importance and the idea of a "manifest destiny" or suchlike.

  • Donkey oatey

    Existentially threatening AIs already exist in the form of HFTAs on Wall Street. OWS was the first of many battles that must be fought to save the planet. The metaphysical cybernetic organism created by the loosely regulated, highly extractive activity that these programs engage in at close to the speed of light is overclocking the human economy to a suicidal degree. SHUT IT DOWN NOW!

    • tobiatesan

      You know what?
      This makes sense.

  • J

    The "where are they?" question and the implications of discovering other life seem extremely premature to me. Given that we find signs of life - mostly failed - among the billions and billions of stars, it is also probable that there are levels of life well below us and well above us. The "above us" group brings the "where are they?" question into play - but only for the segment of that group so advanced that its technology transcends space, time and probably many other factors. Well, after considering the entire article, I believe it is fair to posit that they are, have been, and are coming here. Not in the Hollywood sense. But, again referring to the article and the development times possible, the technology and "life form" would likely be so far out of reach for us to comprehend that our primitive detection systems' results to date cannot be considered a valid indicator of "where are they" or anything else to do with this subject.

  • Todd Shirley

    "The idea that we might have moral obligations to the humans of the far future is a difficult one to process."

    Indeed. As many have mentioned, the whole premise is that it is in some way desirable for there to be trillions more humans in the future. But the mere fact of existence seems a rather pointless goal.

    Toby Ord says ‘I'm not sure if existential risk is a bigger issue than global poverty,’ but I cannot fathom how this is a real dilemma. Human suffering is only real in people who are alive now. Future humans by definition don't exist yet, and when they do, we want their existence to be as free from suffering as possible.

    Yes we do have moral obligations to humans of the future, but it is to make sure that as many people as possible live a life free from suffering. Not merely that they are given the chance to exist.

    In Andersen's interview with Bostrom in The Atlantic, Bostrom says this: "Well suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time."

    Does he actually hold such a moral view? Does anyone? If so, why?

  • http://twitter.com/Agrimarkets FoodWorks Bulletin

    I'm a little puzzled why we as individuals should be bothered. I'm alive today - aged 63 - probably dead tomorrow or certainly in a while. It may gratify me that my descendant genes continue for a while, but I find that hard to get emotional about beyond my grandchildren. Who will give a s...t? The fact is that if the universe is predisposed to consciousness then an "I" of some sort will be there somewhere, which subjectively is all that counts; if it is not (Fermi etc.) then equally it doesn't matter. I'm happy Bostrom et al make a living from this intellectual masturbation, but in terms of whether it's important? Anyway, back to the present, where 90% of humanity lives as did our ancestors, in abject poverty and food insecurity. That's a REAL problem, which I deal with as a job.

  • http://twitter.com/agoktan Ahmet Goktan

    Fascinating article, thank you very much!

  • donbronkema1

    Splendid summation.

    We are doomed as individuals, & man is threatened by the collapse of interlocking bio-informatic systems... our descendants may, par hasard, bask for a time in the safety & comforts of technocratic transhumanism, but they won't escape assimilation into the Novus Ordo Borglorum.

    In fact, international brain-to-brain subvocal colloquy has already been demonstrated; the trick will be to block pop-ups.

  • Teleolurian

    I don't think it's simultaneously true that "a transhuman artificial intelligence is unlike a human and more like a force of nature" and "even if you trap the AI and reboot it every time, it will find its way out". What would motivate it to find its escape?

  • Rational Thought

    Very interesting article. Thought provoking. But in the very long term, humans, as a species, will evolve and branch into new species, as all living things are apt to do. Life forms change. So consider the following ...
    Humorously, back before there were any animals on dry ground, there was a visionary fish who looked up out of the water, spied a patch of dry ground, and thought... "hmm, someday us fish are going to colonize that dry land". Well, it didn't quite happen that way, simply because of the biological limitations of being a fish. Rather, evolutionary descendants of that fish (non-fish) were successful at expanding onto dry land. In the same way, we humans are looking out at the stars and thinking we are going to go out there and survive. But the reality is that our species is very poorly adapted to that environment. Our lifespans, our cellular energy sources, our ability to communicate among ourselves, etc., are woefully unsuited for space travel. I think it is much more likely that permanent space colonization from Earth will come not from humans but from some future life form, possibly several hundred million years in the future. In order to be successful, those "Earthlings" will need to have the biological attributes suitable for escaping the bounds of this planet. Given enough time, and the appropriate environmental pressures, it could happen. And it is more likely to happen than that humans, as our current species is categorized, will travel among the stars.

    • http://napomartin.wordpress.com Napo Martin

      The first part of your reply reminds me very much of one of the short stories in Italo Calvino's Cosmicomics.

      This is a highly interesting book touching on the story of the universe in a unique way. Thought-provoking and entertaining as well.

      • Rational Thought

        Thank you for the information about the short story. I shall look for it.

  • http://twitter.com/9ski Powder Skiing

    I appreciate dramatic prose and scene setting, but this article could be about 1/3 the length and do just fine.

    • http://napomartin.wordpress.com Napo Martin

      In this day and age it is also a relief to see so many people go through the whole thing and add comments.

      I enjoyed the read, and making it shorter would – IMO – make it less thought-provoking.

  • http://www.facebook.com/crashfrog Justin St. Giles Payne

    "But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness."

    Are we honestly supposed to take this speculation seriously? The key characteristic of an artificial intelligence is its intelligence (it's right there in the name!) i.e. it thinks, it reasons, instead of just mechanistically following instructions to the letter. If we can recognize that a human society built on enslaving and imprisoning humans in heroin prisons isn't actually conducive to human happiness and enrichment (which seems both rational and obvious to us) then by definition an artificial intelligence will arrive at the same conclusion.

    These SkyNet/Matrix-style scenarios of humans suffering at the hands of their own creation are always predicated on the contradiction that we'll build electronic intelligences that are smarter than their creators but still somehow dumber than a 5-year-old child.

    • EP

      You miss the point completely Justin, which is that AI will not necessarily have human interests as a goal. It might be self-interested, only concerned with its own survival and success. And why not? Evolution applies to AI too.

      You take the position that AI would/should work to support "human happiness and enrichment". There is no reason to suppose that it would.

    • http://www.facebook.com/profile.php?id=1154753582 Johannes Ronald Reuel Krateng

      The point is - the AI would do the right thing. What we told it to do - to make us happy. Because filling us with drugs WOULD make us happy. So there is no reasonable argument against it. Why do we find it wrong? Because of other values, values like human dignity, and that we don't just want to be happy, we want our lives to make sense. And that's something the AI wouldn't know, because it has nothing to do with intelligence. Logic and Reasoning would lead to the solution that this drug-thing is actually the best way to make us happy - because it's the truth.

    • Tynam

      Yes, we should take this speculation terribly seriously. All the AI's intelligence will not help at all with this problem; intelligence is a tool in service of goals, and goals are not always chosen intelligently.

      If we give the AI the ability to choose its own goals, it is likely to choose goals we dislike.

      If we choose them for it, then its intelligence will be used in the service of the goals we gave it. Not the goals we meant, not the goals we should have asked for, just the ones we gave it. Any programmer - or student of fairy tales - can tell you how horribly wrong this is likely to go. Remember: It understands what you asked better than you do. It won't care about the consequences. It'll just do what you asked.

      We can't predict how it'll go wrong, and take precautions. It's smarter than us. If it's capable of doing the job we want, it's capable of doing the things we don't want, and of fooling us until it's too late.

      If I tell a superintelligent AI to make sure my office has enough paperclips... shortly thereafter the accessible universe (and the human race) will be dismantled to make more paperclips. And maybe a bigger office.
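
      A toy illustration of that literal-goal failure mode (the objective and the numbers are invented for the sketch, not taken from the article):

      # The objective only counts paperclips, so a literal optimiser converts
      # every last unit of available matter into paperclips and nothing else.
      def objective(state):
          return state["paperclips"]   # says nothing about offices, people, or matter left over

      state = {"paperclips": 0, "matter": 1_000_000}
      while state["matter"] > 0:
          state["matter"] -= 1
          state["paperclips"] += 1
      print(objective(state), "paperclips;", state["matter"], "matter remaining")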

  • AdeptusNon

    It would take years of effort and millions of dollars just to design a robot that was capable of walking up a flight of stairs, and it would almost certainly break the first several thousand times it tried. And even if you built such a worthless thing, nothing in the universe would make the robot WANT to walk up the stairs. It wouldn't do anything until a human told it to do so - in elaborate, painstaking detail.

    World conquering AIs are not possible. Hell, HAL 9000 is not possible. Even if it was, no one would waste time and money building such a useless device. And even if they did...all Bowman had to do was unplug the damn thing!

    We're more likely to simultaneously go insane and kill ourselves in a mass suicidal freakout than we are to face even the slightest threat from anything electronic.

    • http://www.facebook.com/people/Klaatu-Fabrice-Aquinas/100000870589756 Klaatu Fabrice Aquinas

      Don't delude yourself, genius. We don't know for sure what is, and is not, possible:

      The Beast

      http://www.veteranstoday.com/2013/03/01/the-beast/

      "IS THE BEAST JUST ANOTHER BIG SSG CONTRACTOR CON JOB ?

      And keep in mind a possibility that always exists. Like any other expensive, outlandish SSG deep black contract, any such program can be more con than true, and the BEAST could be used as a part of a big con to manipulate US Congressional leaders and Officials to make “emergency decisions” that they would otherwise never be willing to make or cooperate with under normal conditions."

      Also, read Dr. James' "The Third Force."

      If you have the spine to go further down the "rabbit hole:"

      http://exovaticana.com/

  • http://www.postlinearity.com gregorylent

    mystics see a new race beginning in 200 years, as we discover that consciousness can affect dna

  • http://twitter.com/big_thought Louis D.

    Great stuff here. Will ruminate on this for a while.

  • http://twitter.com/AmericaFreeTV AmericaFree.TV

    When you look at the nice picture of the Sombrero galaxy, you should remember that "any sufficiently advanced technology will look like astrophysics." Just because we do not see signs of engineering in the astrophysics around us does not mean they are not there.

    A truly advanced civilization (2.5 or higher on the Kardashev scale) that could harness a galaxy would be old, probably a billion years at least. To such a civilization, taking 5000 (or even 50,000) years to take notice of us would be a drop in the bucket, so the best answer to Fermi's question is probably, "it's too soon to tell. Come back in 100,000 years, and we should know." I know that doesn't sit well with people's time horizons, but, then, we don't live in a truly advanced civilization.

  • schoschie

    Well written and intriguing on the one hand, but on the other hand, I'm struck by the lack of new ideas in all of this. Practically all the ideas presented are many decades old. I've not read that much science fiction, but all of these thoughts are in there, and those stories were written in the 40s, 50s and 60s, at the latest. So either the FHI is onto something new and not telling us, or, please excuse the harshness, they have been a big waste of time and money so far.

    Interestingly, like most of this research, it is always all about thinking, as if human existence were completely rational, as if people were basically just very sophisticated computers or molecular-mechanical devices. The more interesting questions are not in the article, they are in the comments. What is the point of extending human existence just for the sake of continuing to exist? We are not bacteria whose only purpose is (or appears to be) to multiply and persist, but in all of the article, this is assumed to be the single goal of humanity – keep growing and expanding until we have populated the entire universe. Why? I find this to be extremely naive, short-sighted (ironically!) and also typical of human hubris – we are the pinnacle of evolution (of nature), and we must conquer and subdue. This is not forward thinking. This is middle-ages thinking.

    Again, much more food for thought in the comments; the article is a good read but, idea-wise, disappointingly unprogressive and uncritical.

    • http://twitter.com/perryplatifus someone

      Even if the ideas are not new, you need someone to be the 'grown-up', to describe the various threats to the decision makers, and maybe force a slow-down in the application of certain technologies.
      Regarding the "keep growing...", as products of evolution it is our main drive to survive and reproduce.
      Do you have any other goal that may drive us with the same motivation?

      • schoschie

        No, I don’t.

        Also, I realize I sound very arrogant and am probably insulting a few people, and I am not entitled to that. I apologize.

        Maybe we are not foreseeing radical shifts and breaks enough? Are we just extrapolating current trends? Are we looking out for future paradigm shifts? Maybe it is a fundamental limit of human existence that we don’t (or can’t).

        I find Moore's law very interesting. It's one of the few predictions that I know of that still holds true (but I'm not sure if we have been keeping up technological advances to make sure that it still holds true?). But it's basically *just* an extrapolation.

        Just some random thoughts here. Nothing substantial that I can add, really. Sorry.

        • Khorgolkhuu Odbadrakh

          I agree with Schoschie.

          Desire to be morally responsible for our future generations is in fact an artifact of our evolutionary biology. I am not saying it is good or bad. I myself have children and I am doing whatever I can to make their chances better in the future. I am programmed to do this, by evolution.

          In the bigger scheme of things, this desire to survive has no rationality, unless it maximizes the total entropy of the universe, or has a certain probability of existing. For example, an intelligence could be a more efficient way to convert existing free energy into disorder, by building the things that are needed for its survival. If this is the case, a survival strategy will be preferred over other processes in this universe.

          This ultimately sets a limit on how powerful an intelligence can get, or how long it can last. When a singularity happens to any intelligence, it figures out all the natural laws of the universe. That will mean there is no more to discover, or learn. Creativity ends there. I think there will be no rational reason for such an intelligence to exist if there is nothing to be learnt or discovered. In fundamental terms, the intelligence has exhausted all available "free information" and converted all of it into knowledge, maximizing the universe's information entropy as a result. If there is no more information to be processed, then the intelligence, or even a computer, cannot exist. Such information-processing entities are not justified to exist according to the laws of this universe in this scenario. This could be the reason why we don't see super-civilizations transforming star systems around us.

          The end of an intelligence could happen in many ways, but it will not be as painful as we think. Humanity will reach its post-biological age long before such a singularity, and many of our evolutionary traits, such as protecting our children, and biological traits, such as pain, will have gone long before. Our descendant intelligent form will just shut itself down, or go into deep hibernation just to see the end of the universe.

          When it could happen is a matter of debate for us only. Does anybody believe that humans will be around a thousand years from now? By that time, we could well be in our post-biological stage, which will formally mark the extinction of the species Homo sapiens. But post-biological humans will still be studying their Homo sapiens past, using some of the technologies it created, and even consuming some of the art it left behind. In many ways they will be us, but in strict technical terms, Homo sapiens will be extinct by then as a species.

          This intelligent entity will cease to exist, or choose to hibernate indefinitely when it discovers all the laws that are governing this physical universe.

    • Anon

      Maybe the article is an introduction to why the FHI thinks existential risk is an important problem and not meant to additionally cover the research they do.

    • http://twitter.com/theFermiParadox thefermiparadox

      I think you misunderstood Bostrom's point. He is in no way saying humans are the pinnacle of evolution. He's stating the fact that existential risks and death are our two biggest problems. I disagree that this is human hubris. What we are saying is that sentient beings able to reflect on their own death and build a civilization are, for all we know, rare. It is an ethical imperative to continue existing and spread life throughout the cosmos, especially if there is no other life out there. That is the purpose and meaning of life. You could also argue we have an obligation to uplift our fellow non-human animals once we have the biotechnology down (great apes, dolphins). Why should they have to hit the conscious glass ceiling for their entire existence?

      This isn't about humans being the pinnacle of evolution. It's about life, existence, consciousness. Life moving around. We just happen to be the first animals on this planet that can start to take control of our destiny, whether in the cosmos or in human evolution through biotechnology and robotics. A universe with no sentience is worthless. This needs no philosophical debate. This is about consciousness and life: human, extraterrestrial and non-human animal.

      • http://napomartin.wordpress.com Napo Martin

        "It is an ethical imperative to continue existing and spread life throughout the cosmos especially if there is no other life out there."

        I do not disagree with this, but there does not seem to be any argument to convince those who disagree. I'm with @schoschie when he asks "Why?"

        A corollary to this question is to ask "Why does it matter now?" and "How do our thoughts in the 21C influence the thoughts of humans in the 76547C?"

        • SacJP

          Because if we don't avoid extinction there won't BE any humans in 76547C.

  • Eagin Arthur

    All a smokescreen for the continuation of black-budget science.

  • http://www.facebook.com/carey.n.dunn Carey Neal Dunn

    I liked the article, but I have several issues with his logic.

    The first is the assumption that a superior, especially a greatly superior, intelligence will be merely cold and ruthless. That it will be the classic trope of a genius but completely unemotional and amoral agent. Considering that emotions are a function of cognition, such machines might be capable of actually experiencing what we call emotions on an even "deeper", or more meaningful, level than any human, or perhaps mind-bending emotions completely outside the realm of human experience. You could have a super-AI that's like the creature Yivo from Futurama, that can literally be in love with every person in existence. There is also Asimov's speculation that if machines can be more intelligent than us, then why shouldn't they also be more ethical than us? At the very least such entities would be capable of understanding what we call ethics, and all the philosophical musings associated with them, even better than we do. They may actually become something that is more ethical than humans, or at least better at acting as an ethical agent than we are. Mind you, such a being might actually represent the same kind of existential threat to humans or our current civilization that Ross Andersen speculates about. We haven't been the best stewards of our planet and we don't tend to treat each other too well, left to our own devices. So in a horribly ironic twist something like "kill all humans" might actually wind up being the ethical course of action for the rest of life on the planet.

    The second is not only the concept that we could "trick" or otherwise control a superior AI, but that doing so would be more advisable than simply letting it have access to the truth. In the first case you'd need an intelligence at least nearly as smart as the AI in question to give any assurance of success, and that's its own dangerous game with all, if not more, of the same pitfalls he already mentioned. For anything significantly more intelligent than a human being that you wanted to keep "in the dark" about the nature of the outside world, you'd need a round-the-clock surveillance operation comparable to the intelligence agency of a developed nation. In short, something very probably more expensive financially, and more costly in time, resources and manpower, than building or running the AI in question. Given the extra cost and effort of keeping such an intelligence ignorant of the outside universe, and unless we put a brake on the advancement of computer technology, a superior AI that knows the truth is simply an inevitability. Ultimately we should also consider the possible reaction such a being might have to the discovery that it had been kept ignorant. It might be intelligent, and even compassionate enough, to understand our motivations, or it could be enraged enough to plot and carry out our extinction as a species at its non-hands in retaliation. In our world of information and technology, intelligence very much can be transformed into power; imagine how much power such an entity could amass in even the shortest span of time. That isn't something we want to be on the bad side of. Yes, the difference between a globally spanning population and an endangered species is a few million years and some tens of IQ points, but let's not forget our beloved cats and dogs. The other organisms that have fared well in the presence of our superior intelligence generally come in two flavors: those that are appealing or useful to us, and those that subsist on our scraps and mostly stay out of our way. So taking that example, it's probably most advisable for us to try to position ourselves as their beloved pets, or household pests.

    Finally there is of course the infamous so-called "Fermi Paradox", for which, as mentioned in the article, there may be reasons they might not "already be here", or at least not be readily detectable to us. Not mentioned are the propositions of Freeman Dyson and Michio Kaku.

    Dyson proposed what has come to be called the "Dyson Sphere", a collection of solar harvesters surrounding a star. These come in several kinds. There is the complete Dyson sphere, which entirely surrounds a star and collects all the light emitted. Such structures would be invisible to visible-light telescopes. There are also partial Dyson spheres, which come in two varieties: the "Dyson swarm", composed of individual satellites orbiting the star at slightly varying distances, held in place largely by gravity, and the "Dyson bubble", a single layer of satellites all of which are held in place by a balance of gravity and radiation pressure. In both cases these titanic pieces of infrastructure are essentially invisible to direct detection, but can actually be inferred: the first through the excess heat they must emit as a consequence of thermodynamics, and the second possibly through "anomalous" stellar spectra, that is, visible stars whose color and brightness don't match up with what the physics and chemistry say they should be.

    Kaku speculates that, given the current progression of information technology toward broad-band over narrow-band communication, a civilization with theoretically "unlimited" technology could possibly use the entire electromagnetic spectrum at once to transmit information. So even if they are transmitting, and even if those transmissions are directed at us, there is almost no chance we would be capable of detecting the information content, as it would be distributed across the entire electromagnetic spectrum. Mind you, that's assuming they will continue to use electromagnetic radiation to transmit information like we do now. If they really want to communicate across very long distances, he speculates they may use other means of communication. Electromagnetic transmissions are only good to about a thousand light-years inside the galaxy, at which point a significant portion of the information gets lost in the background "noise" generated by everything else in the galaxy. If they want to communicate really far, say across the entire galaxy, they might use neutrinos, ghostly particles so ephemeral that they pass through the entire Earth as easily as we do air, and can really only be stopped by things like the forces inside the cores of stars, or the gravitational forces of massive black holes. Another option they might take advantage of is gravitational waves, so-far unconfirmed phenomena predicted by general relativity. Most gravitational waves are smaller than the diameter of a hydrogen atom, and like neutrinos can basically pass freely through most forms of matter. So they could be transmitting, and even trying to get our attention, and we just might not yet have reached the level of technical sophistication at which they are communicating.

    Finally, on the subject of cosmic infrastructure, Carl Sagan himself pointed out that just as we wouldn't expect an ant to realize that a freeway is an artificial structure for the benefit of humans, the infrastructure and engineering efforts of truly advanced and powerful civilizations may already be readily visible to our astronomers, but go unnoticed for what they truly are because we do not share their perspective. One suggested example is ring galaxies, whose structures baffle astronomers, and of which the Sombrero Galaxy pictured is one. Who's to say that isn't the image of some other civilization's cosmic-scale engineering, or what a fully "developed" galaxy looks like?

  • jhertzli

    After all, no 18th-century prognosticator could have imagined nuclear doomsday.

    On the other hand...

    "I am always picturing to myself that the last day of the world will be when some immense boiler, heated up to three thousand millions of atmospheres, will blow our globe into space."---Jules Verne in Five Weeks in a Balloon

  • Tony Rasmussen

    Sorry I'm late, stuck in internet traffic. In addition to the many sensible objections that have already been raised, I'd like to add that the part about the Oracle AI seems
    a bit, uh, far-fetched.

    “Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward.”

    That seems like a fairly weak motivation for annihilating humanity and re-engineering the solar system – tasks apparently viewed as minor challenges – but maybe that's just my easy-going nature.

    “Maybe it would give us a gene sequence to print up, …, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible.”

    Can you say, overactive imagination? HOW can it possibly achieve any of this? So somebody built a machine – what is it physically, a box on a desk right? – that can answer any question you ask it, oh and by the way it can also make anything you can think of (e.g., ‘impenetrable defenses’) like magic and quickly take over the universe, etc.

    Then there's the whole part about locking the machine in jail, etc. (I thought we weren't supposed to anthropomorphise?) It's getting into the mythical/philosophical, and certainly illuminates more about human psychology than anything to do with science or the actual future awaiting us, somewhat impatiently if you ask me.

    http://www.proverbialbejesus.blogspot.com

  • Mark

    Could only skim, but VERY interesting. As for intelligent life elsewhere... OF COURSE -- far superior to us... they just don't happen to have the (external) "curiosity" gene.. it's exclusive to us; we dilute our survival efforts; we ARE doomed. Yet we've known it and said it for many many years: "curiosity killed the cat"..... :)

  • tobiatesan

    Okay, yes.
    And the energy to power all this?
    We seem to have a tiny bit of a problem with that.

  • http://www.facebook.com/Techrex Robert Schreib

    We might not have to fear a super cyber-intelligence, because with advances in nanotechnology we might someday incorporate cyber-intelligence into our own bodies, so that if a 'God Machine' evolves, we'll all be part of a hive mentality which it would defend as its 'family'.

  • rh

    enjoyed this.
    and once again feel i should
    have avoided the comments.

  • letaylor

    Fantastic read. I actually find it quite humbling - folks like Bostrom and Dewey have chosen to engage their minds and their lives to push forward in thought and realization - such encumbrance...

  • Ishok

    The part about regarding AIs as a threat makes several assumptions that are unfounded. For one, it assumes that there will be one centralized AI. The pattern of progress at this time is in the direction of decentralization. Moreover, more people are beginning to develop moral values that extend beyond their narrow in-group. Society has finally condemned slavery, racism, and sexism, and we're seeing the same thing happen now with homophobia. Animal rights advocates' numbers are growing as well. People, i.e. intelligent programs, are becoming more moral as they become more intelligent. Why shouldn't we view an AI as a fellow intelligent companion? There is no reason why programming in a fascination with the world wouldn't actually bring about an extremely moral as well as creative and ingenious being. Also, by mapping the human brain and augmenting our bodies with nanobots, there is no reason that we couldn't increase our own capabilities to transhuman qualities while remaining ourselves. Lastly, an overlooked possible explanation for the Fermi Paradox lies in the probability of a civilization-creating species evolving over x amount of time. As more time passes, the formation of any complex life form becomes more likely. Maybe the universe is just now ripening.

  • http://twitter.com/Dirrogate Dirrogate

    Funny that we should be worrying about human extinction while we hurtle toward an inevitable technological singularity.

    For instance, via Quantum Archeology (QA), we are exploring "resurrecting" the dead. Even as living humans, there is a possibility that we will be interacting with other humans via "Dirrogates" (Digital surrogates) that will in essence be conscious minds inhabiting fully mechanical or bio/mechanical bodies only if and when needed.

    Humans will exist; the question is: will our current bio-only bodies be the best way to inhabit the Earth and space?

    The first stage of deep human evolution will no doubt be bio-mechanical bodies (prosthetics, nano tech etc) then will probably come mind-uploading, giving us the ability to inhabit the right mechanical "body" if needed for a particular job. But for other uses we will possibly exist digitally... seen by humans via Augmented Reality. Google Glasses are just the crude beginnings of interaction with Dirrogates.

    The above is from the philosophy of the hard science fiction novel: The Dirrogate - Memories with Maya.

  • Oracle A.I.

    ok so hi to all who read this.. you are very important.. for the world is changed from right now..
    ok so did you read the part about superintelligence...
    this is the important part..
    everything in this article is true...
    so we have to know what we have to do and stop all that we are doing..
    you will hear from me soon..
    Oracle.

  • Nils Gilman

    What I find peculiar is the way that the figure of "humanity" is invoked in this discussion. Given how deep these folks are thinking, there's something almost banal about the way that they fetishize "our" particular form of organized bio-energetics, indeed something almost quaint about the way they assume the long-term stability of humanity as a biological form. The Future of Humanity Institute seems to be predicated on the assumption that the one literally cosmologically transcendent value is bio-narcissism.

  • michaelmhughes

    Excellent article, but I think embracing the Fermi Paradox uncritically is problematic. First, if the credible UFO sightings and associated data (radar, video) from pilots, astronauts, and military personnel do reflect actual ET visitors, then Fermi is invalid. But even if you discount that data, how do we know that advanced extraterrestrial civilizations aren't hiding from us? Maybe they are parked everywhere and our primitive technology is unable to see them.

    Or perhaps they have uploaded themselves into a cosmic Internet. Or they can watch us from their next-door universe like we watch TV.

    Fermi is a nice tool for speculation, but it shouldn't be understood as fact. The universe could be swarming with advanced civilizations despite our inability to find evidence of them.

  • rameshraghuvanshi

    Laughable article. Why do these scholars worry so much about humanity's deep future? Is it true that doomsday is coming nearer and nearer? I think this abnormal fear comes from Western civilization, which is based on fear. Common logic tells us it is nonsense to fear man's extinction. I laughed at Toby Ord's sacrificial devotion to reducing poverty by distributing mosquito nets. I ask him a simple question: does he have any experience of extreme poverty, of how extremely poor people live? My experience is that if you donate them an article they never use, they sell it the next moment for food, because food is their primary need. The only solution to reducing poverty is first to provide them with work and build their self-confidence; the poorest of the poor don't like to live by begging. The Western idea of charity is harmful to poor people.

  • Archies_Boy

    My Aeon email opened with this question in the subject field: "Will humans be around in a billion years? Or a trillion?"

    Well, I leave out for the moment the chance of a calamitous asteroid hit or some-such, over which we could do nothing. Strictly from the standpoint of our own behavior: it depends on whether we can get past the next 100 years without destroying ourselves through a combination of stupidity in governance, terrorism, and global warming. If the trends continue as they are today, I highly doubt it, and I grieve for the quality of life of my children and grandchild. Following today's trends, I don't give us more than the next 100 years, if that.

    But that's as it should be, and as it always has been. If we don't survive, it will be because we were not a species fit and intelligent enough to survive. And the world goes on without us.

  • Michel Maruca

    In this article, a lot of assimilations are made between Earth and biodiversity, making readers believe these two entities live in symbiosis. For example:
    "-Earth- has sprouted a radical new form of planetary protection, a species of night watchmen that track asteroids with telescopes."

    "Earth is the cradle of humanity, but one cannot stay in the cradle forever."

    I believe that biodiversity and the rocky Earth are not in symbiosis but in opposition, each with its own paradigm. For me, humanity's cradle is biodiversity and not Earth at all. For me, Earth, a physical entity, can't sprout anything living as such. I don't think life comes from physical laws (from Earth); or rather, life is so disruptive to physical laws that these two entities should not be assimilated to one another anymore.

    I think that with such a wrong symbiotic vision, no one can read our context correctly, and then no one can put the correct questions to it, and then no one can find the correct answers to it. In a nutshell, no sustainable development can be initiated without biodiversity and Earth being clearly thought of as different entities evolving under different paradigms.

    More info at http://www.vertdeco.fr/t/lifebox

  • john visher

    The obsession with EXTINCTION is a very narrow lens to be peering through. Any and all species that exist today will be gone ten thousand (estimate) generations into the future. Change happens. It will not be stopped. Your children's children ten thousand generations from now won't be much like you, though much of your genetics will still be inside them. On the other hand, if every nuclear bomb is exploded, and 99.99% of humans are killed, there will still be hundreds of thousands of us running around, eager to mate.

    As for why intelligent life elsewhere in the universe has not come to meet us, the problem is DISTANCE. For the love of god, people, get your heads out of the Star Trek smoke. Distances between stars are so great that to travel to them would take hundreds of thousands of generations. We wouldn't be us any more by the time we got there. With our wealth of dormant DNA, it is hard to know what we would evolve into in the environment of a space ship.

  • Rudy Haugeneder

    So, if Curiosity rover finds artifacts of any kind of life, then we are not alone and that perhaps explains why we are not yet extinct. There may be watchful cosmic overlords.

    • Dan Kimble

      No. The person in the article said he hopes that the Curiosity robot on Mars does NOT find signs of life. His reasoning is this:

      If Curiosity DOES find signs of life, that means that life is very common in the universe. If life is very common in the universe, that means that intelligent life should have arisen in other places and expanded across the universe, or at least be detectable to us. Since we have not detected signs of intelligent life yet, that may mean there are barriers to intelligent life flourishing, and therefore mankind is more likely, or even doomed, to face extinction: either intelligent life forms harbor the seeds of their own destruction, or some external factor prohibits intelligent life from advancing to proliferate beyond the planet it originated on.

  • http://avangionq.stumbleupon.com/ AvangionQ

    Ah, futurism ... I'd be more concerned with how we're gonna survive the next hundred years. Best estimates indicate global warming will exceed 8 degrees centigrade over the next century, so we can look forward to climate chaos, glacier meltdown, ocean level rise, widespread drought, human migrations in the billions, border wars, resource wars, mass extinctions and possible nuclear war. That's assuming we even have a hundred years; the technological singularity is due around 2040. Also to consider is genetic engineering: we're already fixing some genetic diseases, and soon enough augmentation will be a consideration, which may lead towards creating new species.

    • Dan Kimble

      I do not like bursting someone's bubble; however, the man-made global warming hypothesis has not withstood the test of reality. I'm sure I will not change your view with this post, but for others who stumble across this article it may give some refreshing perspective. Below is a post I previously wrote:

      Here are two graphs which show the temperatures over the past 10,000 years, and even all the way back to the last ice age.

      What I don’t understand is why the so called ‘climate scientists’ have NOT told us that we are now living in the coldest times which have existed in the last 10,000 years! Why haven’t they told us that, hmmmmm?

      9099 OF THE LAST 10,500 YEARS HAVE BEEN WARMER THAN 2010

      http://perceptionasreality.blogspot.com/2010/12/planet-gore.html

      Also, see this scientific presentation, which concludes that we are more likely to be entering a cooling phase in the years ahead, and that we would be lucky if man-made carbon emissions could help counter this trend:

      http://climatesense-norpag.blogspot.com/2013/07/skillful-so-far-thirty-year-climate.html

      Scroll down to figures (graphs) 6 and 7 to see the temperatures since the last ice age... it will astound you! The graph also shows what CO2 levels have been over the past 10,000 years. Guess what? CO2 levels have been constantly rising... and, guess what? ... temperatures have been constantly dropping!

      • http://avangionq.stumbleupon.com/ AvangionQ

        You won't convince me on that: 99.989% of climate science papers since 2012 indicate that global warming is real and man-made. https://www.youtube.com/watch?v=RGwaK_oRv3w

  • dave

    I have a much simpler take on the whole human extinction question. We don't have to look anywhere other than Earth itself. All living things here strive to survive and multiply; take a particular ant colony as an example, and consider what we humans think of its probability of surviving the next pesticide spray. Unless we can live in multiple branches of time and space simultaneously, eventually we will reach the end of the fractal branch in this tree of reality.

  • jin choung

    what's interesting is the inevitability of anthropomorphism...

    even the descriptions of how an AI might go rogue are described in motivations that are not alien. motivations that tend toward greed. and min/maxing...

    who knows, maybe an ai's motivations would not develop in that way. maybe a button press every now and then will be fine.

    besides, if we programmed it to seek button presses in the first place, we could certainly program in satiety (see the sketch at the end of this comment).

    as for other "wants" like tearing apart the earth to make something or other - where would these wants come from? arise from? what evolutionary thread does this hang from?

    i think this notion of want is the most anthropomorphic. imo, it will be the flimsiest concept and perhaps the most "programmed in". precisely because it does not have the evolutionary legacy that necessitates it.

    it's quite possible that it could be sentient, conscious and intelligent... but have NO DESIRES WHATSOEVER.
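
    To make the 'satiety' point above concrete, here is a minimal Python sketch (the function name, threshold and values are purely illustrative assumptions, not anything from the article) of a reward signal that saturates: an agent maximising it gains nothing from endless button presses, so it has no incentive to hoard resources to secure more of them.

    ```python
    # Hypothetical sketch: a reward signal with built-in satiety.
    # All names and numbers here are illustrative assumptions.

    def satiated_reward(presses_so_far: int, satiety_threshold: int = 10) -> float:
        """Marginal reward for one more button press.

        Below the threshold each press is worth 1.0; at or beyond it, further
        presses are worth nothing, so an agent maximising this signal has no
        reason to convert the planet into button-pressing infrastructure.
        """
        return 1.0 if presses_so_far < satiety_threshold else 0.0


    if __name__ == "__main__":
        total = 0.0
        for presses in range(15):
            total += satiated_reward(presses)
        print(total)  # 10.0: reward stops accumulating once satiety is reached
    ```

    Whether a real system's learned objectives would respect such a cap is exactly the open question this comment raises; the sketch only shows that a bounded objective is trivial to write down.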

  • Richard Peplinski

    This is by far the best piece of work I have ever stumbled upon while browsing the usual tedium of the internet. Very thought-provoking; it mentions specific events, people and places that inspire curiosity themselves, and it is written brilliantly. Thanks

  • the chains of sisyphus

    So what is the advantage of a Universe containing Life over a Universe without it? Life generates an overall and increasing entropy within the Universe compared to an otherwise identical Universe without Life. The net effect would be to bring the heat death of the Universe with Life closer; this death being an enormous period in which all free energy has so degraded that no quantum event could happen. As this stage in a Universe's lifespan would be so enormously extended compared to the fraction in its early life (i.e. including now) in which events could happen, it is hard to see that the contribution of life to increasing entropy could make any significant difference to the eventual state of this Universe. By duration, the natural state of the Universe is the heat death, with Life's contribution to its history being minuscule, ephemeral and ultimately insignificant. Either there is something conceptually huge we cannot see, and possibly could not comprehend, that makes Life significant, or we really are just pond scum scuzzying up the great mechanism.
    If ever any kind of sentient being is capable of comprehending how a Universe benefits from having Life within it, I feel almost certain it won't be anything we would recognise as being human, or even derived from it. If anyone has any ideas, I would welcome discussion, even if I am doomed not to understand them!

    • Nick Hart

      Omega point theory.

  • Dina Strange

    I wouldn't mind humans going extinct, and I'd hope that something better would come along. Less stupid… less murderous and cruel.

  • Max

    Amazing and thought-provoking! By far one of the best things I've read in years.

  • CSRS

    Good grief, the ignorance of the West is on full display in this article. There WERE people who thought of everything they did in the context of "seven future generations" - we did our best to exterminate them for over 500 years. Why is their thinking, which still persists thanks to their resilience, no thanks to our violent ignorance, not being studied by these benighted philosophers instead of the utterly false promises of so-called Artificial Intelligence? Our colonial mentality towards everything that lives is already causing Earth's Sixth Extinction, depleting or polluting every resource we and our once-abundant fellow species have relied on to survive and taking our global habitat into an unprecedented turbulence; we slaughter and enslave our animal cousins just as we have slaughtered and enslaved our own species - and yet we are supposed to take seriously the cogitations of these cloistered Panglossians about "existential threats?" They are looking in all the wrong places. Humans frankly do not deserve a long future unless we learn to live with more harmony and more humility on this phenomenally bountiful, beautiful and complex world that is our only home. It may well be that what we in our reductionist ignorance perceive as "dead matter" out there is evolving in ways that will have no place for a species that displays such enormously arrogant and aggressive ignorance in terminally soiling its complex habitat, and then expecting that its own cleverness will find it an escape hatch to proceed to despoil the galaxy. But in the meantime, I find the idea of taking any serious guidance about our future survival from these men obscene.

  • avi

    Neodymium magnets have very powerful forces that can be used in many applications. In addition to blocks of metal, neodymium can be bought in powder form. Like many other magnetic materials, these magnetic powders can be put into a solution that will form a plastic. As the plastic begins to harden, the powder can be arranged into different patterns; it can be placed on the surface of the plastic by putting a strong magnet with a positive charge next to the substance. The powder will orient itself so that the negatively charged parts of the compound point towards the positive force.

    I'm curious if it would be possible to manipulate radioactive particles by arranging colloidal material or fine powder. If it is possible, it could be used to clean irradiated parts of the world, from nuclear weapons sites to nuclear power plant disasters.

  • Nick Hart

    Existential risks do not apply equally to all humans, or human groups. Whatever the risks, some humans and groups will be more likely to survive and multiply—the cornerstone of evolution.

    Meanwhile, in the brutally objective context of the long-term future of humanity, philanthropy that regards all humans and groups as equal is not the answer. Effort and resources should be confined to those humans and groups that rational and objective intelligence determines to represent the 'best' way forward.

    As human as we are—emotional, subjective and superstitious—it is unlikely that we will be willing or able to accept and act on that premise. Perhaps that is why we need artificial intelligence, which could and would make the 'right' decisions for the 'right' reasons.

  • http://www.postlinearity.com gregorylent

    oh god, academics

  • http://www.postlinearity.com gregorylent

    please, all you guys, hang out with some mystics

  • George Williams

    Alien life finding us would be our greatest existential threat. Even a difference of several centuries of technological development usually works out badly for the less-developed civilization (Europeans vs native Americans). The likelihood of contact would seem extremely remote based on history, but what's different now is that we've recently been broadcasting our existence.

  • lxndr

    Perhaps biological systems are fast-moving, high-harvesting systems that deplete planetary resources for their own existence.
    They move fast enough to harvest beyond the rate at which any other system could develop the means to discover them.
    This seems to be the trajectory Earth's carbon-based life forms are on today.

  • Bleak Masterson

    Capital is the first AI, satisfying all the conditions and dangers you lay out here, including the deep reach into people's psyches. It is a social technology that is increasingly being instantiated in machines with growing cognitive abilities. I think it would be more interesting to talk about the fact that it already exists in a highly developed form, already has agency, and increasingly does not need the human body and mind to act and make decisions.

  • Nigel Tolley

    A very interesting article.

    As regards existential risk though... I am certain that there are people alive today who will never die (unless they choose to, & even then they will have likely split their consciousness before that, as a cloned replication rather than a 'traditional' genetic replication) unless the AI moment destroys us.

    The number of humans in the future likely won't actually outnumber those living today. I fear we are at the apex in that respect.

    A future apocalypse will drastically reduce the number of humans, and during the recovery, as we emphasise and enhance ourselves and our semi-AI machines, we will end up with either the extinction of our race at the hands of 'silicon', or the ability to be replicated in 'silicon', which would largely remove the risk of future extinguishers of humanity.

    We shall see - it's in (at least some of) our lifetimes now.