Robot evolution

Hod Lipson’s artificial organisms have already escaped from the virtual realm. Now he wants to send them out of control

A quadrupedal robot used to help evolve gaits. Courtesy Cornell Creative Machines Lab

Emily Monosson is an environmental toxicologist at the University of Massachusetts Amherst and the author of Evolution in a Toxic World.

In a laboratory tucked away in a corner of the Cornell University campus, Hod Lipson’s robots are evolving. He has already produced a self-aware robot that is able to gather information about itself as it learns to walk. Like a Toy Story character, it sits in a cubby surrounded by other former laboratory stars. There’s a set of modular cubes, looking like a cross between children’s blocks and the model cartilage one might see at the orthopaedist’s – this particular contraption enjoyed the spotlight in 2005 as one of the world’s first self-replicating robots. And there are cubbies full of odd-shaped plastic sculptures, including some chess pieces that are products of the lab’s 3D printer.

In 2006, Lipson’s Creative Machines Lab pioneered the Fab@home, a low-cost build-your-own 3D printer, available to anyone with internet access. For around $2,500 and some tech know-how, you could make a desktop machine and begin printing three-dimensional objects: an iPod case made of silicone, flowers from icing, a dolls’ house out of spray-cheese. Within a year, the Fab@home site had received 17 million hits and won a 2007 Breakthrough of the Year award from Popular Mechanics. But really, the printer was just a side project: it was a way to fabricate all the bits necessary for robotic self-replication. The robots and the 3D printer-pieces populating the cubbies are like fossils tracing the evolutionary history of a new kind of organism. ‘I want to evolve something that is life,’ Lipson told me, ‘out of plastic and wires and inanimate materials.’

Upon first meeting, Lipson comes off like a cross between Seth Rogen and Gene Wilder’s Young Frankenstein (minus the wild blond hair). He exudes a youthful kind of curiosity. You can’t miss his passionate desire to understand what makes life tick. And yet, as he seeks to create a self-assembling, self-aware machine that can walk right out of his laboratory, Lipson is aware of the risks. In the corner of his office is a box of new copies of Out of Control by Kevin Kelly. First published in 1994 when Kelly was executive editor of Wired magazine, the book contemplates the seemingly imminent merging of the biological and technological realms — ‘the born and the made’ — and the inevitable unpredictability of such an event. ‘When someone wants to do a PhD in this lab, I give them this book before they commit,’ Lipson told me. ‘As much as we are control freaks when it comes to engineering, where this is going toward is loss of control. The more we automate, the more we don’t know what’s going to come out of it.’

Lipson’s first foray into writing evolvable algorithms for building robots came in 1998, when he was working with Jordan Pollack, professor of computer science at Brandeis University in Massachusetts. As Lipson explained:
We wrote a trivial 10-line algorithm, ran it on a big gaming simulator which could put these parts together and test them, put it in a big computer and waited a week. In the beginning nothing happened. We got piles of junk. Then we got beautiful machines. Crazy shapes. Eventually a motor connected to a wire, which caused the motor to vibrate. Then a vibrating piece of junk moved infinitely better than any other… eventually we got machines that crawl. The evolutionary algorithm came up with a design, blueprints that worked for the robot.
The computer-bound creature transferred from the virtual domain to our world by way of a 3D printer. And then it took its first steps. The story splashed across several dozen publications, from The New York Times to Time magazine. In November 2000, Scientific American ran the headline ‘Dawn of a New Species?’ Was this arrangement of rods and wires the machine-world’s equivalent of the primordial cell? Not quite: Lipson’s robot still couldn’t operate without human intervention. ‘We had to snap in the battery,’ he told me, ‘but it was the first time evolution produced physical robots. It was almost apocalyptic. Eventually, I want to print the wires, the batteries, everything. Then evolution will have so much freedom. Evolution will not be constrained.’
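
The algorithm itself need not be exotic. The Python sketch below is a minimal illustration of the same idea, not Lipson and Pollack’s actual code: the genome of numbers standing in for a body plan, the toy fitness function standing in for a physics simulator, and every parameter value are hypothetical stand-ins, but the loop of mutate, test and select is the whole trick.

```python
import random

GENOME_LEN = 8        # hypothetical: e.g. actuation phases of a toy "robot"
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for a physics simulator scoring how far a design crawls.
    # Here: a toy function that happens to reward alternating actuation.
    return sum(g * (-1) ** i for i, g in enumerate(genome))

def mutate(genome):
    # Each gene has a small chance of being nudged by Gaussian noise.
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Keep the better half, refill the rest with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print("best fitness after evolution:", round(fitness(best), 3))
```

Swap the toy fitness function for a physics engine that scores how far a candidate body actually crawls, and you have the skeleton of the experiment described above; the 3D printer then turns the winning blueprint into hardware.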

In the late 1940s, about five decades before Lipson’s first computer-evolved robot, physicists, math geniuses and pioneering computer scientists at the Institute for Advanced Study at Princeton University were putting the finishing touches to one of the world’s first universal digital computing machines — the MANIAC (‘Mathematical Analyzer, Numerical Integrator, and Computer’). The acronym was apt: one of the computer’s first tasks in 1952 was to advance the human potential for wild destruction by helping to develop the hydrogen bomb. But within that same machine, sharing run-time with calculations for annihilation, a new sort of numeric organism was taking shape. Like flu viruses, they multiplied, mutated, competed and entered into parasitic relationships. And they evolved, in seconds.

These so-called symbioorganisms, self-reproducing entities represented in binary code, were the brainchild of the Norwegian-Italian virologist Nils Barricelli. He wanted to observe evolution in action and, in those pre-genomic days, MANIAC provided a rare opportunity to test and observe the evolutionary process. As the American historian of technology George Dyson writes in his book Turing’s Cathedral (2012), the new computer was effectively assigned two problems: ‘how to destroy life as we know it, and how to create life of unknown forms’. Barricelli ‘had to squeeze his numerical universe into existence between bomb calculations’, working in the wee hours of the night to capture the evolutionary history of his numeric organisms on stacks of punch cards.

Just like DNA, Barricelli’s code could mutate. But he had some unusual ideas about how evolution worked. In addition to single-point mutations, he believed that evolution leapt forward through symbiotic and parasitic relationships between virus-like entities — otherwise it just wouldn’t be fast enough. Maybe, he thought, cells themselves first arose when virus-like creatures started slotting together, like Lego pieces. ‘According to the symbiogenesis theory,’ Barricelli wrote, ‘the evolution process which led to the formation of the cell was initiated by a symbiotic association between several virus-like organisms.’

So far, this doesn’t appear to be the way things happened; in fact, some researchers believe that viruses first emerged after cells. But a few of Barricelli’s findings were not too far off the mark. Once he had ‘inoculated’ MANIAC, it was minutes before the digital universe filled with numerical organisms that reproduced, had numerical sex, repaired ‘genetic’ damage and parasitised one another. When the population lacked environmental challenges or selection pressures, it stagnated. In other cases, a highly successful parasite would cause widespread devastation. These patterns of behaviour are typical of living things, from the simplest cells right up to human beings.
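
Barricelli’s own update rules, which shuffled numbers around a finite universe of computer memory, were considerably more intricate than anything that fits here. The Python toy below is a re-imagining for illustration only (every rule, name and constant is invented, not taken from his papers): genomes copy themselves with occasional mutations inside a universe of fixed capacity, so competition and selection emerge from the shortage of space alone.

```python
import random

UNIVERSE_CAPACITY = 200   # finite "memory" forces competition for space
GENOME_LEN = 16
MUTATION_RATE = 0.02

def reproduce(genome):
    # Copy the genome, flipping the occasional bit ("mutation").
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def viability(genome):
    # Invented selection rule: genomes with more alternating bits survive better.
    return sum(1 for a, b in zip(genome, genome[1:]) if a != b)

# Inoculate the universe with a few random organisms.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(20)]

for step in range(100):
    population += [reproduce(g) for g in population]
    # The universe is finite: only the most viable organisms keep their place.
    population.sort(key=viability, reverse=True)
    population = population[:UNIVERSE_CAPACITY]

print("organisms:", len(population), "| best viability:", viability(population[0]))
```

Take away the selection rule and nothing interesting evolves in this toy universe, much as Barricelli found when his numeric organisms lacked environmental challenges.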

The overall shape of his simulation matched life quite well, and was particularly reminiscent of viruses. Viruses are indeed parasitic: they are obligate parasites, which means that they cannot reproduce without taking over the living cells of other organisms; taken by themselves, they aren’t much more than simple DNA or RNA mechanisms surrounded by a coat of protein. And like all living things, viruses inevitably mutate during replication. But they also engage in some genetic give and take. As they weave in and out of host cells, they might steal host genes or leave their own genes behind (by some estimates, eight per cent of the human genome comes to us by way of viruses). Some even swap gene segments with other viruses, and that speeds things up quite a bit.

When an influenza virus evolves through simple mutation and selection, we call that antigenic drift. Each fall, those of us who submit to annual flu vaccines do so in large part because of drift. But every once in a while, an influenza A virus makes an evolutionary leap — swapping a large genome segment with a very different strain and undergoing what is called an antigenic shift. The flu viruses we fear the most — the novel, pandemic strains — are often the products of such shifts. The newly emergent H7N9 avian flu virus is believed to have undergone an antigenic shift, enabling it to infect humans; to date, it has infected 132 people and killed 39 in China. To pick a more explosive example, the Asian flu outbreak of 1957, another product of antigenic shift, wiped out between one and four million people worldwide. Evolvable computer programs also swap code as they engage in genderless algorithmic sex. As with viruses, the ability to make these exchanges boosts a program’s evolvability.
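
In the language of evolvable programs, that exchange is crossover, or, to stay closer to the flu analogy, reassortment of whole genome segments. A minimal Python sketch with invented placeholder names (not real influenza segments) shows both modes side by side: small local mutations for drift, wholesale segment swaps for shift.

```python
import random

# Two "parent" genomes made of discrete segments, loosely analogous to the
# eight RNA segments of an influenza A virus. All names are placeholders.
parent_a = ["A1", "A2", "A3", "A4"]
parent_b = ["B1", "B2", "B3", "B4"]

def point_mutation(genome, rate=0.1):
    # The counterpart of antigenic drift: small, local changes to a segment.
    return [seg + "*" if random.random() < rate else seg for seg in genome]

def reassort(a, b):
    # The counterpart of antigenic shift: each segment is inherited wholesale
    # from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

print(point_mutation(parent_a))        # e.g. ['A1', 'A2*', 'A3', 'A4']
print(reassort(parent_a, parent_b))    # e.g. ['A1', 'B2', 'B3', 'A4']
```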

And yet, as close to the real thing as Barricelli’s digital organisms came, they were just numeric code: they had a genotype but no phenotype, no bodily characteristics for evolution to sift through. Life on Earth is about tools that solve problems — a beak capable of cracking a tough nut, the ability to digest milk, a robotic leg that can take a step in the right direction. Natural selection acts on the hardware; the software, be it DNA or numeric code, just keeps score. Barricelli’s creatures might have behaved like living organisms, but they never escaped the computer. They never got the chance to take on the outside world.

Not many people would call creatures bred of plastic, wires and metal beautiful. Yet to see them toddle deliberately across the laboratory floor, or bend and snap (think Legally Blonde) as they pick up blocks and build replicas of themselves, brings to my biologist mind the beauty of evolution and animated life. Most striking are the pulsating ‘soft robots’ developed by a team of students and collaborators. Though they have yet to escape the confines of the computer, you can watch in real time as an animated Rubik’s Cube of ‘muscle’, ‘bone’ and ‘soft tissue’ evolves legs and trots exuberantly across the screen.

One could imagine Lipson’s electronic menagerie lining the shelves at Toys R Us, if not the CIA, but they have a deeper purpose. Like Barricelli, Lipson hopes to illuminate evolution itself. Just recently, his team provided some insight into modularity — the curious phenomenon whereby biological systems are composed of discrete functional units, such that, for example, mammalian brain networks are compartmentalised. This characteristic is known to enable rapid adaptation in DNA-based life. ‘We figured out what was the evolutionary pressure that causes things to become modular,’ Lipson told me. ‘It’s very difficult to verify in biology. Biologists often say: “We don’t believe this computer stuff. Unless you can prove it with real biological stuff, it’s just castles in the air”.’
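
One pressure that has been reported to produce modularity in simulated evolution is a cost on connections: candidate networks are rewarded for solving their task and penalised for wiring. The Python sketch below is a hypothetical illustration of how such a multi-objective fitness might be written, not the lab’s code; ToyNetwork and its methods are invented stand-ins.

```python
import random

class ToyNetwork:
    """Invented stand-in for an evolved network; not a real neural network."""
    def __init__(self, n_nodes=8, density=0.5):
        self.edges = [(i, j) for i in range(n_nodes) for j in range(n_nodes)
                      if i != j and random.random() < density]

    def task_performance(self):
        # Placeholder for a real evaluation of how well the network does its job.
        return random.uniform(0.0, 1.0)

    def connection_count(self):
        return len(self.edges)

def fitness(net, alpha=0.01):
    # Hypothetical multi-objective score: reward performance, penalise wiring.
    return net.task_performance() - alpha * net.connection_count()

candidates = [ToyNetwork() for _ in range(10)]
best = max(candidates, key=fitness)
print("edges in fittest toy network:", best.connection_count())
```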

Though inherently newsworthy, the fruits of the Creative Machines Lab are just small steps along the road towards new life. Barricelli always skirted the question of whether his own organisms were alive, insisting that they could not be defined as one thing or the other until there was a ‘clear-cut’ definition of life. Lipson, however, maintains that some of his robots are alive in a rudimentary sense. ‘There is nothing more black or white than alive or dead,’ he said, ‘but beneath the surface it’s not simple. There is a lot of grey area in between.’

How you define life depends on whom you read, but there is a scientific consensus on a few basic criteria. Living things engage in metabolic activity. They are self-contained, in the sense that they can keep their own genetic material separate from their neighbours’. They reproduce. They have a capacity to adapt or evolve. Their characteristics are specified in code and that code is heritable. The robots of the Creative Machines Lab might fulfil many criteria for life, but they are not completely autonomous — not yet. They still require human handouts for replication and power. These, though, are just stumbling blocks, conditions that could be resolved some day soon — perhaps by way of a 3D printer, a ready supply of raw materials, and a human hand to flip the switch just the once. Then it will be up to the philosophers to determine whether or not to grant robots birth certificates.

I’ve been relating some of these developments to friends, and once they get over the ‘cool’ factor, they tend to become distressed. ‘Why would anyone want to do that?’ they ask. We have no real experience with new life forms, particularly of the cyber type, though they abound in books and on screen. Consider Arthur C Clarke’s murderous computer HAL, or Battlestar Galactica’s Cylon babes gone wild — computers built to serve, which evolved to destroy their creators. The more like us our machines become, the more dangerous and unnerving they seem.

But perhaps it is not the creation of new life that we fear, so much as the potential for unpredictable emergent behaviour. Evolution certainly offers that. Take viruses: like Lipson’s machines, these organisms exist in the grey area between life and non-life, yet they are among the most rapidly evolving entities on the planet. They are also some of the most destructive; the Spanish Flu of 1918 killed around 50 million people, and some scientists fear that the emergence of some kind of Armageddon virus is only a matter of time. From this point of view, it doesn’t matter whether viruses are alive or dead. All that matters is that they are highly evolvable and unpredictable.

And here’s where things do get scary. If viruses can evolve within hours, computer code can do it within fractions of a second. Viruses are dumb; computers have processors that might some day surpass our own brains — some would say they already have. If we are going to take the risk of giving machines, in Lipson’s words, ‘so much freedom’, we need a good reason to do it. In Out of Control, Kelly proposes one possible reason. Perhaps, he says, the world has become such a complicated place that we have no other choice but to enable the marriage between the biologic and the technologic; without it, the problems we face are too difficult for our human brains to solve. He frames it as a kind of Faustian pact: ‘The world of the made will soon be like the world of the born: autonomous, adaptable and creative but, consequently, out of our control. I think that’s a great bargain.’

According to Lipson, an evolvable system is ‘the ultimate artificial intelligence, the most hands-off AI there is, which means a double edge. It’s powerful. All you feed it is power and computing power. It’s both scary and promising.’ More than 60 years ago, MANIAC was created to ‘solve the unsolvable’. What if the solution to some of our present problems requires the evolution of artificial intelligence beyond anything we can design ourselves? Could an evolvable program help to predict the emergence of new flu viruses? Or the effects of climate change? Could it create more efficient machines? And once a truly autonomous, evolvable robot emerges, how long before its descendants (assuming they think favourably of us) make a pilgrimage to Lipson’s lab, where their ancestor first emerged from a primordial soup of wires and plastic to take its first steps on Earth?

Comments

  • Sam

    Interesting article, but a little quibble: what definition of 'self-aware' is being used here? Is it being used to indicate that the machine is able to alter its own programming in response to new stimuli? Sorry, just a little hang-up, as most would use the phrase as indicating sentience, such that a machine had developed conscious experiences.

    • Ed Lake

      Hi Sam, thanks for the comment. I edited this piece. Self-aware is being used in something closer to the first of your two senses; perhaps Emily can expand on this, but I believe the point is that the robot has an abstract model of itself that adjusts to new information. Philosophers commonly distinguish between self-awareness and sentience, sentience being very hard to explain and self-awareness being quite straightforward (unless you insist that sentience is a necessary condition for it).

      • Emily

        That's correct, "self-aware" in that the robot is not programmed to walk nor does it know what it looks like (how many legs or how they can move), but is eventually able to develop a predictive model of itself and based on that model move itself forward. If altered, say, by removing a leg, it will develop a new model and learn how to move along with fewer robotic limbs.

        • Larry

          Well, for now the distinction between the two is a necessary accommodation to keep from being burned at the stake or the modern equivalent. Seriously, we are on a track toward these things converging. When the differential equations reach total incomprehensibility, we won't be able to tell the difference.

  • L

    Great article!

    An interesting side note is that 'symbiogenesis theory' is still very much alive and well when it comes to explaining the origin of more complex eukaryotic cells. Check out http://en.wikipedia.org/wiki/Endosymbiotic_theory

    • Emily

      Thanks for the comment. I had a bit in the original about Lynn Margulis. Her work on symbiosis and the origins of eukaryotic cells transformed many fields, particularly evolutionary biology. But, I removed that bit to stay on topic, so I am glad you brought this up.

      I think it is also increasingly clear that there is a lot of gene swapping or sharing going on between species (or more technically, horizontal gene transfer.)

      • theadvancedapes

        "it is also increasingly clear that there is a lot of gene swapping or sharing going on between species (or more technically, horizontal gene transfer.)"

        Yes, that is what I was going to bring up. And don't biochemists now know that the first life on Earth evolved solely through horizontal gene transfer? Whenever trying to conceptualize the evolution of early life I think of an interconnected web of single-cell colonies gradually transforming via sharing genetic mutations. Is this an accurate conception of early life?

        • Emily

          I am not sure there is any consensus (there may be, and I just don't know) but it does seem as you suggest, that any thoughts of a very early "family tree" seem to be rapidly dissipating towards more communal sharing of genetic material.

  • Roy Niles

    "Viruses are dumb; computers have processors that might some day surpass our own brains"

    Except that viruses use the same "predictive probability" form of biological intelligence that has allowed all life forms to evolve themselves. And that allowed those forms to eventually invent computers! And until those computers understand how such inferential trial and error intelligence differs from binary machine code computer processing, they won't be able to effectively simulate the independent living process and become independently able to successfully compete with it.

  • witheo

    Is that a tongue in your cheek, or are you just happy to pull my leg?

    Self-aware machines?

    Ed Lake’s brave attempt at a suitably reassuring disclaimer only pours heavy crude on already deeply troubled waters.

    He writes: “I believe the point is that the robot has an abstract model of itself that adjusts to new information.”

    What? An abstract model of itself? Surely not. It has algorithms, precisely written to fetch and process sensory data. That’s it. It has no need for a model of itself. What self? It cannot recognise data as representative of anything other than data. If the data is not in the right format, the square peg will not go in the round hole. Basta.

    Then: “Philosophers commonly distinguish between self-awareness and sentience, sentience being very hard to explain and self-awareness being quite straightforward (unless you insist that sentience is a necessary condition for it).”

    Philosophers? Names? Phone numbers? Who can distinguish between something that is very hard to explain and something else that isn’t? Well, fancy. Who wouldda thunk?

    Look, as if it were not insulting enough, to most time-rich, cash-poor denizens of the Internet, to describe a $2,500 3D printer for making “a dolls’ house out of spray-cheese” as “low-cost” requiring “some tech know-how”. But to gush breathlessly that Lipson wants “to evolve something that is life”, is truly beyond a joke.

    “Lipson exudes a youthful kind of curiosity.” Yeah, right. That’s what killed the cat. Remember Schrödinger and his hapless pussy, famously sacrificed on the altar of scientific research? No animal rights them days, to save that wretched cat. Even Einstein was in on it. Once curiosity gets the better of you and you open the box, BIG BANG, the cat’s history.

    That said, Curiosity is doing an absolutely marvellous job on Mars, even as we speak. Without the slightest existential angst, mind you, about being self-aware and all. Of course, you can lead your Rover to water. You can even make it drink. And drill all the rock and roll and take in all the sights. But, guess what. You can’t make it appreciate any of it. And you can’t program it, no matter how sophisticated the circuitry gets, to change its mind, just for the hell of it. A robot simply will not get the joke. Isn’t that why it’s there? On a red, dead, planet far, far away? Never wanting donuts. Never gets homesick. It’s got a disk drive for sure. Plays Lady Gaga, for all I know, or care. Just no sex drive.

    What is life? And what is self-awareness?

    My dog is alive. Not because he thinks he’s alive. No. Because I said so. Why? Because I can talk. I’ve got the words to make things real. Before our species invented language, folks didn’t know they were alive, that they need to eat and breathe and drink water. They just did what came naturally, mindlessly hunting and gathering, like we do in the supermarket, intuitively, according to genetically inherited instinct.

    There was never any discussion about what or where will we eat. Having sex was not much fun either. What’s to like, when you have no sweet nothings to whisper in that filthy ear? Before the species learned to talk about it, copulation was as purely functional, furtively risky and about as exciting as all your other essential bodily functions.

    Animals don’t breed for fun, have you noticed? And they sure as hell don’t feel like it when they’re locked up in a zoo. All creatures are notoriously coy when it comes to the sticky business of self-replication. Artificial insemination is the only way to get commercially viable results.

    So, as for robots having “genderless sex”, you gotta be kidding me. Gender has nothing to do with sex. God knows, sex has nothing to do with having sex. Having sex is a human construct. A perfectly simple physical activity, for the perfectly utilitarian purpose of procreation, has evolved into an immensely complicated, totally language based, social activity. Isn’t that fantastic? For which, incidentally, your particular sexual orientation is ridiculously superfluous. Whereas gender is a purely grammatical category, feminine or masculine. So, when the French speak of “la table”, or the Germans refer to “die Tür”, they don’t actually mean the table or door is sexually oriented as female.

    That is why God ended up being masculine, not male. Ships are traditionally feminine, not female. Which simple fact should have gone quite a long way to demystifying all the feminist hysteria about masculine “chairmen” and landing “man” on the Moon, if only it hadn’t been for all the patriarchal hysteria about power and division of labour.

    My dog does not know he’s alive. He does not even know he’s a dog. He does not know he has feet, eyes, a mouth. He doesn’t know shit. Though he works pretty good in that department. The bird in the tree is surrounded by moving branches and foliage, unperturbed. But, as soon as I approach, it flies away. Why? It doesn’t know. Its brain reacts to unfamiliar input and the response is instantaneous, a natural reflex. It does not need to be self-aware. It doesn’t need that kind of anxiety.

    The bird is not in the least interested in staying alive. Because its existence does not in any sense rely on cognitive language. It simply reacts to stimuli. My dog is not happy to see me when I come home. He has no idea what happiness is. Of all the luck. He’s just going through the motions common to his species.

    We talk a lot, my dog and I. He understands lots of words for their sound, volume and context. But it’s all habitual routine with him. I’m the leader of the pack. I’ve returned from the hunt. Wag that tail. Open mouth, tongue hanging out. No eye contact. Demonstrate abject submission. Lest there be nothing to eat. He didn’t carefully memorise this typically canine, exuberant choreography. He’s a dog, it goes with the scenery. He’s got no choice.

    Robots do not replicate life. What is replicated is a verisimilitude of highly sophisticated, clumsy actions we have come to associate, in the infinitely complex narratives we have been constructing ever since we learned to say “Mommy”, with “living things”. I only know I’m alive because I’ve got a story to tell, and then some. Wake me when you have a robot that can pick just the right moment to tell a fantastic joke … only to forget the punch line. Now that would be funny.
