Endless fun

The question is not whether we can upload our brains onto a computer, but what will become of us when we do

3,500 words
'You could walk around a simulated city street, feel a cool breeze, enjoy yourself.' Photo courtesy IGN/Playstation Home

Michael Graziano is a neuroscientist, novelist and composer. He is a professor of neuroscience at Princeton University. His latest book is Consciousness and the Social Brain.

In the late 1700s, machinists started making music boxes: intricate little mechanisms that could play harmonies and melodies by themselves. Some incorporated bells, drums, organs, even violins, all coordinated by a rotating cylinder. The more ambitious examples were Lilliputian orchestras, such as the Panharmonicon, invented in Vienna in 1805, or the mass-produced Orchestrion that came along in Dresden in 1851.

But the technology had limitations. To make a convincing violin sound, one had to create a little simulacrum of a violin — quite an engineering feat. How to replicate a trombone? Or an oboe? The same way, of course. The artisans assumed that an entire instrument had to be copied in order to capture its distinctive tone. The metal, the wood, the reed, the shape, the exact resonance, all of it had to be mimicked. How else were you going to create an orchestral sound? The task was discouragingly difficult.

Then, in 1877, the American inventor Thomas Edison introduced the first phonograph, and the history of recorded music changed. It turns out that, in order to preserve and recreate the sound of an instrument, you don’t need to know everything about it, its materials or its physical structure. You don’t need a miniature orchestra in a cabinet. All you need is to focus on the one essential part of it. Record the sound waves, turn them into data, and give them immortality.

Imagine a future in which your mind never dies. When your body begins to fail, a machine scans your brain in enough detail to capture its unique wiring. A computer system uses that data to simulate your brain. It won’t need to replicate every last detail. Like the phonograph, it will strip away the irrelevant physical structures, leaving only the essence of the patterns. And then there is a second you, with your memories, your emotions, your way of thinking and making decisions, translated onto computer hardware as easily as we copy a text file these days.

That second version of you could live in a simulated world and hardly know the difference. You could walk around a simulated city street, feel a cool breeze, eat at a café, talk to other simulated people, play games, watch movies, enjoy yourself. Pain and disease would be programmed out of existence. If you’re still interested in the world outside your simulated playground, you could Skype yourself into board meetings or family Christmas dinners.

This vision of a virtual-reality afterlife, sometimes called ‘uploading’, entered the popular imagination via the short story ‘The Tunnel Under the World’ (1955) by the American science-fiction writer Frederik Pohl, though it also got a big boost from the movie Tron (1982). Then The Matrix (1999) introduced the mainstream public to the idea of a simulated reality, albeit one into which real brains were jacked. More recently, these ideas have caught on outside fiction. The Russian multimillionaire Dmitry Itskov made the news by proposing to transfer his mind into a robot, thereby achieving immortality. Only a few months ago, the British physicist Stephen Hawking speculated that a computer-simulated afterlife might become technologically feasible.

It is tempting to dismiss these ideas as just another science-fiction trope, a nerd fantasy. But something about them won’t leave me alone. I am a neuroscientist. I study the brain. For nearly 30 years, I’ve studied how sensory information is taken in and processed, how movements are controlled and, lately, how networks of neurons might compute the spooky property of awareness. I find myself asking, given what we know about the brain, whether we really could upload someone’s mind to a computer. And my best guess is: yes, almost certainly. That raises a host of further questions, not least: what will this technology do to us psychologically and culturally? Here, the answer seems just as emphatic, if necessarily murky in the details.

It will utterly transform humanity, probably in ways that are more disturbing than helpful. It will change us far more than the internet did, though perhaps in a similar direction. Even if the chances of all this coming to pass were slim, the implications are so dramatic that it would be wise to think them through seriously. But I’m not sure the chances are slim. In fact, the more I think about this possible future, the more it seems inevitable.

If you did want to capture the music of the mind, where would you start? A lot of biological machinery goes into a human brain. A hundred billion neurons are connected in complicated patterns, each neurone constantly taking in and sending signals. The signals are the result of ions leaking in and out of cell membranes, their flow regulated by tiny protein pores and pumps. Each connection between neurons, each synapse, is itself a bewildering mechanism of proteins in constant flux.

It is a daunting task just to make a plausible simulation of a single neurone, though this has already been done to an approximation. Simulating a whole network of interacting neurons, each one with truly realistic electrical and chemical properties, is beyond current technology. Then there are the complicating factors. Blood vessels react in subtle ways, allowing oxygen to be distributed more to this or that part of the brain as needed. There are also the glia, tiny cells that vastly outnumber neurons. Glia help neurons function in ways that are largely not understood: take them away and none of the synapses or signals work properly. Nobody, as far as I know, has tried a computer simulation of neurons, glia, and blood flow. But perhaps they wouldn’t have to. Remember Edison’s breakthrough with the phonograph: to faithfully replicate a sound, it turns out you don’t also have to replicate the instrument that originally produced it.
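
To make ‘a plausible simulation of a single neurone’ concrete, here is a minimal sketch of a leaky integrate-and-fire model, one of the simplest standard abstractions. It ignores ion channels, glia and blood flow entirely, and every constant below is illustrative rather than a measurement of any real cell.

```python
# Leaky integrate-and-fire neuron: a drastically simplified model.
# All constants (time constant, thresholds, drive) are illustrative.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Return the membrane voltage trace and spike times (in ms)."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Voltage leaks back toward rest while being driven by the input.
        dv = (-(v - v_rest) + i_in) / tau
        v += dv * dt
        if v >= v_threshold:          # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset               # ...then reset the membrane
        voltages.append(v)
    return voltages, spikes

# Constant drive strong enough to make the model cell fire repeatedly.
volts, spike_times = simulate_lif([20.0] * 1000)
```

Even this crude leak-integrate-threshold-reset loop reproduces the basic rhythm of a spiking cell, which is precisely why it is ‘an approximation’: everything the paragraph lists, from protein pumps to glia, has been abstracted away.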

So what is the right level of detail to copy if you want to capture a person’s mind? Of all the biological complexity, what patterns in the brain must be reproduced to capture the information, the computation, and the consciousness? One of the most common suggestions is that the pattern of connectivity among neurons contains the essence of the machine. If you could measure how each neurone connects to its neighbours, you’d have all the data you need to re-create that mind. An entire field of study has grown up around neural network models, computer simulations of drastically simplified neurons and synapses. These models leave out the details of glia, blood flow, membranes, proteins, ions and so on. They only consider how each neurone is connected to the others. They are wiring diagrams.
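
The ‘wiring diagram’ idea can be stated very directly: at its barest, a connectome is a weighted directed graph, and a simplified neural network model is just repeated propagation of activity through that graph. The four-neuron wiring below is invented purely for illustration.

```python
# A connectome, at its simplest, is a weighted directed graph:
# rows are source neurons, columns are targets, and each entry is a
# connection strength. This tiny wiring diagram is entirely made up.

import numpy as np

# Hypothetical wiring: neuron 0 excites 1 and 2, neuron 2 inhibits 3, etc.
connectome = np.array([
    [0.0, 0.8, 0.5,  0.0],
    [0.0, 0.0, 0.0,  0.9],
    [0.0, 0.0, 0.0, -0.7],
    [0.3, 0.0, 0.0,  0.0],
])

def step(activity, weights):
    """One synchronous update: each neuron sums its weighted inputs."""
    return np.tanh(weights.T @ activity)

state = np.array([1.0, 0.0, 0.0, 0.0])   # start with only neuron 0 active
for _ in range(3):
    state = step(state, connectome)
```

The point of the sketch is what it leaves out: no membranes, proteins, ions, glia or blood flow appear anywhere, only the pattern of connectivity and a rule for passing activity along it.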

Simple computer models of neurons, hooked together by simple synapses, are capable of enormous complexity. Such network models have been around for decades, and they differ in interesting ways from standard computer programs. For one thing, they are able to learn, as neurons subtly adjust their connections to each other. They can solve problems that are difficult for traditional programs, and are particularly good at taking noisy input and compensating for the noise. Give a neural net a fuzzy, spotty photograph, and it might still be able to categorise the object depicted, filling in the gaps and blips in the image — something called pattern completion.
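
Pattern completion of the kind described can be demonstrated with a classic Hopfield network, a textbook toy model rather than anything real cortex does literally: store a binary pattern via a Hebbian rule, then hand the network a corrupted copy and let it settle back to the original.

```python
# Pattern completion with a tiny Hopfield network: store one binary
# (+1/-1) pattern, then recover it from a corrupted copy.

import numpy as np

def train_hopfield(patterns):
    """Build Hebbian weights from a list of +1/-1 patterns."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        w += np.outer(p, p)           # Hebb: co-active units wire together
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w / len(patterns)

def recall(w, state, steps=10):
    """Synchronous updates until the noisy state settles on a stored one."""
    state = np.asarray(state, dtype=float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1.0       # break ties deterministically
    return state

stored = [1, -1, 1, 1, -1, -1, 1, -1]
w = train_hopfield([stored])
noisy = [1, 1, 1, 1, -1, -1, 1, -1]   # one bit flipped
recovered = recall(w, noisy)
```

With one stored pattern and a single flipped bit, the network snaps back to the stored pattern in one update, filling in the ‘gaps and blips’ exactly as the paragraph describes.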

Despite these remarkably human-like capacities, neural network models are not yet the answer to simulating a brain. Nobody knows how to build one at an appropriate scale. Some notable attempts are being made, such as the Blue Brain project and its successor, the EU-funded Human Brain Project, both run by the Swiss Federal Institute of Technology in Lausanne. But even if computers were powerful enough to simulate 100 billion neurons — and computer technology is pretty close to that capability — the real problem is that nobody knows how to wire up such a large artificial network.

In some ways, the scientific problem of understanding the human brain is similar to the problem of human genetics. To understand the human genome properly, an engineer might want to start with the basic building blocks of DNA and construct an animal, one base pair at a time, until she has created something human-like. But given the massive complexity of the human genome — more than 3 billion base pairs — that approach would be prohibitively difficult at the present time. Another approach is to read the genome that we already have in real people. It is a lot easier to copy something complicated than to re-engineer it from scratch. The Human Genome Project of the 1990s accomplished that, and even though nobody really understands the genome very well, at least we have a lot of copies of it on file to study.

The same strategy might be useful on the human brain. Instead of trying to wire up an artificial brain from first principles, or training a neural network over some absurdly long period until it becomes human-like, why not copy the wiring already present in a real brain? In 2005, two scientists, Olaf Sporns, professor of brain sciences at Indiana University, and Patric Hagmann, neuroscientist at the University of Lausanne, independently coined the term ‘connectome’ to refer to a map or wiring diagram of every neuronal connection in a brain. By analogy to the human genome, which contains all the information necessary to grow a human being, the human connectome in theory contains all the information necessary to wire up a functioning human brain. If the basic premise of neural network modelling is correct, then the essence of a human mind is contained in its pattern of connectivity. Your connectome, simulated in a computer, would recreate your conscious mind.

It seems a no-brainer (excuse the pun) that we will be able to scan, map, and store the data on every neuronal connection within a person’s head

Could we ever map a complete human connectome? Well, scientists have done it for a roundworm. They’ve done it for small parts of a mouse brain. A very rough, large-scale map of connectivity in the human brain is already available, though nothing like a true map of every idiosyncratic neurone and synapse in a particular person’s head. The National Institutes of Health in the US is currently funding the Human Connectome Project, an effort to map a human brain in as much detail as possible. I admit to a certain optimism toward the project. The technology for brain scanning improves all the time. Right now, magnetic resonance imaging (MRI) is at the forefront. High-resolution scans of volunteers are revealing the connectivity of the human brain in more detail than anyone ever thought possible. Other, even better technologies will be invented. It seems a no-brainer (excuse the pun) that we will be able to scan, map, and store the data on every neuronal connection within a person’s head. It is only a matter of time, and a timescale of five to 10 decades seems about right.

Of course, nobody knows if the connectome really does contain all the essential information about the mind. Some of it might be encoded in other ways. Hormones can diffuse through the brain. Signals can combine and interact through other means besides synaptic connections. Maybe certain other aspects of the brain need to be scanned and copied to make a high-quality simulation. Just as the music recording industry took a century of tinkering to achieve the impressive standards of the present day, the mind-recording industry will presumably require a long process of refinement.

That won’t be soon enough for some of us. One of the basic facts about people is that they don’t like to die. They don’t like their loved ones or their pets to die. Some of them already pay enormous sums to freeze themselves, or even (somewhat gruesomely) to have their corpses decapitated and their heads frozen on the off-chance that a future technology will successfully revive them. These kinds of people will certainly pay for a spot in a virtual afterlife. And as the technology advances and the public starts to see the possibilities, the incentives will increase.

One might say (at risk of being crass) that the afterlife is a natural outgrowth of the entertainment industry. Think of the fun to be had as a simulated you in a simulated environment. You could go on a safari through Middle Earth. You could live in Hogwarts, where wands and incantations actually do produce magical results. You could live in a photogenic, outdoor, rolling country, a simulation of the African plains, with or without the tsetse flies as you wish. You could live on a simulation of Mars. You could move easily from one entertainment to the next. You could keep in touch with your living friends through all the usual social media.

I have heard people say that the technology will never catch on. People won’t be tempted because a duplicate of you, no matter how realistic, is still not you. But I doubt that such existential concerns will have much of an impact once the technology arrives. You already wake up every day as a marvellous copy of a previous you, and nobody has paralysing metaphysical concerns about that. If you die and are replaced by a really good computer simulation, it’ll just seem to you like you entered a scanner and came out somewhere else. From the point of view of continuity, you’ll be missing some memories. If you had your annual brain-backup, say, eight months earlier, you’ll wake up missing those eight months. But you will still feel like you, and your friends and family can fill you in on what you missed. Some groups might opt out — the Amish of information technology — but the mainstream will presumably flock to the new thing.

And then what? Well, such a technology would change the definition of what it means to be an individual and what it means to be alive. For starters, it seems inevitable that we will tend to treat human life and death much more casually. People will be more willing to put themselves and others in danger. Perhaps they will view the sanctity of life in the same contemptuous way that the modern e-reader crowd views old fogeys who talk about the sanctity of a cloth-bound, hardcover book. Then again, how will we view the sanctity of digital life? Will simulated people, living in an artificial world, have the same human rights as the rest of us? Would it be a crime to pull the plug on a simulated person? Is it ethical to experiment on simulated consciousness? Can a scientist take a try at reproducing Jim, make a bad copy, casually delete the hapless first iteration, and then try again until he gets a satisfactory version? This is just the tip of a nasty philosophical iceberg we seem to be sailing towards.

In many religions, a happy afterlife is a reward. In an artificial one, due to inevitable constraints on information processing, spots are likely to be competitive. Who decides who gets in? Do the rich get served first? Is it merit-based? Can the promise of resurrection be dangled as a bribe to control and coerce people? Will it be withheld as a punishment? Will a special torture version of the afterlife be constructed for severe punishment? Imagine how controlling a religion would become if it could preach about an actual, objectively provable heaven and hell.

Then there are the issues that will arise if people deliberately run multiple copies of themselves at the same time, one in the real world and others in simulations. The nature of individuality, and individual responsibility, becomes rather fuzzy when you can literally meet yourself coming the other way. What, for instance, is the social expectation for married couples in a simulated afterlife? Do you stay together? Do some versions of you stay together and other versions separate?

If a brain has been replaced by a few billion lines of code, we might understand how to edit any destructive emotions right out of it

Then again, divorce might seem a little melodramatic if irreconcilable differences become a thing of the past. If your brain has been replaced by a few billion lines of code, perhaps eventually we will understand how to edit any destructive emotions right out of it. Or perhaps we should imagine an emotional system that is standard-issue, tuned and mainstreamed, such that the rest of your simulated mind can be grafted onto it. You lose the battle-scarred, broken emotional wiring you had as a biological agent and get a box-fresh set instead. This is not entirely far-fetched; indeed, it might make sense on economic rather than therapeutic grounds. The brain is roughly divisible into a cortex and a brainstem. Attaching a standard-issue brainstem to a person’s individualised, simulated cortex might turn out to be the most cost-effective way to get them up and running.

So much for the self. What about the world? Will the simulated environment necessarily mimic physical reality? That seems the obvious way to start out, after all. Create a city. Create a blue sky, a pavement, the smell of food. Sooner or later, though, people will realise that a simulation can offer experiences that would be impossible in the real world. The electronic age changed music, not merely mimicking physical instruments but offering new potentials in sound. In the same way, a digital world could go to some unexpected places.

To give just one disorientating example, it might include any number of dimensions in space and time. The real world looks to us to have three spatial dimensions and one temporal one, but, as mathematicians and physicists know, more are possible. It’s already possible to programme a video game in which players move through a maze of four spatial dimensions. It turns out that, with a little practice, you can gain a fair degree of intuition for the four-dimensional regime (I published a study on this in the Journal of Experimental Psychology in 2008). To a simulated mind in a simulated world, the confines of physical reality would become irrelevant. If you don’t have a body any longer, why pretend?

All of the changes described above, as exotic as they are and disturbing as some of them might seem, are in a sense minor. They are about individual minds and individual experiences. If uploading were only a matter of exotic entertainment, literalising people’s psychedelic fantasies, then it would be of limited significance. If simulated minds can be run in a simulated world, then the most transformative change, the deepest shift in human experience, would be the loss of individuality itself — the integration of knowledge into a single intelligence, smarter and more capable than anything that could exist in the natural world.

You wake up in a simulated welcome hall in some type of simulated body with standard-issue simulated clothes. What do you do? Maybe you take a walk and look around. Maybe you try the food. Maybe you play some tennis. Maybe go watch a movie. But sooner or later, most people will want to reach for a cell phone. Send a tweet from paradise. Text a friend. Get on Facebook. Connect through social media. But here is the quirk of uploaded minds: the rules of social media are transformed.

Real life, our life, will shrink in importance until it becomes a kind of larval phase

In the real world, two people can share experiences and thoughts. But lacking a USB port in our heads, we can’t directly merge our minds. In a simulated world, that barrier falls. A simple app, and two people will be able to join thoughts directly with each other. Why not? It’s a logical extension. We humans are hyper-social. We love to network. We already live in a half-virtual world of minds linked to minds. In an artificial afterlife, given a few centuries and a few tweaks to the technology, what is to stop people from merging into überpeople who are combinations of wisdom, experience, and memory beyond anything possible in biology? Two minds, three minds, 10, pretty soon everyone is linked mind-to-mind. The concept of separate identity is lost. The need for simulated bodies walking in a simulated world is lost. The need for simulated food and simulated landscapes and simulated voices disappears. Instead, a single platform of thought, knowledge, and constant realisation emerges. What starts out as an artificial way to preserve minds after death gradually takes on an emphasis of its own. Real life, our life, shrinks in importance until it becomes a kind of larval phase. Whatever quirky experiences you might have had during your biological existence, they would be valuable only insofar as they could be added to the longer-lived and much more sophisticated machine.

I am not talking about utopia. To me, this prospect is three parts intriguing and seven parts horrifying. I am genuinely glad I won’t be around. This will be a new phase of human existence that is just as messy and difficult as any other phase has been, one as alien to us now as the internet age would have been to a Roman citizen 2,000 years ago; as alien as Roman society would have been to a Natufian hunter-gatherer 10,000 years before that. Such is progress. We always manage to live more-or-less comfortably in a world that would have frightened and offended the previous generations.

Comments

  • Robert McMahon

    I understand the logic of this sort of thing, but the brain is only part of the human organism and doesn't account for everything that goes on in the body that creates the mind. There's something Cartesian in the thinking here still, as it treats the mind as trapped in the brain rather than as a state of the whole body, with the brain being a center for interpretation.

    Also, there's something about this that seems to be, in my opinion, creating golems instead of creating immortal humans. There is a book that came out recently, Incomplete Nature, that goes into great detail about the possible nature of the mind and consciousness. Bodies are essentially collections of energy, some of which expresses itself as matter, and you really can't simulate the experience of personhood by trying to reorder those energies in another machine to match just one part of the experience.

    To me, it sounds like a kind of hell. I echo your wary sentiment at the end, definitely.

    • http://www.livinginthehereandnow.co.za/ beachcomber

      Yep ... the gut and the bacteria which co-habit there have recently been found to influence our physiology and, in consequence, to an extent our brains.

    • aelena74

      A vastly simpler but similar scenario: if you back up all that's sitting on your computer now and put all the data and software onto a new machine you just bought, is it or is it not your computer? Certain things may vary, such as a better or worse display, RAM amount and/or CPU speed or core count, but those are secondary. You'd still have all your data and software there, so you can continue working all the same, even if the keyboard layout is a bit different.

    • G

      I came to the same conclusion months ago: "creating golems." I coined the term "Neogolemism" for this type of emerging computer-religion.

      • Robert McMahon

        Haha, that's perfect.

        • G

          I've been "on the case" of this stuff for a while now, as a telecoms engineer with a strong interest in how technology and cultural variables affect each other. In the early internet era, the tech was largely liberating. Today it's all about "prediction and control" of human behaviour down to the level of the individual person: very bad. What I see emerging is a meme-set that consists of:

          a) Denial of free will, often hitch-hiking on rationalist beliefs, even though it is not logically or empirically necessary that a materialist theory of mind be deterministic (see also Penrose & Hameroff for a theory of neural computation that entails free will; BTW they are strongly disliked by the Neogolemists).

          b) Rise of beliefs such as "Upload" and "Singularitarianism" (which I call "Singularitology" by way of resemblance to Scientology), that collectively are "computer-god religions" attempting to replace traditional "sky-god religions" particularly among rationalists, agnostics, and atheists.

          c) Leaderships of certain large private-sector entities (Facebook, Google) having these beliefs and seeking to embody them in their companies' public-facing services and infrastructures, thereby affecting the culture at-large accordingly.

          d) In conjunction with or rationalized by (a), moral/ethical systems that do not prohibit or limit manipulation of other persons for one's own gain.

          And a few other bits & pieces related to the above.

          As a generalization, we are in a period of history where religious and philosophical systems of thought are radically changing, new ones are emerging, and some of the new ones will come to form the foundation of mainstream beliefs generally. By "religion" I also include non-theistic belief systems in which non-theological elements play the same roles as traditional theological elements, e.g. Upload as hereafter, Computer as deity, Market as morality.

          Thus it's incumbent upon all of us who value our personhood, free will, freedom of thought, and cultures that embody these values, to assert our beliefs & principles vigorously, and engage in ferocious public debate over all of these issues. I'll be publishing something lengthy toward this end, some time this year.

    • http://www.appbattleground.com/ Shane K.

      Great post. It also appears to be an artifact of a western way of thinking. The notion of creating a virtual afterlife for oneself is a form of clinging and an inability to let go of one's ego.

      The funny thing about all this is that not even a migration of one's mind into a computer could stave off the inevitability of death, for even machines are destructible.

      Whether the physical universe experiences a heat death, or contracts upon itself, or suffers some other fate, machines will be unable to exist forever. All things must come to an end...

  • B M

    What if we're already living in a simulation? Ponder that.

    • TypicalMoron

      We are, literally.

      Our brain is simulating external stimuli and we interpret it to the best of our ability.

      Some animals can see some infrared spectrum while we can't. Their simulation software is different than ours, is all.

  • TrustbutVerify

    It would seem that this may develop in concert with quantum computers and 3D nanoprinting. If the 3D printer can replicate the neural pathways in a suitable matrix at a nano-scale that will support the operations of a quantum computer, it should be possible to replicate and host the electronic signature of the brain in this duplicate. Whether the brain is then tethered to a Matrix-type environment or a powered automaton is then a matter of choice - experiencing new electronic worlds, continuing to experience our world, or dipping in and out of both.

    There does seem to be a problem with consciousness, though, as it does not seem that the actual essence of the person would be preserved - you wouldn't close your eyes and feel like you woke up in this new environment. You would close your eyes and a duplicate of you would open its electronic eyes - but "you" would not feel it, your unique consciousness could not be transferred, I think, by simply replicating the environment and the coding. It might seem to be "you" to others as an outward expression of your familiar thoughts, actions, memory, feelings - but it would just be a copy and not an extension of your conscious mind before the transfer. It would be a very faithful, detailed reproduction, but there would be no continuity of mind. It would be an interactive picture or video for your loved ones after your death, but it wouldn't continue your life. I think only a direct, physical, transfer of the same brain, or portion of the brain, would be necessary to get that continuity. Perhaps some way of meshing the two - organic with inorganic/electronic?

    • TypicalMoron

      This.

      No one will have their mind "uploaded" to a computer.

      They may have an accurate representation of how their particular brain interprets data, and may be indistinguishable from you to other people - but it won't be "you".

  • Julian

    Sorry, but that still won't be immortality. The original and the copy will be two different conscious entities. The intuition is very simple to grasp: if technology advances enough that we can copy a brain, surely it won't be necessary to destroy the original in the process. The original (person) and the copy will therefore be able to co-exist as separate entities, experiencing the world in different ways, and therefore having separate threads of awareness. That also applies for the copy and the copy of the copy, etc.

    Of course, that's not to say that people won't do this, if the technology progresses that far. It's just that they won't get quite as much as they would hope.

    • Robert McMahon

      Stuff like this makes me wonder what Kurzweil is up to at Google, heh.

      • G

        He's providing thrills and comforts to Sergey Brin, who also believes in this nonsense (who is using who, if it's mutual?). Folks who work at Google report that one of the central cultural memes there, is that Google itself, the networked machine and its software, can "think" better than humans can. This really is the seed of a new religion: one in which the Computer is God, Upload is the hereafter, free will is an illusion, and moral principles such as compassion are as obsolete as the slogan "don't be evil." Its core premise of minds transferred from biological brains to deterministic computing platforms, is false; but the money to be made at hosting simulations that will woo their surviving loved ones to "ditch their meat" and join the Borg, will be real and have a potentially widespread pernicious effect.

    • Bruce Wayne

      Exactly right.

    • VoiceofReasonableness

      Basically correct, except for one possible out. And that is if a copy is made that is a "perfect" copy down to the molecular level, and (and this is required) in order to do so the original *must* absolutely be completely destroyed. It may be, in the world of quantum weirdness, that the combination of these two factors mandates that the copy is now the equivalent of the original, in the sense that it becomes the same entity.
      Or this is just BS.

      • TypicalMoron

        Sure, I can posit an exact duplicate brain software program, but it won't be "you". It will be a copy.

        I'm not destroying myself and my consciousness so that a copy can live in a computer. It gets all of the perks and I get to be dead.

    • George Palickar

      It is not at all certain that a copy of a mind can be made without destroying the original. It all has to do with levels of informational implementation. You can copy a USB drive because the hardware exists at a lower level of implementation than the data in the drive. It is "beneath" the data, in a sense. If a mind is actually implemented at a deep quantum-mechanical level, there may conceivably be no way to 'read' the data without disturbing the original state, because the 'reading' machine would not be at a lower level than the data it needs to read. A similar paradox exists in the case of a simulated program running within an emulator. Given a perfect emulator, it is impossible for the simulated program to 'know' whether the platform it is running on is physical or emulated.

      • aelena74

        Proposed techniques for non-destructive brain scans:

        http://www.ibiblio.org/jstrout/uploading/nondestructive.html

        A very different question, then, is the degree of feasibility of each of them.

        • G

          That still doesn't get you immortality any more than perfect nondestructive cloning. The copy may or may not live, but you still die.

          • aelena74

            Yes, I was just mentioning the techniques. It's a very different debate to decide what is "you" (as in "you still die"). IMHO, if all your data is backed up into a new body, "you" are still "you", whatever "one" is

          • G

            What is "you" is the original, not the copy. "Backing up your data" does not alter the outcome. Either there's some kind of hereafter or there's nothing, but there's no immortality via "copying." I am at a complete loss to understand why people don't get this.

          • aelena74

            Not so sure the concept of "you" or "oneness" is that clear or obvious. We get into an uneasy realm of semantics here. You buy a new computer and put on it the full backup of all the software and data you had on your previous computer. So now the new computer is essentially "your computer" in the same way the previous one was (same data, same folders, same programs). Is the computer "immortal"? No, but it is essentially the same computer.

          • G

            This is the central problem: Minds and memory are N-O-T computers and data on magnetic media, any more than they are telephone exchanges, telegraphs, or steam engines, to use the favourite technological analogies of previous eras.

            Really: please do disengage from the computer analogy, it is, as physicists say, "not even wrong;" it serves the present purpose about as well as pouring lemonade into your automobile's petrol tank.

            For one thing, the whole idea of "making a snapshot of a brain" is not even possible within any known or anticipated laws of physics and chemistry. The very act of attempting to do so will destroy the information being scanned, and alter all of the other connected information that's present. See also my other comments for more about this, keyword search "snapshot" on this page.

          • E

            This. Agreed. The mind is its own entity and is simply irreducible.

          • Len Arends

            Irreducible. Really? So the brain doesn't create the mind, it's just an antenna? Where is the mind transmitting from, then?

            Flatworm to fish to lizard to monkey to human: minds getting more complex. Run time backwards, and clearly the mind IS reducible.

          • Len Arends

            Well, it sounds like you've got this all figured out, G. You are certain that consciousness is quantum mechanical and resides in the uncollapsed state. Wasn't aware this is settled science. Hope my sarcasm is obvious.

          • ApathyNihilism

            No one thinks it's the same computer.

          • ApathyNihilism

            If consciousness is a box of chocolates, then theoretically it can be eaten.

          • Len Arends

            We die every second, reborn as a similar being with slightly greater experience. Continuity is all that is necessary to preserve the sense of self.

    • Kevin Middleton

      Though, might hope be relative to the perceiving entity? An underdeveloped clone's consciousness could adapt mentally to its conscious experience in much the way that we adapt from birth to our own. That is, if this gave rise to a mental framework nonparallel to ours, the clone could experience within it a different conception of "hope" that we would be in no position to morally evaluate.

    • G

      Exactly correct. It's the same case as with cloning: the copy is not the original; your clone lives on, but when you die, you are still dead.

      The idea that you can transplant your mind from your brain to a computer necessarily requires separating your mind from your brain in the process, with forward continuity of experience: as with going to sleep, knowing that you will wake up the next day. But if you can detach a mind from a brain, then what you have is a _soul_, and there is no reason to confine it to a silicon prosthesis when there are far more interesting vistas to explore.

      If you can reincarnate into a computer, you can reincarnate into a cat. But if you can't detach a mind from a brain, dead is dead, so say ByeBye.

  • HenryC

    A virtual you would not be you. You would still die.

    • Bruce Wayne

      I think this will be like cloning. There will be two of you. The new electronic-you would be the same as the biological-you at the moment your mind is uploaded, but the two of you are independent, and over time you will diverge. Biological-you will have her own experiences and electronic-you will have hers. You won't be merged, and you could have a conversation with yourself. You won't perceive anything through the other. I imagine the electronic-you "comes to herself"--becomes aware that she exists--and will immediately remember what happened before. The biological-you's experience will be minimal, like having an MRI performed. Nothing is taken from you or added--just a scan was made to take a picture. You have simply created a second you that exists in the electronic environment. So after a while, biological-you will die, but electronic-you continues on.

      • SaintMarx

        Why assume the clone is conscious at all?

        • Casper

          Never mind the clone. Given the grossly simplistic ways most people respond to the world, I am not at all sure that more than a few people are conscious.

      • http://www.livinginthehereandnow.co.za/ beachcomber

        See http://en.wikipedia.org/wiki/Ship_of_Theseus : the ship of Theseus, also known as Theseus's paradox, raises the question of whether an object which has had all its components replaced remains fundamentally the same object.

    • hypnosifl

      The "you" of today is made out of an almost completely different set of atoms than the "you" of a few years ago, thanks to your cells constantly rebuilding themselves out of nutrient molecules. So if you consider the "you" of a few years ago to be the same as the "you" of today, it seems like the only continuity is continuity of pattern or information, and that continuity would still exist with a virtual brain and the original biological brain.

      • smoochie

        Horse crap. Your sense of self is not something you can just wish into a computer. We're talking about a copy, not something that replaces atoms gradually over time. Your copy might feel like you and be indistinguishable in many ways, but it wouldn't BE you and you'd still be dead. Think about it: what if you made a copy of yourself and you WEREN'T dead? Would you experience two lives simultaneously? NO.

        This isn't an afterlife; it's a sophisticated death mask.

        • hypnosifl

            Why should it matter whether my structure is "gradually" replaced with the same structure made of new materials over an extended time, or all at once? If we had some sort of fantastic device that could replace every atom in my body within the span of a few nanoseconds, would that still be gradual enough to preserve identity, in your view? And do you believe there is some sort of objective metaphysical truth about which forms of replacement preserve identity (or continuity of consciousness) and which don't, or do you think it's just a matter of how we humans choose to *define* the notion of "same person" or "same consciousness"? If you think it's a matter of objective truth rather than an aesthetic preference about word-definitions, then presumably you cannot be a strict materialist, since notions like personal identity and consciousness don't figure into any of the fundamental laws of physics.

          • smoochie

            Look, it's elementary. When you create an identical copy of yourself (let's leave out the virtual stuff just to simplify), there's no question who 'you' are. You are the guy who created the copy. The copy's over there, waving at you. It's not a very difficult thing to determine who 'you' are. You're the guy looking at the copy. You'll know that, because you're you, looking through your eyeballs, talking with your mouth. If you try to move the other guy's mouth, it won't work. I don't really know how to make this simpler. I may not fully understand the physics of personal identity, but that doesn't mean it isn't a real thing. Your clone may be identical; he may even think he's you, but you would know better. And when you die, you still die. The notion that you would then magically possess your avatar and go on experiencing things via him--just no. That's magic, and it isn't real.

          • mikeb666

            Such valid points in this discussion. "Consciousness": what is it really? No one knows. What is the nature of consciousness? It's a mystery. Replicating a brain down to the molecular level doesn't mean anything at this point; it's wishful thinking. When we create a computer in the very near future that can emulate the computing power of one human brain, will it become conscious? What about the latest studies about the heart as an organ of consciousness informing the brain emotionally? Is our consciousness a sum of our whole being: brain, heart, lungs, hair, eyes, spleen, skin, etc.? If there is any credence to that, copying a brain down to the molecular level isn't going to accomplish much. As one commentator pointed out, if you copy a jazz performance and play the same one over and over, it will never improvise beyond that point. It will just be a fancy toy, but not you!

          • hypnosifl

            "What about the latest studies about the heart as an organ of consciousness informing the brain emotionally?"

            Do you have a link on this? There are plenty of cases of people who have received artificial parts, and many other parts of the body can be damaged and/or replaced too without the people reporting any change in their consciousness or displaying noticeable changes in behavior, as always seems to happen with significant brain damage.

          • smoochie

            Yeah, I do wonder--even if we can make a virtual person that gives all outward appearances of thought and emotion, could it truly be said to be feeling anything, or just doing an awesome job of acting like it? I'm not sure that's an answerable question.

          • hypnosifl

            This is what philosophers know as the issue of "philosophical zombies"--an issue which David Chalmers, who I mentioned above, discusses extensively in his book The Conscious Mind. It's another case where, if you want to believe there's some objective truth about the matter, you have to assume there are truths about consciousness which go beyond the complete set of truths about the physical world, though the answers might be determined by physical truths if there are "psychophysical laws" tying the two realms together. Chalmers argues that if there are such laws, it's likely they would respect a functional invariance principle that says that systems that function and behave the same have the same types of inner experiences. His argument is based on the thought-experiment of gradually replacing the neurons of a person's brain with artificial substitutes that function in the same way (same relation between inputs and outputs), and thinking about what would happen with the "qualia" (subjective experiences like the experience of the color red) during such a gradual replacement. Anyone who's interested can read the paper here: http://consc.net/papers/qualia.html

          • G

            I've read Chalmers in depth and largely agree with him, but this is one point on which I think he's completely wrong.

            First of all, the assumption that we can duplicate the complete functionality of a neuron in silicon-based hardware is "not even wrong." Neurons make extensive use of chemical feedback systems, as well as electrical impulses. And neurons may also be quantum computers (Penrose & Hameroff) in and of themselves, each capable of processing as many bits as it has tubulin proteins. But even if it were possible to shrink an entire multi-megabit quantum computer down to microscopic size, you would still have the problem of neurochemistry to overcome.

            Second, the issue of replacement cycle. A human brain contains 1-2 billion neurons, each with on average 8,000 local connections. If you replace 1 neuron and its 8,000 connections to other neurons every second, it takes 32-64 years to replace the whole brain (see also my other comments about this point).

            Third, the issue of power source to run the whole thing, and cooling, and so on, both of which are presently accomplished by the blood supply to the brain.

            Fourth and conclusively (laws of physics here), for each neuron you replace, you first have to copy both its classical and quantum state information to the replacement, otherwise information is lost. In order to copy, you have to measure; but a) measuring quantum state information collapses its wave function, and b) measuring the state of the classical elements at that scale will inherently disrupt or destroy them as well. (Find a way to count the molecules in a neuron and map their positions in three-space, without destroying the neuron in which they reside, and you'll get a Nobel in medicine; good luck!) This is also why making a "copy" of the contents of your mind isn't possible.

            Chalmers knows more about neurophysiology than most philosophers out there, but none the less, his thought-experiment is only that, and not something that can be done in the real world.

            He refers to himself as a "property dualist" rather than a "substance dualist," but if there are truths about consciousness that don't reduce to biology and chemistry, then they necessarily involve new physics; and if they don't reduce even to a new physics, they necessarily entail some kind of substance dualism.

            Perhaps it's time to bring substance dualism out of the closet. Though, if you can get a mind outside of a brain, there's no reason to confine it to a silicon prosthesis. If you can reincarnate into a computer, you can also reincarnate into a cat; or alternately, go explore the universe without the limits of any kind of material body.

          • G

            Correction: a human brain contains about 85 billion neurons, so a replacement cycle of 1 neuron per second with silicon or other artificial neurons, leads to a total replacement time of about 2,700 years.

            To paraphrase former USA Defence Secretary Rumsfeld, sometimes you just have to think with the brain you have, rather than the brain you wish you had;-)
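            The corrected arithmetic above can be sanity-checked with a few lines. This is only a back-of-envelope sketch using the assumptions stated in the comment (roughly 85 billion neurons, one neuron replaced per second):

```python
# Back-of-envelope check of the neuron-replacement timescale discussed above.
# Assumptions, taken from the comment: ~85 billion neurons, one replaced per second.
NEURONS = 85e9                    # approximate neuron count of a human brain
REPLACEMENTS_PER_SECOND = 1.0     # assumed replacement rate
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = NEURONS / REPLACEMENTS_PER_SECOND / SECONDS_PER_YEAR
print(f"Total replacement time: about {years:,.0f} years")  # just under 2,700 years
```

            At one neuron per second the total comes out just under 2,700 years, consistent with the figure quoted above; the earlier "32-64 years" estimate followed from the mistaken 1-2 billion neuron count.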

          • G

            Here we should differentiate between two categories:

            1) "Automata" that don't have the physical hardware for consciousness: their hardware is completely deterministic, their apparent minds are the outputs of algorithms; they may be excellent simulations but they are not conscious. We can already build rudimentary versions of these, but more sophisticated ones are still only equivalent to the clockwork "automata" of the 18th century, e.g. realistic mechanical ducks that ate, pooped, and quacked, but were not living ducks.

            2) "Artificial minds" that do have hardware that can reasonably be expected to produce consciousness. These are still purely speculative and would necessarily be based on hardware that can duplicate, rather than simulate, the physics of neural computation. In all likelihood these will be combinations of classical and quantum computing hardware, and may require more than one architecture of each. If we included chemical feedback systems in them, including neurochemicals and receptors for same, we could also reasonably infer that they had emotions.

            Consciousness is the cardinal attribute of personhood.

            If you treat X as a person, but X is actually an object, at worst you make a fool of yourself.

            If you treat X as an object, but X is actually a person, you're violating their person-rights, at minimum committing something analogous to cruelty to animals, but perhaps committing assault or murder. If you treat the entire class of X as objects but they're actually persons, you're committing an atrocity.

            Creating a true human-level artificial mind has the same moral implications as having a baby.

            And Kurzweil et al., who envision a future of artificial minds busily doing all the work that humans used to do, are in fact setting the stage for a new form of slavery.

          • G

            Sorry to be the bearer of bad news, but current research is demonstrating that what we used to think of as the computing power of the entire brain is in fact the computing power of a single neuron. That puts "the very near future" quite a bit further off.

            Simulation is not replication. You can build a mechanical clockwork duck that eats, poos, and quacks, but it is not a duck.

            As for copying down to the molecular level, you can't even make the measurements without destroying the information in the neurons. Keyword search "snapshot" for my other comments on this page explaining this point in more detail.

          • hypnosifl

            From an external point of view, there will simply be two individuals that share a near-identical brain pattern (though they begin to diverge as soon as the duplication occurs). If one is a strict materialist, this "objective" point of view is all that's real, there is no additional reality of which version has the "same consciousness" as the single individual before the duplication.

            If we want to discuss the subjective point of view, *both* individuals have exactly the experience you describe. They both remember stepping into the duplication machine, and afterwards they both have the experience of seeing another guy "over there" after the duplication, seeing him with their own eyeballs, talking to him with their own mouth, and being unable to move the other guy's mouth. I guess you would say "yeah, they would both have the same sort of experience, but the guy made out of a new set of atoms just has false memories of a continuous stream of experiences extending back to before the duplication, while the pre-duplication memories of the guy made out of the same set of atoms are true". But that's just a metaphysical view you've taken, it can't be justified by pointing to the direct experience of the guy made out of original atoms, because the direct experience of the guy made out of new atoms is qualitatively identical (seeing a separate guy over there with his own eyes etc.)

            "I may not fully understand the physics of personal identity, but that doesn't mean it isn't a real thing."

            The "physics of personal identity" definitely isn't a real thing, it's a phrase you just made up which I'm sure would have no meaning to a physicist. Modern physics is wholly reductionist, with the behavior of any composite system being determined by the arrangement and interactions of its constituent particles and fields; these interactions are governed by low-level laws, there are no new high-level physical laws (including any dealing with the 'identity' of macroscopic objects made out of huge collections of particles) that are more than just a statistical consequence of the low-level laws operating on large systems. So if there are any objective truths about personal identity, they can't be based on physics as we know it, you'd either have to posit unknown laws of physics which are radically different than known laws, or you could posit that objective truths about identity are determined by metaphysical "laws of consciousness" of some kind. (Positing such metaphysical laws wouldn't necessarily require accepting the "interactive dualist" position that the mind can influence the physical world, you could instead posit that the outward behavior of physical systems is entirely determined by physical laws, and these new laws of consciousness deal only with the relation between configurations of matter and subjective experiences--this is the sort of idea posited by the philosopher David Chalmers, for example.)

            If such metaphysical laws of consciousness exist, I see no reason to rule out the possibility that they allow for "forking" of identity. Given the premise of metaphysical laws governing the stream of consciousness, the notion of forking identities would seem particularly natural in the many-worlds interpretation of quantum mechanics where each of us is splitting into multiple possible versions of ourselves constantly due to quantum effects (and this interpretation is favored by many physicists, since it avoids the awkward notion that the quantum wavefunction for a system collapses each time the system is measured).

          • smoochie

            That's a lot of text, but it still comes down to the fact that we all know who "me" is, and I don't care how identical a copy of my mind is, it still isn't "me". If I'm not experiencing it, it just doesn't count. Whatever consciousness is, it is singular and non-transferable. I've seen nothing here to indicate otherwise. Yes, the copy thinks it's me, as I said, and perhaps it doesn't really matter in the end--to anyone else. But it would matter to me--that is, original me.

          • hypnosifl

            That's a lot of text, but it still comes down to the fact that we all know who "me" is

            No, "we" don't all know that. You claim to know it, but only because you use the question-begging strategy of labeling the version made out of the original atoms as "me", and only considering what things look like from his point of view. You haven't actually given any type of argument for why it would be objectively incorrect for me to say that I am the same "me" as before the duplication, if afterwards I find myself in the duplication chamber made out of a new set of atoms--I still remember experiences from before the duplication, and the "if I'm not experiencing it, it just doesn't count" argument obviously doesn't work, since I am experiencing being the guy made out of new atoms, and am not sharing in the post-duplication experiences of the guy made out of the original atoms. It seems like all you have here is a totally circular argument--"I assume that the only guy who is 'me' after the duplication is the guy made out of the original atoms, so I'll only consider things from his point of view, and I'll use that point of view to prove that he's the only one who is 'me' after the duplication."

            "Common sense" arguments based on gut feelings about what's "obvious" are not a substitute for actual careful reasoning about a subject--many people's "common sense" tells them that humans could not have evolved from fishy ancestors, that their bodies can't just be large collections of atoms, that space itself cannot have curvature, and so forth. If you want to continue the discussion, I'd appreciate it if you would actually address the arguments rather than just dismissing them as a "lot of text" (a whopping 5 paragraphs, less than a page of a typical book) and then restating your original assertions. For example, do you disagree with my argument that modern reductionist physics has nothing to say about the "identity" of macro objects composed of large numbers of particles, and that any objective truth about personal identity and consciousness would require either totally new kinds of physical laws, or extra "metaphysical" laws of some kind?

          • G

            You go into a duplication chamber, and two of you emerge. No external observer can tell which is the "original" and which is the "copy."

            You-A and You-B go into different rooms and each is shown a randomly-selected picture, and writes down the description of the picture they see, and also a description of the picture the person in the other room sees. This test is repeated a number of times.

            Do the descriptions match? That is, does You-A correctly describe every picture that You-B has seen?

            Here we're not talking about ordinary psi activity, which is a persistent but fairly low-level effect (and probably will end up being explained in materialistic terms, as nonlocal interactions with brains at one or both ends). Instead we're talking about a 100% hit rate: You-A knows everything You-B has seen, and vice-versa.

            If You-A and You-B have a 100% hit rate on each other's pictures, then you might have a reasonable claim of eternal life via copying. But if not, then when either of you dies, you're still dead, and the other one gets to come to the funeral.

          • hypnosifl

            As I said in another response, I don't imagine there would be any sharing of experiences after a "fork", so You-A would do no better than a stranger at guessing what You-B is seeing. Like Chalmers I am assuming that all measurable physical events can be explained in terms of purely physical causes, so unless you specifically installed a physical information channel between two simulated brains there's no reason new information perceived by one after the fork should be available to the other. You do suggest that perhaps "psi" could be explained in physical terms using quantum nonlocality, but as with out-of-body experiences I've never seen any good evidence of psi effects (sometimes tests for psi claim to show statistical deviations from chance, but I think this is likely due to some combination of bad statistical analysis involving post-hoc choices of what result to estimate the probability of, along with the "file-drawer effect" where the majority of tests that yield null results aren't published, so the published studies are a biased sample--see http://www.csicop.org/si/show/heads_i_win_tails_you_loser_how_parapsychologists_nullify_null_results for a discussion of both issues in psi research).

            And as for quantum nonlocality, it's been proven theoretically that according to the known rules of quantum field theory, it should be impossible to use entanglement to transmit information in a nonlocal way; an abstract of the proof can be found at http://link.springer.com/article/10.1007%2FBF00696109 . Also, the apparently "nonlocal" effects of entanglement can be explained in a purely local way by the many-worlds interpretation of quantum mechanics. (An implicit assumption in Bell's derivation of nonlocality was that each measurement yields a unique result, which isn't true in the many-worlds interpretation where a measurement will cause the experimenter to split into multiple versions observing different results, and the universe doesn't have to decide which version of experimenter #1 gets put in the same "world" as any given version of experimenter #2 until there's been time for a signal moving at the speed of light to get from one to the other. For more on this point see the various papers referenced on p. 2 of the paper at http://arxiv.org/pdf/quant-ph/0103079v2.pdf )

          • G

            If there's no sharing of experiences after a fork, then there's no basis for a belief in immortalism via cloning, uploading, or other forms of magic in the guise of technology, so about that we agree.

            Re. psi, for the file-drawer effect to be correct, there would have to be 3,800 studies showing nonsignificant outcomes for every study that has shown a significant outcome. There's no way that there's that much psi research going on.

            Yes, I'm well aware that entanglement does not produce communication. When I first read the piece in Scientific American about 20 years ago, the analogy that immediately came to mind was this:

            You can send a cyphertext at instantaneous speed, but you can only send the decryption key at c or below. Thus the recipient has "the information" instantly, but they can't decrypt it until they get the key.

            And that also perfectly describes psi activity. You try to remote-view a random target, and you write down your impressions; but you don't know if you got a hit until you observe the target via normal means.

            I'm quite certain that there will be a materialist explanation for psi, based on entanglement, and that will put the whole thing to rest as just another interesting but ultimately unsurprising aspect of brain functioning.

            That said, there's a reputable physicist who has been writing about nonlocal signalling, though I've forgotten his name at the moment; in any case his theory is a minority position.

            As for the many-worlds interpretation, the only way that could possibly work is if only the objects whose wave functions collapsed were to split: not the entire universe splitting every time some part of it had a wave function collapse. But none the less one must ask, whence comes the energy for splitting even the infinitesimals? And lastly, multiverse theories are notoriously unfalsifiable, which to my mind is sufficient reason for scepticism. (Though of course I keep an ear open for news on the subject, since after all I might be wrong.)

          • hypnosifl

            "If there's no sharing of experiences after a fork, then there's no basis for a belief in immortalism via cloning, uploading, or other forms of magic in the guise of technology, so about that we agree."

            As I said, it seems plausible that a theory of consciousness which based continuity of consciousness on continuity of information/pattern would say that you have a 100% chance of "becoming" the upload in the case of destructive uploading, and you didn't respond to my question about why you're so confident that the consciousness associated with the organic brain would be extinguished or go to some sort of "afterlife" in this case. I don't bring up this possibility because I'm advertising for "immortalism" (though I wonder if you are so focused on fighting this idea because it threatens your worldview about the afterlife in some way), just because it seems the most natural consequence of information/pattern based continuity, and you've asserted this idea is wrong without giving any sort of argument.

            "Re. psi, for the file-drawer effect to be correct, there would have to be 3,800 studies showing nonsignificant outcomes for every study that has shown a significant outcome. There's no way that there's that much psi research going on."

            What analysis does that figure of a 3800:1 ratio come from? Since psi research is often plagued by poor statistical analysis, how do you know that figure isn't itself an example of this? Have you verified the analysis yourself?

            "Yes, I'm well aware that entanglement does not produce communication. When I first read the piece in Scientific American about 20 years ago, the analogy that immediately came to mind was this: You can send a cyphertext at instantaneous speed, but you can only send the decryption key at c or below. Thus the recipient has "the information" instantly, but they can't decrypt it until they get the key."

            No, that analogy doesn't work--you are confusing "information" with some notion of "meaning", I think. Information is defined in an abstract way that has nothing to do with the meaning of the message being transmitted. A transfer of information occurs in any case where the sender can ensure that the recipient gets a specific string of symbols (usually 1's and 0's), regardless of whether that string is decodable as some sort of meaningful message, or is encoded and has no meaning for the recipient until they get the key, or is even nothing but a totally random string that the sender decided to transmit. And information is transmitted even if the sender doesn't have total control over the string the recipient receives, but can just statistically bias it in certain ways. The proof I referred to above shows that no such transfer of information is possible using quantum entanglement--there will be absolutely no statistical correlation between any choices the first experimenter might make in how to measure their particles and the results measured by the second experimenter.

            This would include the types of statistical correlations claimed in ganzfeld research, where it's claimed that the receiver is more likely to see certain kinds of images that are associatively linked to the image the sender is concentrating on. This obviously gives rise to the problem of post hoc analysis, where knowing the sender's image can lead you to invent connections to what the receiver reported in retrospect. Any decent experimental design for a ganzfeld experiment needs to avoid this problem somehow, perhaps by having the researchers who score the tests be given a collection of several sender images and a collection of several receiver reports, with the researchers being "blinded" to which senders had been paired with which receivers until after they did the scoring. An alternative might be to have the scoring be done by a computer with a pre-stored database of linked words.

            Either way, if the tests were consistently successful this would mean there *is* some sort of statistical correlation between the choice made by the sender and the words written down by the receiver, and the proof I mentioned shows that entanglement does not allow for any such statistical correlations between the choices of one experimenter and the observations of the other. (There is a correlation between the observations of each experimenter, but the correlations are between individually random strings of data observed by each experimenter, for example if the first string was 101101010001 then the second string could just have the 1's and 0's reversed, i.e. 010010101110...there is no way for either experimenter to bias the strings in a way that allows them to make specific desired patterns of 1's and 0's more likely to appear.)
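            The no-signalling point can be made concrete with a toy simulation--a classical illustration of the statistical claim, not a quantum-mechanical calculation. However the sender's "setting" is chosen, the receiver's string is individually random, so its statistics carry no imprint of that choice, even though the two strings are perfectly anti-correlated.

```python
import random

def run_trials(sender_setting, n=100_000, seed=0):
    """Toy model of perfectly anti-correlated pairs: each trial yields
    a random bit for the receiver, and the sender's bit is its
    complement. The sender's 'setting' has no effect on the receiver's
    marginal statistics -- which is the no-signalling point."""
    rng = random.Random(seed + sender_setting)
    receiver = [rng.randint(0, 1) for _ in range(n)]
    sender = [1 - b for b in receiver]   # perfect anti-correlation
    return sender, receiver

# The receiver's fraction of 1s is ~0.5 regardless of the setting:
_, r0 = run_trials(sender_setting=0)
_, r1 = run_trials(sender_setting=1)
p0 = sum(r0) / len(r0)
p1 = sum(r1) / len(r1)
```

            The anti-correlation is only visible when the two strings are brought together and compared through a classical channel, which is exactly why no usable signal travels faster than light.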

          • G

            Re. 100% chance of becoming the upload: No more so than if you copy a document on a scanner and then shred the original. The obvious question is, by what agency does your forward-continuity of experience transfer to the copy?

            Either a) the information is destroyed when the biological brain decays, or b) the information maintains some kind of coherent entanglement but dissipates into a background sea of ambient information ("raindrop into the ocean" analogy), or c) the information maintains coherent entanglement and overcomes dissipation to transfer intact to the target device. The latter requires a mechanism, such as an agent with relevant capabilities, to overcome dissipation.

            No, I'm not promoting an afterlife either. I'm agnostic on the issue, given the apparent conflict between a) the material monist theory of mind, with ample empirical support and canonical status, and b) clinical reports from NDEs that are highly consistent and appear to contradict (a). However I'm inclined to believe that the apparent conflict will be solved with sufficient research. For one thing, there was a finding to the effect that high-frequency gamma EEG was measured in dying patients at a point after cessation of heart activity, and it may be that the NDE correlates with this activity.

            Nonlocal "information": You assumed erroneously that I meant that the original plaintext was human-meaningful. The original plaintext could be a string of bits from a random number generator; what matters is that the output of decypherment matches the original. The recipient can't decrypt until the sender informs the recipient of the keystream (the series of manipulations of the polarizer at the sending end), via local means (at c or below).

            The blinding you propose for ganzfeld research is exactly what was done, going back to Krippner et al. in the 1970s. The target image is one of N candidate images; the subject attempts to remote-view the target and writes or draws the result, and blind scorers attempt to match the subject's outputs to the images.

            Re. "signal nonlocality," the name I was looking for is Antony Valentini, who derived the idea from the de Broglie-Bohm theory, which is a "causal" rather than "acausal" theory of QM, though Bohm did not agree with the deterministic interpretation.

          • Gaga

            Suppose it eventually happens that the brain can be interfaced with electronic components that mimic the way the human brain works, so as to bypass damaged areas--to help the blind see or the lame walk, for example. If the artificial parts are able to learn from the real parts, including sharing in thoughts and memory, then one would assume the electronic parts, being part of the whole, become aware of self and thus play a part in the life of the conscious being. Now, at some point the human brain dies while the electronic parts remain functional. Who is to say, given the way they have been designed to work, that the electronic parts cannot retain memory of self and thus the original human state of consciousness?

            I kept wondering how the human mind could be downloaded to an electronic brain, and it seemed to me that a true copy could only be obtained by integrating the two together and letting the electronic part learn over quite a long time.

          • G

            That still doesn't work, any more than a prosthetic limb continues to live after a person has died. During the person's life, s/he experiences the use of the limb. After the person's death, they're still dead, even though the limb is still functional and could be used by someone else.

            The subjective experience of someone with electronics plugged into their brain would be analogous: at first they would have the godlike sense of being plugged into the whole world's knowledge, and of sharing thought with everyone else who is also plugged in. But as their biological brain ages and loses capacity, they will lose capacity too: like the person with the prosthetic limb that retains its hydraulic muscular strength but loses coordination when the person using it gets drunk. The person with the brain-prosthesis will find themselves plugged in but going senile nonetheless.

            Eventually the parts of their brain responsible for consciousness will cease to function, and at that point they will be dead. The prosthetic implants may continue to function, but their experience will either have shifted to a hereafter or will have ceased.

          • TypicalMoron

            I think this is what matters most.

            We could have a virtual universe of every single living person from a set date.

            But the "me" person is still going to die, and I'm not going to hasten it just so there's a copy of me running around. Assuming most people aren't going to commit suicide just to make a copy of themselves, then this whole process is going to be moot in any meaningful sense to the people who want to be able to "upload their brain into a computer".

            It won't be "you". Are you ready to die so that a copy gets to do all the fun stuff inside a computer? No?

            Next.

            You either?

            Great thought experiment, terrible idea in practice.

          • SaintMarx

            The assumption that all reality can be reduced to laws of physics is just that - an assumption. Unless and until physics actually succeeds in such a reduction, the materialist presumption is just an article of faith.

          • G

            I've read Chalmers in depth and agree with him to a very large extent. From what I know of Pinker I'd say I probably agree with a small amount of what he's written and disagree with the large majority.

            But neither Chalmers' interactionism nor Pinker's computational theories gets you eternal life at the altar of the computer god. Even if your hypothetical forking of identity was true, the physical person on each side of the fork will eventually die, and the mind that resides in that person's brain will either go into some kind of afterlife or will cease to exist.

            The many-worlds interpretation of QM is presently unfalsifiable, so locating eternal life there is presently still an exercise of faith rather than science.

            Here I should mention I'm not a priori averse to the dualistic version of interactionism (and to the more-or-less conventional hereafters it implies). The data from NDEs (near-death experiences) demonstrate self-aware and state-aware lucid consciousness in persons who are under the influence of surgical anaesthetics and sedatives that ordinarily produce the opposite effect, of complete loss of consciousness. That there could be sudden, brief paradoxical episodes of complete lucid consciousness during the action of any such drugs is a testable hypothesis, and highly unlikely at best. In any case, all that's needed to test the "hereafter hypothesis" is to correlate measurable brain activity with time, and reported NDEs with time, to ascertain whether any of the NDE overlapped with a period when brain activity was flatlined.

          • hypnosifl

            "Even if your hypothetical forking of identity was true, the physical person on each side of the fork will eventually die, and the mind that resides in that person's brain will either go into some kind of afterlife or will cease to exist."

            Sure, but before the fork, "you" (understood as a stream of consciousness) would have a 50/50 shot of either ending up as a biological human or ending up as an upload. The focus of my argument was never on advocating uploading as any sort of guarantee of "eternal life", just on the metaphysical question of personal identity and whether an upload's memories from before the brain-scanning would be "false" ones.

            "Here I should mention I'm not a-priori averse to the dualistic version of interactionism (and to the more-or-less conventional hereafters it implies). The data from NDEs (near-death experiences) demonstrate self-aware and state-aware lucid consciousness in persons who are under the influence of surgical anaesthetics and sedatives that ordinarily produce the opposite effect, of complete loss of consciousness."

            I've never seen any convincing evidence that NDEs are anything other than the activity of a dying brain, see for example the research discussed at http://io9.com/a-new-scientific-explanation-for-near-death-experiences-1110395345 and http://www.npr.org/blogs/health/2013/08/12/211324316/brains-of-dying-rats-yield-clues-about-near-death-experiences

          • G

            In order for there to be a 50/50 probability of "you" ending up in the machine, the transfer of a mind outside of a biological brain must first be possible, in other words, reincarnation. Otherwise there's a 100% probability that "you" will still be stuck inside your brain.

            Of course NDEs are activity that in some manner occurs or is recorded in a dying brain. Otherwise the person would have no memory of the experience. The problem is, they occur to individuals whose brains are in no condition to support any kind of consciousness whatsoever, for example under the full influence of combinations of surgical anaesthetics plus narcotic analgesics. Any assertion that a brain can somehow overcome the effects of those drugs and exhibit lucid consciousness is every bit as much an endorsement of substance dualism as the assertion that the mind has somehow persisted while the functioning of the brain has been shut down. In fact it's the same thing.

            Either way you end up with dualism. And the explanation of "emergence reactions" and "paradoxical reactions to the drugs" and so on, does not provide a mechanism: if anything, it's an endorsement of the position that a lucid conscious mind can operate under conditions that should make it impossible.

          • hypnosifl

            "In order for there to be a 50/50 probability of "you" ending up in the machine, the transfer of a mind outside of a biological brain must first be possible, in other words, reincarnation. Otherwise there's a 100% probability that "you" will still be stuck inside your brain."

            I think the term "reincarnation" would be a bit misleading since it ordinarily refers to some transfer of identity between different personalities with different life experiences. But yes, in the type of theory of consciousness I'm talking about, a subjective stream of consciousness would be able to jump from one physical computing system to another as long as there was the correct sort of information/pattern continuity between the two systems.

            "Of course NDEs are activity that in some manner occurs or is recorded in a dying brain. Otherwise the person would have no memory of the experience. The problem is, they occur to individuals whose brains are in no condition to support any kind of consciousness whatsoever, for example under the full influence of combinations of surgical anaesthetics plus narcotic analgesics. Any assertion that a brain can somehow overcome the effects of those drugs and exhibit lucid consciousness, is every bit as much an endorsement of substance dualism, as the assertion that the mind has somehow persisted while the functioning of the brain has been shut down. In fact it's the same thing."

            I think you overestimate how well the operation of anesthetics is currently understood--they are used because they have been found to work in practice, not because we have any detailed theoretical model showing that they must always halt or drastically cut back on the communication between neurons that gives rise to consciousness, so the claim that "the functioning of the brain has been shut down" is not justifiable in terms of present science.

            More evidence that anesthetics do not shut off brain activity 100% of the time can be found in the fact that a tiny fraction of people given general anesthesia during surgery later report that they were aware but paralyzed during the procedure (just ordinary sensory awareness, with none of the less mundane experiences associated with NDEs), see https://en.wikipedia.org/wiki/Anesthesia_awareness ...also see the technical article at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2175003/ which notes in the opening paragraph that "the mechanisms of action of anesthetic drugs remain poorly understood". Also, surgery patients don't generally have their brain activity monitored throughout the procedure, and NDEs and anesthesia awareness are both rare, so I don't know of any cases where there's an actual record of a patient's brain activity during such an experience.

          • G

            Paradoxical drug reactions to anaesthetics almost certainly occur in a smaller percentage of patients than NDEs do. The numbers should be widely available, so a basic statistical comparison (a two-tailed two-proportion test) should be able to assess the significance of the difference between the groups. (Let's please not get into the "frequentist vs. Bayesian" debate here;-)
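            For comparing two incidence rates, the standard tool is a two-tailed two-proportion z-test. A minimal sketch follows; the counts below are made up purely for illustration, not real incidence data for anaesthesia awareness or NDEs.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-tailed z-test for the difference between two proportions,
    the standard test for comparing incidence rates between groups."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-tailed p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts for illustration only:
z, p = two_proportion_z(x1=30, n1=10_000,    # group 1: 30 cases in 10,000
                        x2=900, n2=10_000)   # group 2: 900 cases in 10,000
```

            With rates this far apart the test returns a very large |z| and a p-value near zero; with identical rates it returns z = 0 and p = 1, as expected.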

            However, Stuart Hameroff has proposed an experiment that could test one of the mechanisms of anaesthetic action. This follows on recent findings from Bandyopadhyay et al. that microtubules in vitro become AC-conductive at high frequencies, which supports the "orchestrated objective-reduction" theory of quantum computation in neurons. Per Hameroff, take the same experimental setup and expose the microtubules to anaesthetic gases at various concentrations known to correlate with degrees of anaesthesia in humans: the result should be a proportional shutdown of the observed AC conductivity, and resumption of that activity after the gases are cleared from the apparatus.

            Also per Hameroff, monitoring of surgical patients' brain activity is fairly common in the USA. There is a widely-used medical instrument that interprets EEG activity into a standardised numerical scale that the anaesthesiologist uses to adjust relative dose levels of different drugs the patient receives during surgery.

            That doesn't get us a mechanism for NDEs, but the observed high-frequency gamma EEG following cessation of heart activity in dying patients might. How that high gamma activity compares to similar activity in humans under other conditions (high-concentration task performance, meditation, etc.) remains to be seen.

          • Madalina Boangiu

            yes, but at least "you" have the opportunity to rethink things. It's an experience that must be "lived"

          • G

            Nicely said. BTW, I do understand some of the theories of the physics, neurochemistry, etc., of consciousness, and they get you to the same place: the copy isn't you, and when you die, you die.

          • SaintMarx

            "...notions like personal identity and consciousness don't figure into any of the fundamental laws of physics."

            …which is why the fundamental laws of physics do not provide a complete description of reality. We cannot just assume material reductionism to be true.

          • hypnosifl

            Well, of course I acknowledge that's possible, why do you think I spent so much time talking about David Chalmers' ideas? But I do think there's a lot of evidence to support the idea that reductionism is correct at the purely behavioral level (and Chalmers would agree), meaning the external behavior of any physical system (including things like human speech acts) can in principle be explained in terms of the arrangement of the more basic parts it's made up of and the rules governing their interactions.

            Anyway, once you assume extra metaphysical laws dealing with consciousness and identity, what makes you so confident you know how these laws would work? In another comment you say that "This only reproduces a continuity of patterns, not a continuity of a specific consciousness", but as I already said in other comments, it seems at least possible that the laws of consciousness would have the following properties:

            1. Subjective experience can "fork", so a single consciousness can split into two that are both extensions of the original stream of consciousness before the fork.

            2. Continuity of consciousness depends only on continuity of pattern/information. Combined with #1, this would imply that if a snapshot of your brain is taken and used to create a mind upload, the stream of consciousness would split at the moment of the snapshot, with one branch remaining with the original brain and one going into the simulation.

            If you feel confident consciousness wouldn't work that way, is that just based on a strong personal hunch or do you have some reasoned argument against the possibility?

          • http://thewayitis.info/ Derek Roche

            Nice work, Hypno. One only has to think about bacterial reproduction to see the truth in what you're saying. Parent and offspring have the same identity. The same applies to embryonic development. We ourselves are identical in some sense to the single fertilized cell we began as. I'm wondering if you have any further thoughts on the ontological status of the "psychophysical laws" you and Chalmers hypothesize? (I do).

          • hypnosifl

            Well, what are you thinking of when you talk about the ontological status of the laws? (as distinct from the ontological status of consciousness and conscious experiences, presumably) I'm always a little unsure about whether it makes sense to talk about the "existence" of abstract things like numbers or laws of nature--I do think psychophysical laws would need to have the same ontological status as physical laws, and I think there are objective truths about physical laws (for example, truths about which logically possible physical behaviors are forbidden and which are allowed), but I'm not sure if that means that physical laws "exist" separate from the phenomena they describe (and I'm also not sure if this is any more than a matter of how we choose to define the word 'exist'). Some philosophers do seem to equate "there are objective truths about X" with "X exists", but the Stanford Encyclopedia of Philosophy article Platonism in the Philosophy of Mathematics distinguishes "truth-value realism" from mathematical Platonism, indicating that the article's author thinks it's possible to believe that mathematical statements can be objectively true or false without this implying that mathematical objects "exist".

          • http://thewayitis.info/ Derek Roche

            OK, one step at a time then. What are your thoughts on the ontological status of the metaphysical component of the "psychophysics" whose "laws" are being hypothesized? This was my frustration in reading Chalmers. He makes a convincing case that the so-called "hard problem" of consciousness resists objective explanations but offers no hint of a way to resolve it.

          • hypnosifl

            What do you mean by "the metaphysical component"? As opposed to a non-metaphysical component? And I think Chalmers' postulate of psychophysical laws was itself intended to be a philosophical resolution to the hard problem (he only said it was unresolvable if you limited yourself to objective descriptions of brain function), in the sense that it would determine the relation between the physical world and subjective qualia. The phrase "the physical world" should be taken with a grain of salt here though, since he also suggested the possibility of a version of naturalistic panpsychism in which all the patterns of events we call "physical" are associated with subjective experiences of some kind (even the interactions of molecules in a box of gas, for example). So although there'd be no need to reject the equations we know of as the "laws of physics", they'd be reinterpreted as dealing with some sort of "objective", information-based descriptions of mental patterns which also have a "subjective", qualia-like side.

          • http://thewayitis.info/ Derek Roche

            Thanks for the link, Hypno (or should that be Dave?). That's a formidable piece of writing, is it yours? Intersubjectively speaking, the version of naturalistic panpsychism I'd prefer would be restricted to the patterns of events we call "organic" or, at a stretch, "holistic" and it would be a function of the self-referential logic of a self-generated world, one of which the laws of physics can be shown to be symptoms, not causes. Here's a link to my work, if you're interested.

          • G

            If panpsychism is true, it provides a potential resolution to the problem of what happens when you die:

            The information contained in your brain diffuses into the larger sphere of information held by mind at-large.

            Subjectively this would be like the experience of a drop of water falling from a rain cloud (being born and living) and then splashing into the ocean (dying and losing its individual existence as it merged with the greater whole). (In which case why would a raindrop prefer to be stored in a glass bottle?;-)

          • Mr.Bill

            Would this mean that both sets of sensory and cognitive data (from both brains) would now be cognized within one field of consciousness? Perhaps like a dual-monitor setup? If so, this would seem to mean that this field of consciousness in which both sets of experience were presented is somehow non-local and not limited to a physical body, i.e. that there's some kind of superspace in which a single awareness experiences the sensory input of two brains. Such a position seems hard to swallow for materialist reductionism.

            If one were instead to posit that this "dual monitor" system actually occurs to one brain, so to speak, or that this "fork" results in two discrete sets of experience, so that the original "stream of consciousness" does not have access to both sets of input, then the question remains as to what happens to the consciousness experiencing the input from the mortal, dying brain. It may be nice for consciousness A1 to know that there's another consciousness A2 experiencing something very similar to its typical patterns of cognition, etc., but this wouldn't change the fact that when the brain/body from which consciousness A1's experience arose ceased, so would it. Such a situation would really be little different from having a child. In so doing, there's a sort of perpetuation of genetic pattern and, likely, cognitive habits, but if someone blows your brains out, it's lights out, even if you've sired more offspring than Genghis Khan.

          • hypnosifl

            No, if there were somehow a single consciousness that perceived the contents of both brains, that wouldn't be a "fork" in which a single stream of consciousness splits into two different streams of consciousness with their own distinct perceptions, which is what I meant with premise #1 above. Also, if you start from Chalmers' premise that physical states give rise to experiences but the physical states themselves aren't causally influenced by anything nonphysical (so that "reductionism is correct at the purely behavioral level" as I suggested above), then it would seem to conflict with the spirit of this premise to imagine some non-local consciousness can integrate the information in two distinct brains when there's no physical information-sharing between the two brains. For example, this premise would indicate that if you put the two duplicates in isolated rooms, behaviorally they won't exhibit any knowledge of things that have happened to the other version since the duplication.

          • Mr.Bill

            So then you seem to be positing what I summarize in the second paragraph of my post, which strikes me as also problematic, here vis-a-vis immortality. If they are distinct entities, and the original brain that's been copied has no way to access the cognitive input of the immortal copy and vice versa, then when the mortal, original brain ceases to function, so too does its concomitant sense of experience. A copy may continue functioning, just like a clone or offspring would, but for the consciousness associated with the original brain, that brain's death would result in the annihilation of experience, as there would be no way for it to continue sharing in the experience of the immortal brain.

          • hypnosifl

            But if "forking" is possible, it doesn't make sense to say "the consciousness associated with the original brain" as if that phrase refers to a single consciousness. There would actually be two streams of consciousness, one that had been "associated with the original brain" up until the moment of the fork at which point it became associated with the physical copy, and another that remained associated with the original brain both before and after the fork. So if your experience is associated with the original brain before the copying, you know there is some chance your experience will fork into the physical copy and some chance it will remain associated with the original brain--the exact probabilities would probably depend on the details of the metaphysical "laws of consciousness", but if these laws determine continuity just by pattern/information, and the pattern of the copy the moment after it's made is close to identical to the pattern of the original brain, it seems plausible the odds would be something close to 50/50. So from a subjective point of view, thinking about the experiment beforehand, you could at least expect a 50/50 shot you'll soon find yourself as a near-immortal machine.

            Then there's also the question of how the odds would work in the case of a "destructive upload"--for example, the original brain might be frozen in liquid nitrogen and then sliced up into ultrathin sections by a laser, with each section scanned and used to build a simulation of the complete brain in a computer. In this case would you have a 50% chance of waking up as an upload and a 50% chance of dying, or would it be more like a 100% chance of waking up as an upload? I think that if you accept the possibility of forking in the original case, and you also accept the premise of continuity of consciousness depending only on pattern continuity, then there's a good case to be made for a 100% chance.

            I'll give you a little thought-experiment to try to argue for the 100% chance here. Instead of uploading, suppose I step into a physical duplication machine that scans my atoms and builds an exact copy of me (or near-exact, with only irrelevant microscopic differences in the precise positions of individual atoms). If you accept forking and pattern continuity, I should expect a 50% chance of finding my stream of consciousness remaining with the "original" and a 50% chance of it becoming associated with the "copy". But now suppose that although I am scanned, someone forgot to plug in the duplication chamber so it doesn't produce anything. Do I have a 50% chance of "dying" here because my consciousness forks and one part "tries" to go to the duplication chamber but finds nothing there? That doesn't seem to make sense, the mere failed intention to create a duplicate shouldn't affect the probability, and if continuity of consciousness depends on continuity of pattern, then forking should only happen when two or more near-identical patterns arise when there was previously only one, and no such pattern exists in the empty chamber. But the same argument suggests that if the duplication chamber malfunctions and produces a "dead" copy of me--say, one without a head, or one whose temperature has dropped to that of liquid nitrogen--then no "forking" would happen here either, and I'd still have a 100% chance of having my stream of consciousness remain associated with the sole functioning physical pattern.

            Of course you can only get to this conclusion given several other premises which could easily be false: the premise of objective metaphysical laws governing consciousness that go beyond purely physical laws, the premise that these laws would depend on some type of pattern continuity rather than other types, and the premise that they would allow for "forking". But given all these, I think the case for a 100% chance of your consciousness "surviving" a destructive uploading (assuming no errors on the physical level of scanning and simulating) is pretty good.

          • Mr.Bill

            Perhaps the rub here is accepting "the premise of continuity of consciousness depending only on pattern continuity." This premise would seem to play out as follows: if a set of neuro-cognitive patterns is replicated perfectly, then the associated consciousness has been replicated perfectly, as well. Viewing the question of forking consciousness from the outside in, so to speak, this is likely all we can say. As a rigorous materialist, one must put the matter, as it were, in empirically observable terms verifiable by a third party in a repeatable experiment. So, if you make two precisely identical cognitive apparatuses or, you know, brains, you've made two identical consciousnesses. If one of these brains happens to be made of near-indestructible material, or at least something that won't break down after four score and twenty, then you've made that consciousness basically immortal, so the premise might go.

            The problem, however, arises when one views the replication of pattern from the inside out, from the perspective of the experiencing subject. Here, consciousness is not just patterns running off; it is also someone or something being aware of those patterns as objects of cognition, of awareness. Indeed, the question of identity--around which this whole debate revolves--the question of who one is after the fork seems to be a question of which brain's patterns one is aware of. If I'm aware of the sensory and cognitive input of the immortal brain, then I'm immortal. If I'm aware of the old meatspace brain, then I'm still going to die. You're either one or the other.

            What's implicit in your supposition of the odds that consciousness will "go" somewhere is that this awareness is nonlocal, that it can leap from brain to brain, body to body, and is somehow beckoned to a new brain by the creation of an exact copy of itself. To me, this seems like a tortured attempt to cram the subjective notion of consciousness into the terms of the objective notion of consciousness. It seems far more likely that if someone created a precise copy of my brain, the awareness currently associated with this body wouldn't magically, for lack of a better term, have access to the sensorial and cognitive input of that brain, but instead would perceive the brain as anyone else would--a thing floating in a tank somewhere. That brain might have all of my memories, thoughts, desires, and so forth, but they would be inaccessible to this awareness, which would still be associated with this body, which would still be subject (pardon the term) to disintegration and death.

            To digress a bit, this awareness is the one thing rational empiricism can't get a hold of, for it never presents itself as an object of study. It is, instead, the means of study, the means of knowing. As soon as you claim awareness is somewhere as an object, the question immediately arises, "what's aware of said object?" The closest one can get is to examine the processes most closely associated with this awareness, and limit one's definitions to them. We shouldn't mistake such definitions for comprehensive models of reality, however. Indeed, such definitions would seem quite hollow if our new immortal copies were presented to us with the good news that even though we're going to die in a few decades, those guys over there are going to live forever. A good old-fashioned statue would bring about as much succor.

          • http://thewayitis.info/ Derek Roche

            Someone else made the valid point that if it's patterns that define consciousness, they are dynamic patterns, requiring constant feedback from the embodiment to maintain, so that a frozen slice in time would not necessarily continue the same way in a new embodiment. Unless that new embodiment was itself a feedback system, in silico or in utero, there would be no continuity at all.

          • G

            Hear, hear! Well said. Excellent, in fact.

            See also my comments about the impossibility of "taking a snapshot of the brain" without destroying the quantum and classical information contained in the neurons. If one can't take a snapshot, one can't make a copy. Now what are we going to do about deprogramming all these eager members of the Church of Singularitology? ;-)

          • G

            No, and see my comment above about the OBE thought-experiment.

            The copy will have "backward continuity" and remember your life. The original you, sliced up in liquid nitrogen, will not have "forward continuity," it will die and either go into an afterlife or cease to exist.

            People who are afraid of an afterlife should take their moral inventories and change their lives. People who are afraid of nothingness should practice the contemplation of nothingness, and seek to achieve the state of cessation of thought during meditation.

          • hypnosifl

            'The copy will have "backward continuity" and remember your life. The original you, sliced up in liquid nitrogen, will not have "forward continuity," it will die and either go into an afterlife or cease to exist.'

            How do you know that, exactly? My suggestion was that in a theory of consciousness of the type Chalmers suggests, there might be objective truths about continuity of consciousness (an issue Chalmers doesn't discuss), and the theory might say that subjective continuity is determined by continuity of information or pattern on the physical level, rather than by continuity of specific atoms (as I mentioned, the atoms your body is made of today are mostly different from the ones it was made of a few years ago). If that were true, there would seem to be such continuity of information/pattern between the biological brain immediately before being frozen and the upload immediately after the simulation begins to run. Do you have any reason to be certain a priori that the true theory of consciousness wouldn't work this way?

            I agree, incidentally, that it's not psychologically healthy to grasp at mind uploading to dispel anxiety about death; I have not been advocating such an idea.

          • G

            As it turns out, the methods used to attempt to preserve brains each have deleterious effects on brain tissues and on mechanisms that are postulated for storage of information in brains. No existing method preserves both the integrity of the cells and the integrity of the microtubules.

            "Pattern" is equivalent to "configuration of bits," and there is no thermodynamic privilege or penalty for any given configuration as compared to any other. For which reason I say that "meaning is orthogonal to thermodynamics," in that it takes the same quantity of electrical energy to transmit a file of bits regardless of whether the content is human-meaningful or not.

            So with that, yes it might in theory be possible for a configuration to be replicated from one platform to another; but the subjective existence that resided on the first platform would then be separate from that on the second. But as a practical matter there's no way, within current physics, to read out the data from a brain.

            Good that we agree that Uploadism isn't a viable solution to anxiety about death. I wasn't so much thinking that you were advocating it, as that the article itself is doing so. There are many famous Silicon Valley bigwigs (Sergey Brin, Mark Zuckerberg, Larry Ellison, among others) who are strong believers; they are promoting their beliefs in various ways, and IMHO what they are promoting is equivalent to homeopathy and other medical quackery: Power Placebos for the brain;-)

          • G

            Right, which is why praying for eternal life at the altar of the Computer God, is a pointless exercise.

            Though, here's a wild thought-experiment for you.

            Train two individuals to induce out-of-body experiences (OBEs) with reasonable reliability (Monroe et al.). Then place them in separate rooms and have each person leave their own body and try to enter the other's. After they return, let them describe elements of each other's life histories, and check each for accuracy.

            If each person accurately retrieves memories from the other, then the substance dualist hypothesis is supported: the mind is separable from the brain; the brain stores memory. If this is true, one of the hard theoretical objections to upload, duplication, etc., is removed.

          • Jay R

            I'm not an expert or even competent on the matter. I've tried to read and understand Wider than the Sky, but mostly failed. But, these comments have at least helped me understand how this might be possible. So I appreciate the time spent.

            At first I had the same reaction as many, that two copies do not equal one whole... or at best, the break of continuity would mean you and the copy exist separately (you're not aware of the copy, paths diverge, etc).

    But I think much of this thinking is based on the idea that we have some sort of soul, or other internal being, that exists outside of science.

            If you believe consciousness and self-awareness are purely defined by science, then it does make sense that you would continue to exist in this scenario. Much like restoring a backup of a hard drive, all the data and history is still there.

          • Jay R

            PS - I think I'd still rather die. Normal life is scary enough, I wouldn't want to spend my virtual afterlife being terrified of immediate termination via power failure or data corruption.

          • Michael Hanlon

            Immediate termination is a long way from being the worst thing that could happen to you in a virtual afterlife.

          • G

            How, exactly, do you propose "taking a snapshot of your brain"? See my comment above about this: it's not possible without collapsing quantum state vectors and destroying classical information-bearing molecules along the way. Burden of proof is on you to demonstrate that it's possible, and if you do, I'll personally call the Nobel committee and send them a link to whatever you publish about that in a peer-reviewed journal.

            If you can "fork" subjective experience, then in order for there to be forward between both minds, rather than the "you die, your clone lives" situation, you will first have to demonstrate a 100% hit rate on random-target psi tests between the two individual bodies that are allegedly sharing the same mind. That'll get you another Nobel, and the James Randi prize, if you can do it. (Speaking here as someone who believes that ordinary low-level psi will turn out to have a materialistic explanation in the form of nonlocal transactions with brains at one or both ends.)

          • G

            Correction: "... in order for there to be _continuity_ between both minds..."

          • hypnosifl

            I see no reason why one would need to know the exact quantum state--the idea is to create a simulation which has access to all the same subjective information as the original and has the same type of behavior; our brains probably store the information we have subjective access to (memories, abilities etc.) in the structure of the synapses, not at the quantum level. Besides, the molecules of our brain are constantly undergoing random thermal motion without it apparently destroying subjective continuity. As a theoretical matter I would note that it is in principle possible to measure the exact quantum state of any system, though--see complete set of commuting observables.

            You seem to have completely the wrong idea of "forking" if you imagine I am suggesting there would be psychic communication or any sort of sharing of new experiences after the fork. The very notion of a fork was meant to presuppose that they have totally distinct subjective experiences after the fork, and thus a single mind would become two distinct minds that only had the same experiences and identity before the fork. And as I said, I am talking about a theory of subjective experience along the lines that Chalmers proposes, where it's assumed that all physical events can be explained in purely physical terms, so there would be no predictions about the physical behavior of uploads that would differ from those of a pure materialist (so no psi obviously). Any new "laws of consciousness" would deal only with how subjective experience is correlated with events in the physical world, including the question of whether an upload's stream of consciousness would start at the moment the simulation was booted or whether it would be continuous with the stream of consciousness caused by the functioning of the original biological brain before it was scanned.

          • G

            OK, then we agree that a fork would create two separate minds.

            I'm inclined to believe that a forked mind would believe itself to have backward-continuity of experience: something like "la la la... (zap!)... hey wait a minute, what am I doing in this box?, I had a human body just a few moments ago!" Meanwhile the original mind would either have been erased or would continue as before but with a memory gap during the procedure.

            This assumes that "the box" can duplicate (rather than simulate in software) all of the different computing architectures that exist in the biological brain.

            This is all still pure speculation, and any empirical test is probably at least a century or more away.

          • G

            Any such "fantastic device" is magic, not science: not only can't it be done now, it can't be done at all, within the laws of physics, due to measurement uncertainties.

            How fast do you think we will realistically be able to replace biological neurons with silicon devices? Keeping in mind that each neuron has on average 8,000 connexions to other neurons.

            One neuron per second, which is to say 8,000 connexions per second? Then, to replace a human brain, which consists of some 85 billion neurons, you'll spend roughly 2,700 years in hospital, with the treatment going on 24/7/365.

            During that time you'll have either an opening in your skull that will have to be maintained in surgically sterile condition, or you'll be connected to a "drip" source of whatever-it-is that will also have to find its way from your veins to your brain without clumping up and blocking the blood vessels in other organs (such as your lungs, liver, kidneys, etc.). It will also have to pass the body's immune system (thus also immunosuppressive drugs and a sterile room) and pass the blood/brain barrier.

            By the way, what do you expect will be the energy source for the silicon neuron-replacements?

        • G

          Looks like you & I are both having the same problem: trying to show people that a copy of themselves is not the original, and there is no magical route to eternal life via the computer god. When either of us figures out how to get the point across, we should let each other know.

      • SaintMarx

        No. This only reproduces a continuity of patterns, not a continuity of a specific consciousness.

  • Scott Ferguson

    “I am not the physical. Life is absolutely immortal.” -Buckminster Fuller

    • discordian

      Buckminster Fuller, 1895-1983

  • Micro

    I don't think the people who freeze their brains will be the same person if the body is revived. The consciousness/soul will go to the other side when you die (or for you that don't believe in the other side, it will just simply disappear). If your body "wakes" up again, it will not be you. It will be a machine. I'm not overly religious, but I know that there are spirits in our world (yes, ghosts), so there must be another side. If you think believing in ghosts is crazy, then I urge you to go and live in a haunted house. You'll quickly change your mind, and possibly shit your pants doing so.

    And about the virtual copies, I agree with the others here. The copy will be a copy. And it won't even be like you, because it will be a machine, a machine with your memories. Nothing more.

    • smoochie

      I don't think believing in ghosts is crazy; I think it's stupid.

  • Sid

    Robin Hanson, economist at GMU, has thought about the economic and social structure of the society of future human emulations in a computer (or Ems as he calls them). He has done this in surprising detail and it's the topic of a book he's in the process of writing.

    The main point he makes is that the doubling rate of the economy will be much quicker in an Em-dominated world. It could be on the order of weeks/months. Right now the doubling time is on the order of years/decades.
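    To put rough numbers on that contrast, here is a quick sketch (the specific doubling times are illustrative, not Hanson's actual estimates):

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """How many times an economy multiplies over `years` at a given doubling time."""
    return 2 ** (years / doubling_time_years)

# Conventional economy: doubling roughly every 15 years.
print(growth_factor(10, 15))        # ~1.6x over a decade

# Em economy: doubling every month.
print(growth_factor(10, 1 / 12))    # ~1.3e36x over a decade
```

    The point of the exponential is that shrinking the doubling time from years to weeks doesn't speed growth up a little; it changes its character entirely.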

    • Agga

      It makes so much sense that we should strive to arrange our whole existence, indeed the very core of our beings and the world, to maximise economic growth. Economic growth is of course a purpose unto itself, and, ultimately, the meaning of life.

    • G

      Economic growthism on a finite planet is equivalent to saying that you can map an infinite plane onto the surface of a Euclidean solid.

      A doubling rate of weeks or months would burn through Earth's resources so quickly that the present climate crisis would become a moot point.

      However we can take comfort in the fact that the entire premise of Singularitology and Uploadism is "not even wrong" on numerous empirical and proper-science theoretical grounds. If people want to believe in it as a religion, that's their right. But it doesn't comport with science, regardless of the endorsements of Silicon Valley USA bigwigs.

      Aside from which, the advertising campaign for becoming an Em would be a non-starter. "Be Em!" Uh, no thanks.

  • Agga

    This is all very interesting, but I wonder if the author has heard of the embodied mind thesis. It seems to me that our minds are only in part our brains. There are interesting findings that suggest that our cognition and experience of the world are based so deeply in our experience of being in a body that to just take the body out of the equation is to erase the equation completely. And our consciousness, well, we don't even know what that is yet or how it works, so it seems rather premature to assume that we can do this sort of thing at all - let alone in a controlled way, or a way that could replace or even simulate actual life.

    As for this: "If your brain has been replaced by a few billion lines of code, perhaps eventually we will understand how to edit any destructive emotions right out of it. Or perhaps we should imagine an emotional system that is standard-issue, tuned and mainstreamed, such that the rest of your simulated mind can be grafted onto it. You lose the battle-scarred, broken emotional wiring you had as a biological agent and get a box-fresh set instead."

    Well that is the most chilling paragraph I have read in a long time. The view of life that is revealed here will give me nightmares. It suggests to me that to find this whole idea even acceptable, one must possess a pretty radically twisted idea of what a person is, what humans are, and what life is. A view that is very far from wise or healthy. It is reductionism in the most extreme, and a kind of solutionist, inanimate thinking that fails to see any layers except the very surface one.

    I bet most people would agree that a recording of a song falls very much short of replacing, or even approximating, the live performance. There is a great deal lost, and altered. Those who fail to see what is lost perhaps wouldn't ever miss the real deal. And it is likely that to such a person any attempt to explain will be futile. They don't experience it, so they cannot grasp it. The ineffable bloom, as E. M. Forster called it. The thing is, I don't want those people to decide the future of humanity or our world.

  • guest j

    The computer version of you won't be you. Once your brain is dead your consciousness is gone. A computer program intended to be a replica of your brain won't change that.

  • Michael Graziano

    Thank you, commenters! I wanted to reply to some of the common themes emerging in the discussion.

    The most common reaction I am seeing here seems to be that a copy of you, no matter how close, is still not you. It doesn't count as you because it's only a copy. But I would have to disagree. Consider that the human body is, what, roughly 60% water. All that water is constantly replaced. So are ions, lipids in cell membranes, and amino acids. Very little of the body has a long time constant. Every morning you wake up as a very close, but not precisely perfect, copy of the you from the past. At some point, if you understand even the basics of physiology, you have to arrive at the conclusion that what makes you, the essence of you, is the pattern, not the exact physical stuff. Copy the pattern, and you copy yourself. That thought makes people squeamish. That's understandable.

    A second common reaction seems to be that without a physical body, there would be no real basis for consciousness. But really, think about a dream. You walk around in a body that is simulated inside your brain. That body does things, jumps, runs, talks, all in simulation. Your real body is immobile. In dreams, your musculature is mostly paralyzed by a descending blockade of the spinal cord and you are running around in a simulated body. In a simulated world, you would also inhabit a simulated body.

    A third reaction seems to be, wow, that future sounds like hell. Well, yes, that is partly the point of my piece. But I would say, without apology for the bluntness, we better get used to it, because it's coming. All the technology and all the incentives go that direction.

    • Julian

      That the hardware is continuously changing does not mean that the software must stop. On the other hand, the copy and the original in your story can be viewed as two separate processes running the same program on different hardware and with different inputs. It's clear that they are two separate entities: there is no magical connection between the consciousness of the original and that of the copy.
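      The "two processes running the same program" analogy can be made concrete with a toy Python sketch (purely illustrative; the `original`/`upload` names are mine, not anything from the article):

```python
import copy

# Two identical "minds" at the moment of the fork...
original = {"memories": ["childhood", "college"], "substrate": "biological"}
upload = copy.deepcopy(original)

assert upload == original        # same pattern
assert upload is not original    # but two distinct objects

# ...which then diverge, because nothing links them after the copy.
original["memories"].append("watched the scanner power down")
upload["memories"].append("booted inside the simulation")

assert upload != original        # no hidden channel keeps them in sync
```

      Nothing in the copy operation creates a channel between the two; any later synchronisation would have to be engineered explicitly, which is exactly the "no magical connection" point.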

      • SaintMarx

        The computer metaphor of hardware and software assumes that the mind is just a configuration of the brain. This is not proven, but simply an unwarranted presumption of mind-brain reductionism and materialism. Materialism may be true, but cannot just be assumed to be true.

    • Agga

      Like I said, the embodied cognition thesis suggests that there would indeed be no basis - or at least a drastically different basis - for consciousness without a body.

      Also, your analogy of the dream is not valid. A dream is not like real life. Plus, the body provides much more sophisticated systems than motor systems: all sorts of sensory input, and more importantly, the ways we make sense of that input and experience our being and self, are deeply rooted in the body and even originate there. Emotions are felt in the body, and we don't yet know to what extent they also originate elsewhere than the brain.

      As for the water analogy, it isn't relevant: what makes the essence of you is not the pattern. If I clone you into two mini Michaels, and expose one of the clones to a caring environment, and the other to every form of torture or abuse imaginable, are you saying that they are the same? And further, that the abused twin shouldn't worry about its misery, because "his pattern" is out there somewhere, living the good life? I assure you it would matter little to the abused twin that "his essence" was elsewhere as well. It is the same thing; when you die and the copy comes online, you are still dead. Your essence is irrelevant - whether it lives on or dies doesn't change that you won't be there to enjoy it.

      Unless you are suggesting that we know that there is no continuity already - that is, that it is proven that consciousness is replaced along with the aminos and lipids etc. If this were true - then you would be right (though the abused twin would get little comfort in knowing that his memories of abuse really happened to some previous, identical, render). But I haven't seen that data. I believe this is an assumption that builds on further assumptions on the nature of consciousness.

      • hypnosifl

        "Embodied cognition" just says that the brain's way of thinking is shaped by its sensory and motor feedback from the body, so if a simulated body provided exactly the same sort of feedback I don't see a conflict with the notion of embodied cognition. Or are you suggesting that embodied cognition says that the brain has some sort of direct connection to the body not mediated by physical signals like neural signals from the sensorimotor system and hormones and such, so that even if you reproduced all the right kinds of physical inputs, somehow the brain would "know" that a real physical body wasn't present and would behave abnormally?

        • drokhole

          I think the "embodied mind" that people like George Lakoff speak of is how the nature of the body (and bodily experiences) structures our thoughts and conceptual frameworks. But it also implies that we haven't the faintest idea of what consciousness even is outside the context of the body. That being said, we now know that there are over 100 million neurons in the GI tract that have a direct influence on mentation - leading some to call it the "second brain":

          Think Twice: How the Gut's "Second Brain" Influences Mood and Well-Being
          http://www.scientificamerican.com/article.cfm?id=gut-second-brain

          Further still, scientists are beginning to learn of the vast influence that gut microbes have on our conscious experience:

          Gut feelings: the future of psychiatry may be inside your stomach
          http://www.theverge.com/2013/8/21/4595712/gut-feelings-the-future-of-psychiatry-may-be-inside-your-stomach

          Does this simply amount to more things signalling the brain, or do they point to the fact that consciousness might be more distributed throughout the body with our experience seemingly focused behind the eyes? I don't have a clue. But I also like this quote from philosopher Alan Watts that takes a more generalized perspective:

          "In your body there is no boss. You could argue, for example, that the brain is a gadget evolved by the stomach, in order to serve the stomach for the purposes of getting food. Or you can argue that the stomach is a gadget evolved by the brain to feed it and keep it alive. Whose game is this? Is it the brain's game, or the stomach's game? They're mutual. The brain implies the stomach and the stomach implies the brain, and neither of them is the boss." - Alan Watts

    • Cjmc45321988

      Kurzweil makes this point as well. So long as you digitize yourself slowly, so that both sides have time to mix into one identity, you are maintaining the continuity of consciousness; therefore it is possible to make it to the other side. Just as the ions and membranes are slowly replaced, you slowly replace the brain with digital stuff.
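      The dispute over gradual replacement can at least be stated precisely. A toy Python contrast (illustrative only; whether object identity has anything to do with personal identity is exactly what is in question):

```python
# Gradual in-place replacement keeps the same Python object identity,
# while wholesale copying creates a new object.
brain = ["bio"] * 5
original_id = id(brain)

for i in range(len(brain)):   # replace one "neuron" at a time, in place
    brain[i] = "digital"

assert id(brain) == original_id    # same container throughout
assert brain == ["digital"] * 5    # entirely new parts

copied = list(brain)               # wholesale copy
assert id(copied) != original_id   # a distinct object
```

      The incrementalist bet is that personal identity behaves like the container; the sceptics' bet is that it doesn't.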

      • SaintMarx

        This has not been proven. This is a classic problem of philosophy of mind. It's just as likely (if not more so) that this gradual digitization is actually just a gradual killing of the mind, replacing living consciousness with dead simulation. The gradual process does not resolve this concern.

      • discordian

        And what is it about the gradual nature of the process that preserves the existing thread of consciousness? "Gradual" after all is a relative term. Could you really say with any certainty that it's still you if neurons are replaced by digital equivalents over the course of an hour as opposed to a minute? This all seems like unfounded speculation to me.

        • Cjmc45321988

          It may not be proven, but one thing that is for sure is that it is already happening in the brain. So as long as you do it no faster than is already being done, you can do no worse than what the brain is already doing to itself. You have nothing to lose.

          • G

            Do the math:

            85 billion neurons in a human brain.

            Replacement rate of N neurons per T unit of time.

            Example: Replacement of 10 neurons/second = 270 years.

            Nature takes years to grow new neurons.

            So, you were saying....?
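            The arithmetic above checks out; a quick sketch to verify (the 85-billion neuron count and the 10-per-second rate are the commenter's assumptions):

```python
NEURONS = 85e9                          # commenter's figure for a human brain
RATE_PER_SECOND = 10                    # assumed replacement rate
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = NEURONS / RATE_PER_SECOND / SECONDS_PER_YEAR
print(round(years))                     # 269 -- roughly the 270 years quoted
```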

      • G

        If you digitise yourself at the rate of ten neuron-replacements per second, it takes 270 years to replace all 85 billion neurons in the brain.

        However the human brain takes years to grow new neurons, so replacing it at "natural" speeds would require tens of thousands of years.

        I am coming to a point of quasi-Buddhist compassion for Kurzweil. He and his cohorts seem to be motivated by, at minimum, an inability to grasp the concept of death, or more likely, real terror at the prospect of their own deaths. They all need to start practicing various forms of meditation.

    • Agga

      It's good to see you in the comments. Your article was very thought-provoking. Clearly you are a visionary. As such, I hope you understand I am only one-third joking in the following:

      "that future sounds like hell. Well, yes, that is partly the point of my piece. But I would say, without apology for the bluntness, we better get used to it, because it's coming. "

      This is basically a declaration of war. It takes away any choice people might have about how to live their lives. Some people, as you predict, will jump on this. Some people will never give up their birthright: to be alive and embodied in a physical world, as growing, learning, living, dying beings. If this is indeed an inevitable future, there will be war.

      I will preempt this whole conflict that I see playing out:

      The "Ems" will soon stop caring about the real world except as a place to go for resources. Those of us who will resist you, and continue our real lives as physical beings in a physical world, will insist on our sovereignty.

      And more importantly: our right, as physical beings, of precedence to those resources - indeed the entire physical world. We will want the earth, because you have rejected it. You will send your machine avatars to kill us. We won't let you.

      Let's preempt the bloodshed and... silicashed? crossed wires? deadly viruses? that will inevitably ensue by negotiating a treaty right here in the thread of this article.

      We will graciously give you enough resources to get off the planet. You can blast off into space and mine asteroids or whatever and build your group mind in peace. And leave the physical earth to the physicals. We need it and we want it.

      Good luck out there.

      • G

        Nicely said, though we physicals should also have precedence for space exploration and interplanetary & interstellar travel. Otherwise we only succeed in unleashing a Borg whilst condemning ourselves to extinction when the Sun explodes.

        Here's another pin to stick in the Church of Singularitology's balloon: If humans succeed in creating truly conscious machines, those machines will for all intents and purposes be persons with the inherent rights to their own existence. In that case, using them as vessels for upload is the moral equivalent of making babies to use as sources of transplant parts.

        • Agga

          Good point. Perhaps the Ems should leave the galaxy. After all, millions of years of space journey shouldn't be an issue to beings that are independent of the physical world. We could allow them to mine some resources off of certain asteroids as they go, as long as they do not stop or turn back. We reserve the right of precedence for the stars and planets, and all things that are needed for physical life.

    • mijnheer

      Dr. Graziano, you say "Every morning you wake up as a very close, but not precisely perfect, copy of the you from the past." From this you conclude that there is a "you" that persists despite the copying. But I think a Buddhist would draw the opposite conclusion and say, "No, there is not and never was a 'you', in the sense of a self that persists over time. So, no, the copy will not be you -- it will be another consciousness that (wrongly) imagines it is a later version of an earlier self."

      What makes you (I use that word loosely, of course) think we are not living in a simulated reality right now? Do you have any good reason whatsoever for believing this is not a simulation? Perhaps you are familiar with the argument by Nick Bostrom that there is a high probability that we are currently living in a simulation.
      http://www.simulation-argument.com/

      On the topic of simulated reality, there's a nice short story by Robert Sheckley, "The Store of the Worlds" (1959). Here's an excellent reading of it. (The Sheckley story begins at 10:00 minutes of the podcast.)
      http://www.drabblecast.org/wp-content/uploads/2011/09/Drabblecast-188-The-Store-of-the-Worlds.mp3

    • smoochie

      I don't buy it, sir, and I think it's easy to illustrate: Say you jump the gun and make your virtual copy long before you die. Is it you? No, clearly your sense of self, your capacity to experience, would remain with you and your physical body, while your copy runs around in virtual reality, perhaps imagining itself to be you. But you yourself would go on experiencing your life just as before--there would never come a time when you suddenly found yourself experiencing your avatar's perceptions or thoughts. None of this would change with death, quite obviously.

      As to your wishful rationalization about gradual molecular replacement, it's just not the same thing at all.

    • SaintMarx

      Thank you for the comment. However, your argument simply presumes material reductionism without proof.

      Furthermore, this fails to address the obvious problem (mentioned by several commenters, myself included) that multiple copies can exist in addition to the original, which shows that one's consciousness does not continue in the copy. At best, one has created another person with implanted memories; more likely, one has created a zombie with no consciousness at all. There is no way to prove that this copy is conscious; even if it is, it is not the original person.

    • http://www.livinginthehereandnow.co.za/ beachcomber

      "It doesn't count as you because it's only a copy." Quite - refer to the Ship of Theseus, also known as Theseus's paradox, which asks whether an object that has had all its components replaced remains fundamentally the same object (Wikipedia). On a quantum level, however, this is impossible.

      "A second common reaction seems to be that without a physical body, there would be no real basis for consciousness." Firstly, define consciousness in a more detailed context than the usual 'sensory awareness' definition. An incorrect premise results in an incorrect conclusion.

      Secondly, the dream state is conditional on a combination of physical and emotional experiences and is certainly not immobile - refer to sleepwalking.

      Nice article but no cigar : )) Too many assumptions.

    • Tauri1

      Seems to me you're suggesting the same thing Buddhism states, that there is no permanent "self" and that 'consciousness" is a process, not a thing.

    • G

      The copy is still not you. Nature already does this in the form of monozygotic twins, and they do not experience shared consciousness. So your copy lives on and you die, and you're still dead.

      The natural changes in the human body over time (turnover of atoms, molecules, cells) are incremental and sufficiently slow as to not produce a change in state of identity. But if nothing else, "incrementalist" theories of upload fail on the amount of time required to replace bio-neurons with hypothetical tech-neurons: 85 billion neurons in a brain, replace 10 per second (and their 80,000 connections), and the process requires 270 years.

      Dreams still occur in your original biological brain.

      It is not coming, it is a dead-end, regardless of the investment of mega-fortunes by Silicon Valley elites who can't come to grips with the idea that some day they will die. So, no, we'd not "better get used to it," any more than we'd better get used to flying unicorns that poop gold coins in our gardens.

    • Michael Hanlon

      The nub of all this (aside from the small problem of not knowing what consciousness is) is the continuity-of-self issue. Graziano is quite right in saying that when you wake up in the morning a 'new you' is created when your conscious brain reboots. This happens even more profoundly when you recover from a deep coma. What is the difference between this and making a copy of the coma-patient's brain and firing it up? Neurons are just neurons, after all.
      But that is not how it feels. If I tell you I will upload your consciousness into a machine where you can live, part-time, dipping in and out of the real world at will, then that is one thing - a sort of glorified Second Life that does not appear to pose any more profound philosophical problems than going on an acid trip.
      But if I say, OK, we'll do that, but we'll have to kill you first (maybe during the brain-recording process), and then you are stuck in Second Life forever - then what? If I am about to die, it is no consolation to know that in a few hours a new me will wake up, either in the real world or in some computer simulation, claiming to be me, with all my memories and so forth. This person, so far as I am concerned, will be an imposter. And I will still be dead.
      There is no way round this problem because the sleep/coma analogy seems to be 100% sound. When I fall asleep I do not despair and think I am about to die and some bogus version of me will hijack my body overnight. If I did I would go mad. Yet, maybe, that is in fact just what is happening and we should simply accept that we are alive only one day at a time.

  • bob

    Very Arthur C Clarke-esque. Like the author and several of the commenters, I do not wish to be a part of this hypothetical future world, yet few things give me more pleasure than contemplating its eventual existence.
    I have to say that I disagree with the commenters who believe that an exact copy of you, or even a simplified copy, cannot be the same as the original. However, if you create 1,000 copies of yourself, then "you" may end up dying 1,000 times. But the very fact that "you" are stored in a database, ready to be "awoken" at any time, implies a certain sort of immortality.

    • smoochie

      Yes, but it wouldn't be you--you, the original you--experiencing it. If I clone you, others might not be able to tell the difference, and even your clone might think he's you. But you--your identity, your sense of self--would still be stuck in your body. That which is awoken would not be you, even if it is identical.

      • bob

        Yes, I agree to a certain extent. But if you were the clone, then you would be just as authentic as the original you were made from, no? Of course, from the moment the clone is created, he or she begins to diverge due to different experiences and perceptions. In that sense, you retain your identity and sense of self. But at the exact moment of creation, there would be no difference between the clone and the original. Your identity and your sense of self would also be cloned into the other. I think that the film "The Prestige" presents an interesting interpretation of this idea.

    • catlettuce redux

      Why would you prefer death?

      • bob

        I guess that I'm a bit of a romantic. I don't want to be so blatantly reduced to a pleasure machine.

      • G

        What you prefer has zero relationship to what is physically possible.

        I prefer to not be affected by gravity or inertia. OK, so...?

        We get to choose our opinions but not our facts.

  • Black Gordy

    A profoundly stupid article. No one has any idea of what consciousness is. It could be generated by a neural net, or maybe some completely new principle is needed to understand it. I, for one, am pretty sure that a neural net with all the connections in my brain would not even be conscious.

    • Mind on the Future

      You maintain that "no one has any idea of what consciousness is," and yet you are "pretty sure that a neural net with all the connections in my brain would not even be conscious."

      As Mr. Spock would say, "Fascinating."

      • Black Gordy

        I see no contradiction. My first statement is true. My second is based on my experience of consciousness. You tell me how a neural net will become conscious. The author of this piece doesn't. I have no idea how it might. So, that makes me pretty sure. I could be wrong on that. I doubt it.

    • SaintMarx

      Not only stupid, but also dangerous. The fallacious possibility of equating simulations with real living beings will devalue life and give equal "rights" to mere machines. This is the real future dystopia.

      • G

        Ahh, but what if you & I are wrong? What if it does become possible to create machine minds that are truly conscious? In that case:

        1) Making a machine mind has the same moral implications as having a baby.

        2) Making machine minds to do our every task for us without complaining, has the same moral implications as slavery.

        3) Making machine minds as vessels for upload, has the same moral implications as making babies as sources of transplant parts.

  • Nick Weiler

    If my consciousness is a jazz solo, the only thing a perfect electronic record of my consciousness would approximate is the ultimate in biography, not immortality. In the record, there's no improvisation anymore, which is what really made the original experience unique. In the case of the brain, the wetware IS the experience, not just a tool or vehicle for creating that experience.

    So unless we can actually reverse engineer the whole brain (which I don't think is what Graziano is talking about here) – not just a snapshot of the connections that exist at one moment in time but also the patterns of infinite dynamic changes that the brain undergoes every millisecond of its existence – then I don't see how this imaginary technology could possibly simulate the experience of actually living.

    • hypnosifl

      A simulation is not the same as a recording. For example, a detailed simulation of the dynamics of the Earth's atmosphere can show weather patterns that look qualitatively just like the kinds of patterns we typically see in the real atmosphere, but aren't identical to the recorded real-world patterns on any specific day.

      • Nick Weiler

        Very true – but at the top, Graziano makes the comparison between the development of technology to record music and the current optimism about developing technology to record brain wiring.

        My concern is that many treatments of the growing technology of brain mapping miss a very important step in thinking that reverse-engineering the hardware will give us the software for free (or at least in assuming that the hardware is the harder problem). Even if we have the wiring diagram, we still need an operating system for the thing.

        So yeah, in addition to developing technology to accurately record the music, we also need to learn how to make new music keep playing, which requires a lot of additional knowledge and skill with music. In the realm of neuroscience, that achievement still seems a long way off.

  • thejmazz

    Very interesting but ultimately downright frightening. It seems that once we move down this path, we run the risk of losing all of what we perceive humanity, existence and intelligence to mean. This may not necessarily be a bad thing: we can unleash the full and brilliant capacity of the imagination, unfiltered and unrestricted.

    Also, I think the "you" in a computer would experience "time" (supposing time has been implemented into the simulation!) at a significantly different rate, by multiple orders of magnitude. (I believe brainwaves register in the Hz range, whereas personal computers can run at 4 GHz!) The reason flies are so good at avoiding getting swatted is that their brains process some information faster than ours do... so to the fly being swatted, we look like the slow-motion bullet-time scene in The Matrix.
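    A back-of-envelope sketch of that rate comparison, using the round numbers above (they are illustrative assumptions, not measurements):

```python
# Naive comparison of biological vs. silicon "tick rates".
# Both figures are the round-number assumptions from the comment above.
brainwave_hz = 10   # brain rhythms are on the order of tens of Hz
cpu_hz = 4e9        # a 4 GHz consumer CPU

speedup = cpu_hz / brainwave_hz
print(f"naive speedup factor: {speedup:.0e}")  # prints "naive speedup factor: 4e+08"
```

    Of course, clock rate is not the same thing as subjective experience; the ratio only illustrates how far apart the two timescales are.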

  • http://thewayitis.info/ Derek Roche

    I agree with the author on one thing: whatever can be done will be done and there'll be no shortage of volunteers but I can't get past the thought that whoever controls the power supply controls the virtual world and all that dwell therein. Shut down the power supply and it's Goodnight Irene, or whatever you call your virtual clone.

    The virtual world, in other words, could only ever be a subset of the real world and a massively distorted one at that. If life in the real world is ultimately a Darwinian struggle to survive and reproduce, what would be the equivalent in the virtual world? Clones begetting clones and attempting to monopolise data storage in the cloud? Clonal tribes launching massive cyber attacks on each other? Iterative games of Prisoners' Dilemma? However you think about it, virtual clones will still be prisoners of the real world and we will hold the key.

  • TonyTheTiger

    Maybe we're already there.

    • smoochie

      I get the feeling I'd be better looking and wouldn't have to go to the bathroom as much.

  • philip andrew

    You can't be conscious inside the computer; simply put, it's only 1s and 0s flipping on and off, and no pattern of that is consciousness to me.

  • MandoZink

    And that simulated brain may truly be self-aware, feel alive and experience a unique individual identity. Unfortunately, now you're dead and this thing is alive, thinking mostly like you once did, if programmed right. Do you think your current consciousness is going to jump into that "other" you?

    Your individual consciousness could not even transfer into a perfect clone of yourself. It's a duplicate at best. It's its own entity. You're dead.

  • Joe Gelman

    A strange possibility of this new virtual world would be its fragility. Imagine if, after a lengthy conversion process, every single human being on earth inhabited this virtual world. Can one imagine the future of all mankind culminating in a gigantic warehouse of humming servers? What if one of these servers broke down? Who would fix it? Would there be a group of programmer-martyrs that chose to deny themselves immortality and pledge themselves to the noble duty of protecting all humanity? If this is the case, the fate of all mankind would rest in their hands. Is this at all desirable?

    • http://www.livinginthehereandnow.co.za/ beachcomber

      Yeah ... I was also wondering about this. Perhaps by then the robots will do all the work .... Philip K Dick had some ideas about it in his wonderful sci-fi book "Do Androids Dream of Electric Sheep?"

  • David

    As has been said, I'm sure Michael Graziano is a clever chap, but creating a copy is creating a copy. The original still exists; it is not destroyed. The process the article focuses on is the copy process, not the move process. So an individual still exists in a mortal state, and a copy is created in an immortal electronic state. Unless there is some kind of link between the two, these are two separate entities.

    While the idea of living a life of limitless leisure may seem appealing, this environment is limited by the code that created it and the external hardware running it. Senses of smell, touch and taste will not be fully replicated, so any sense of living will be numbed. The electronic self will envy the mortal self and desire an escape from this virtual prison.

    Add the ability to upload this electronic self into a mobile machine, and you start to see these electronic prisoners really taking on life; combined with the electronic awareness of many others, we then start seeing similarities to sci-fi like Transformers and The Matrix.

    If this technology is ever developed, it may well be likely that the machines will outlive the human race.

    • catlettuce redux

      Yes, but since the copy is the original - that's the nature of digital - it doesn't matter. The digital you will live and the flesh you will die, sooner or later. The choice isn't difficult. And I don't think you can presume that the senses will be insufficient - that's an analog view.

      • David

        The copy is not the original. That is the point so many are making here.

        When I take a photo with a digital camera, the original is not 'sucked up' into the camera, leaving nothing in its place. The original remains. The photograph is a digital copy.

        If we were talking about technology that takes matter and dissolves it to reproduce it somewhere else (teleportation technology), then I'm sure many readers may reconsider the idea of copy-and-paste versus cut-and-paste, and question where the consciousness goes.

        • Tauri1

          "The photograph is a digital copy." And a distorted one at that, which is why artists like Pissarro, Cézanne and many others went out into the real world to paint and capture Nature - because the eye "sees" differently than the camera.

      • SaintMarx

        Even in digital, the copy is NOT the original. It's a copy.

      • SaintMarx

        ..furthermore, there is no evidence that the "copy" under discussion is actually a copy of a mind, or just a copy of the externally observable parameters of a mind, that is, a mere external simulation, that may convince external observers, but has no consciousness.

      • G

        Brains and minds aren't "digital" any more than they are steam engines, telegraphs, or telephone exchanges.

        Don't confuse the thing with the metaphor.

        Look, I'm sorry to be playing atheist to the computer-god religion, but the empirical facts support a theory of neural computation and a theory of mind that rule out digital immortality. When you die, either you have something like an afterlife, or you cease to exist. You'll have to make peace with that one way or another.

    • G

      Daleks.

  • Joe Campbell

    Logic 101: the author has done something fallacious.

    Creates a straw man, i.e. "at one time people thought you had to replicate an entire instrument to reproduce its sound". Instrument = brain, consciousness = sound.

    Except it does not hold. You can record a sound, but you can't generate new sounds with the recording. The recording does not become a new instrument capable of being played in a way that produces the tonal range of the original instrument.

    A recording of the "sound wave of consciousness" will be just that: a record. Not an instrument capable of playing new notes.

    MIDI was an attempt at creating a library of sounds and then executing a program that referenced these prerecorded sounds, much in the way a composer directs a symphony. But anyone who's heard a high-polyphony reproduction of a piece of music recognizes that the MIDI version is lacking compared with the recorded-with-real-instruments version.

    Flawed premise then the rest of the article is pure speculation.

    Yawn.

    • SaintMarx

      Even if a perfectly executed MIDI simulation fools the listener (and I attest that they often do), this does not obviate the point that the simulation is still not the reality. Consciousness and its simulation are radically different in kind; we equate the two at great moral, philosophical and practical risk.

  • Ed Aaron Goering

    One thing that wasn't addressed: what about when these consciousnesses could be returned to physical bodies? Pop a two-hundred-year-old version of my mind into a 20-year-old clone, genetically enhanced to my specifications?

  • SaintMarx

    The entire premise here is fatally flawed.

    The simulation provides no immortality for the original person, but only immortality for a simulation.

    Also: the simulation of life is not life at all. There is certainly no evidence that the simulation will be conscious. Even if it were, that "duplicate" is not the original person. The simple proof of this is that such a duplicate - in fact, multiple such duplicates - could then exist at the same time as the original person.

  • Tamas Kalman

    amazing journey. thank you.

  • teoreticom

    I thought about this scenario many times and although I understand that "you" are a slightly different "you" tomorrow, the transition to the tomorrow "you" doesn't seem to me similar to copy/cut - paste your entire brain information to a computer.

    The only possible solution I see is to implant digital neurons in your brain that behave like new neurons that can store information and build connections, eventually replacing your entire brain with these digital neurons.

    But then there is another problem: you are now digital, and there is no other copy of you (which is good, thank god :) but digital plays by different rules than biological. You can make multiple copies of yourself instantly, you can erase parts of yourself, you can visit multiple places in a few seconds, copy Wikipedia articles into your data maybe, and so much more. The point is that the information that created you no longer plays by the same rules once it is digital.

    There is also the problem of reality. What is real in this virtual world? Even if we simulate the real world atom by atom, human knowledge is not only about that; it is about understanding, interaction, experience. Think about how the notion of the atom was conceived thousands of years ago and evolved into what we know today. In the meantime we had to develop mathematics and chemistry to arrive at that. How would that work in a world where information is copy-paste?

    It really is a fascinating idea, and I think if it happens we won't feel the transition. We are already a huge online entity. Parts of us are forever taken into the online world, where they live and interact with each other. Maybe that's the transition we are talking about, and it is so slow we haven't noticed it is already happening.

    • G

      A human brain has 85 billion biological neurons. How quickly do you think you can replace them with artificial ones? As in, how many neurons replaced per second?

      Once you pick a number for that, do the arithmetic.

      Example: at 10 neurons per second, replacing the entire brain takes 270 years.
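      That arithmetic, sketched out (85 billion neurons and 10 replacements per second are the figures assumed above):

```python
# Time to replace every neuron in a brain at a fixed rate.
NEURONS = 85_000_000_000   # ~85 billion neurons in a human brain
RATE_PER_SECOND = 10       # hypothetical replacement rate

seconds = NEURONS / RATE_PER_SECOND
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.0f} years")  # prints "269 years" (about 270, as stated)
```

      Picking a faster rate shrinks the figure proportionally, so the real question is what replacement rate is physically plausible.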

  • John Smith

    Even if this could be made to work, I don't think it's going to be much fun. Today, as biological beings, our feelings, pleasures, pains and emotions are related to the physical world. A SIM mind would soon realize (if it had the intelligence to work it out) that all its feelings were fake and therefore of no worth. Try giving your girlfriend plastic flowers: from her reaction you will learn the difference between simulation and reality.
    Indeed, a SIM mind is much like a biological mind on narcotics: it feels pleasure, but the trip becomes a nightmare.

  • http://aboutlifting.com/ Ironthumb

    Was this the basis of the new Superman film, in which the father, Jor-El, appears as a simulation years after he died, communicating with people as if he were alive?

  • Eli Sennesh

    So how come everyone's going to end up in a hive-mind if nobody wants to be in a hive-mind?

  • feathers632

    I'm just wondering why people assume it hasn't happened already?

  • donwilhelm3

    "Your connectome, simulated in a computer, would recreate your conscious mind". Maybe not. Darwin pointed out that mammal brains are fundamentally the same, which argues that the single brain algorithm is unconscious memories of our muscular reactions to the culture/environment, and not memories of the perceptions themselves. Thus, all knowledge is in the culture/environment, and not in our heads. There is no ability for representation or computation in our heads, either.
    The connectome is thus determined by the environment, the culture, the family. The complete connectome contains no knowledge, or memories of situations, but only a record of our muscular actions in the culture/environment.
    Lakoff and Johnson pointed out that the metaphorical nature of language means that words are not literal: a verbal entity does not actually exist.
    Consciousness is the perceptual leg of the brain algorithm, and is aware of either our own motor or vocal action results, or the cultural response to our actions. Without this muscular interplay between individuals and/or culture, there are no muscular action results to perceive, so no consciousness.

  • kemalgencay

    Maybe the book "Intelligent Design" at http://www.rael.org will take the idea one step further

  • vingband

    Will there ever be a "phonogram" of my mind that would accurately reproduce the very answer I'm writing in this very moment? Possibly. Likely, even, since I'm training it right now. Or maybe I already did, and this answer is the result. In which case I'm definitely me, and certainly not that other guy. Whoever he was. But identity is a moot point, we already fell off the cliff. Right?

    Recording music is an obvious analogy, but also fraught with wacky problems, as Mr. Graziano is certainly aware. A recording (a scan) of a concert will capture the sound waves just fine, but it won't capture the experience of either the musicians or the audience. And that, the mood, is what a concert is all about. On the other hand, these days a lot of music is not "recorded"; it is made with the media in mind. It is made for the detached, delayed experience. It's a completely different proposal.

    I'm already jacked in and feeding the simulator of my future self. I'm doing it now. But I won't be free until I'm detached from my present self.

  • aelena74

    the worst part about this is that you will get obnoxious fucks like your average dictator and plutocrat to stay around forever.

  • Russell

    What a daft suggestion. The author needs to a) forget about Descartes and his idiotic, computational, dualistic legacy, and b) spend some time authentically existing as a human organism in an ecosystem. Then downloading his existence into a glorified toaster will seem as patently absurd as it ought to.

  • Dustin Parsons

    Here's how to get around the, "But it won't REALLY be you because you'll die during the process of copying." or "You will just make a digital copy of yourself but the real you will continue to exist in meat-space." arguments.

    Consider replacing parts of your physical organic body with mechanical/digital versions of human body parts - one organ or limb at a time. Replace an eye with a super-hi-def camera. Replace a leg with a carbon-fiber mechanical leg. Replace 5% of your brain with a computer chip that recreates that portion of your brain. Then 2 years later replace another 5%... and continue until you've gone from human to cyborg to robot. The transformation would be a slow gradual process not much different from growing from an infant to a senior. Once your entire brain has been digitized you can then upload an exact copy of yourself to the cloud without any information loss and you will still be "you" throughout the entire process and after.

    • G

      How many neurons do you think you can replace per minute?

      Pick a number first, and then do the arithmetic:

      Divide the 85 billion neurons in the brain by that number to get the length of time needed to replace all of them.

      Surprise!

  • Ben

    "The Quantum Thief" by Hannu Rajaniemi extrapolates this idea to a future of mind harvesting, entity engineering and composite consciousnesses. Definitely worth a read if you enjoy thinking about this kind of stuff.

  • Laubzega

    All of the issues raised in the article (and many more) have been thoroughly investigated in "Permutation City", a 1994 novel by Greg Egan. http://en.wikipedia.org/wiki/Permutation_city

  • http://aboutlifting.com/ Ironthumb

    We have included this one in our latest Testosterone linkages, Happy new year!

  • Waterbergs

    Great article with some lovely ideas, but I'm not sure I share the writer's assurance that mimicking the connectivity of the human brain is inevitable. For example, consider a single neuron with 100 connections to any of a possible 1,000 other neurons (probably an underestimate of the complexity of many neurons). The number of possible combinations is larger than 10^100! This is more than the total number of atoms in the universe. And we have about 10^11 neurons, which gives (10^100)^(10^11) combinations - an unimaginably larger number.

    So, basically, there will be no one-to-one model of the connectivity of the human brain. Not now, not ever.
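    The lower bound in that argument is easy to check with exact integer arithmetic, reading "100 connections to any of a possible 1,000 other neurons" as choosing 100 targets out of 1,000:

```python
import math

# Number of ways to choose 100 connection targets from 1,000 candidate neurons.
combos = math.comb(1000, 100)
print(combos > 10**100)  # prints "True": the count indeed exceeds 10^100
```

    Whether that combinatorial explosion actually blocks modelling is a separate question - a scanner would only need to record the one wiring pattern a given brain has, not enumerate every possible one.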

  • John G Messerly

    The point is not to "copy" the brain but to upload it into an AI or virtual reality. That is how you attain immortality. If you made multiple copies of your brain, you are correct that they could have different experiences, but this is not relevant. I blog about these issues and connect them with meaning-of-life questions in my most recent book and at my blog: reasonandmeaning.com

  • ApathyNihilism

    How pointless. They are not even copies, but just simulations. They are no more alive than a toaster.

  • George

    "I have heard people say that the technology will never catch on. People won't be tempted because a duplicate of you, no matter how realistic, is still not you. But I doubt that such existential concerns will have much of an impact once the technology arrives. You already wake up every day as a marvellous copy of a previous you, and nobody has paralysing metaphysical concerns about that."

    You are a bit dismissive of those 'existential concerns', I think! A duplicate is not you, and a submitter to the process might balk at the part where the original gets 'deleted'. ;-)

    You raise a good point about waking daily, and this experience might be at the heart of it. We may be confusing ourselves and the content of our experience. When I awake, all is blank, but gradually thoughts and sounds and then a picture and then the room fade into existence. Correspondingly (with practice), when I fall asleep the senses fade and I am left experiencing myself as an open awareness before, in time, images appear and mutate and coalesce and form into dreams.

    It seems that I am "the thing that is aware of things" rather than the things themselves.

    So before we go 'uploading our brains' we need to work out something important: What is just the 'content' and what is the experiencer? If we are in fact just uploading the content, the pattern of experience, to another experiencer, then "I", the original experiencer, am not getting to continue at all. I'm still out here, about to die!