Creative blocks

The very laws of physics imply that artificial intelligence must be possible. What's holding us up?

by David Deutsch · 6,000 words

'Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough.' Illustration by Sam Green

David Deutsch is a physicist at the University of Oxford and a fellow of the Royal Society. His latest book is The Beginning of Infinity.

It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

Why? Because, as an unknown sage once remarked, ‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either). I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I proved it using the quantum theory of computation.

Babbage came upon universality from an unpromising direction. He had been much exercised by the fact that tables of mathematical functions (such as logarithms and cosines) contained mistakes. At the time they were compiled by armies of clerks, known as ‘computers’, which is the origin of the word. Being human, the computers were fallible. There were elaborate systems of error correction, but even proofreading for typographical errors was a nightmare. Such errors were not merely inconvenient and expensive: they could cost lives. For instance, the tables were extensively used in navigation. So, Babbage designed a mechanical calculator, which he called the Difference Engine. It would be programmed by initialising certain cogs. The mechanism would drive a printer, in order to automate the production of the tables. That would bring the error rate down to negligible levels, to the eternal benefit of humankind.
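As an aside for the technically inclined, here is a minimal sketch, in modern Python rather than brass cogs, of the method of finite differences that the Engine mechanised: once the initial value and differences of a polynomial are set (the analogue of initialising the cogs), every further table entry is produced by additions alone. The example polynomial and names are illustrative, not Babbage's own design.

```python
# A minimal sketch of the method of finite differences: after the initial
# settings, every further table entry is produced by additions alone.

def difference_engine_table(initial_differences, steps):
    """Tabulate a polynomial given its value and finite differences at x = 0.

    initial_differences[0] is f(0), initial_differences[1] is the first
    difference f(1) - f(0), and so on. A degree-n polynomial needs n+1
    registers, and its highest difference is constant.
    """
    registers = list(initial_differences)
    table = []
    for _ in range(steps):
        table.append(registers[0])
        # Each register is advanced by adding the register below it,
        # just as each column of cogs added into the next.
        for i in range(len(registers) - 1):
            registers[i] += registers[i + 1]
    return table

# Example: f(x) = x**2 + x + 41 has f(0) = 41, first difference 2, and a
# constant second difference 2. Only additions are needed thereafter.
print(difference_engine_table([41, 2, 2], 6))   # [41, 43, 47, 53, 61, 71]
```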

Unfortunately, Babbage’s project-management skills were so poor that despite spending vast amounts of his own and the British government’s money, he never managed to get the machine built. Yet his design was sound, and has since been implemented by a team led by the engineer Doron Swade at the Science Museum in London.

Slow but steady: a detail from the replica of Charles Babbage's Difference Engine on display at the Science Museum, London, assembled nearly 170 years after it was designed. Courtesy Science Museum

Here was a cognitive task that only humans had been able to perform. Nothing else in the known universe even came close to matching them, but the Difference Engine would perform better than the best humans. And therefore, even at that faltering, embryonic stage of the history of automated computation — before Babbage had considered anything like AGI — we can see the seeds of a philosophical puzzle that is controversial to this day: what exactly is the difference between what the human ‘computers’ were doing and what the Difference Engine could do? What type of cognitive task, if any, could either type of entity perform that the other could not in principle perform too?

One immediate difference between them was that the sequence of elementary steps (of counting, adding, multiplying by 10, and so on) that the Difference Engine used to compute a given function did not mirror those of the human ‘computers’. That is to say, they used different algorithms. In itself, that is not a fundamental difference: the Difference Engine could have been modified with additional gears and levers to mimic the humans’ algorithm exactly. Yet that would have achieved nothing except an increase in the error rate, due to increased numbers of glitches in the more complex machinery. Similarly, the humans, given different instructions but no hardware changes, would have been capable of emulating every detail of the Difference Engine’s method — and doing so would have been just as perverse. It would not have copied the Engine’s main advantage, its accuracy, which was due to hardware not software. It would only have made an arduous, boring task even more arduous and boring, which would have made errors more likely, not less.


For humans, that difference in outcomes — the different error rate — would have been caused by the fact that computing exactly the same table with two different algorithms felt different. But it would not have felt different to the Difference Engine. It had no feelings. Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans. Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general. In fact, its repertoire was confined to evaluating a tiny class of specialised mathematical functions (basically, power series in a single variable).

Thinking about how he could enlarge that repertoire, Babbage first realised that the programming phase of the Engine’s operation could itself be automated: the initial settings of the cogs could be encoded on punched cards. And then he had an epoch-making insight. The Engine could be adapted to punch new cards and store them for its own later use, making what we today call a computer memory. If it could run for long enough — powered, as he envisaged, by a steam engine — and had an unlimited supply of blank cards, its repertoire would jump from that tiny class of mathematical functions to the set of all computations that can possibly be performed by any physical object. That’s universality.

Babbage called this improved machine the Analytical Engine. He and Lovelace understood that its universality would give it revolutionary potential to improve almost every scientific endeavour and manufacturing process, as well as everyday life. They showed remarkable foresight about specific applications. They knew that it could be programmed to do algebra, play chess, compose music, process images and so on. Unlike the Difference Engine, it could be programmed to use exactly the same method as humans used to make those tables. And prove that the two methods must give the same answers, and do the same error-checking and proofreading (using, say, optical character recognition) as well.

But could the Analytical Engine feel the same boredom? Could it feel anything? Could it want to better the lot of humankind (or of Analytical Enginekind)? Could it disagree with its programmer about its programming? Here is where Babbage and Lovelace’s insight failed them. They thought that some cognitive functions of the human brain were beyond the reach of computational universality. As Lovelace wrote, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.’

And yet ‘originating things’, ‘following analysis’, and ‘anticipating analytical relations and truths’ are all behaviours of brains and, therefore, of the atoms of which brains are composed. Such behaviours obey the laws of physics. So it follows inexorably from universality that, with the right program, an Analytical Engine would undergo them too, atom by atom and step by step. True, the atoms in the brain would be emulated by metal cogs and levers rather than organic material — but in the present context, inferring anything substantive from that distinction would be rank racism.

Despite their best efforts, Babbage and Lovelace failed almost entirely to convey their enthusiasm about the Analytical Engine to others. In one of the great might-have-beens of history, the idea of a universal computer languished on the back burner of human thought. There it remained until the 20th century, when Alan Turing arrived with a spectacular series of intellectual tours de force, laying the foundations of the classical theory of computation, establishing the limits of computability, participating in the building of the first universal classical computer and, by helping to crack the Enigma code, contributing to the Allied victory in the Second World War.

Turing fully understood universality. In his 1950 paper ‘Computing Machinery and Intelligence’, he used it to sweep away what he called ‘Lady Lovelace’s objection’, and every other objection both reasonable and unreasonable. He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular.


But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.

Why? I call the core functionality in question creativity: the ability to produce new explanations. For example, suppose that you want someone to write you a computer program to convert temperature measurements from Centigrade to Fahrenheit. Even the Difference Engine could have been programmed to do that. A universal computer like the Analytical Engine could achieve it in many more ways. To specify the functionality to the programmer, you might, for instance, provide a long list of all inputs that you might ever want to give it (say, all numbers from -89.2 to +57.8 in increments of 0.1) with the corresponding correct outputs, so that the program could work by looking up the answer in the list on each occasion. Alternatively, you might state an algorithm, such as ‘divide by five, multiply by nine, add 32 and round to the nearest 10th’. The point is that, however the program worked, you would consider it to meet your specification — to be a bona fide temperature converter — if, and only if, it always correctly converted whatever temperature you gave it, within the stated range.
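To make the contrast concrete, here is a minimal sketch in Python of the two specification routes just described: one program computes by the stated algorithm, the other merely looks the answer up in an exhaustive list over the stated range. The names and rounding details are illustrative assumptions; the point is that either program meets the same, purely behavioural, specification.

```python
# A minimal sketch of the two specification routes described above, assuming
# the stated range of -89.2 to +57.8 Celsius in steps of 0.1, with outputs
# rounded to the nearest tenth.

def convert_by_algorithm(celsius):
    # 'Divide by five, multiply by nine, add 32', rounded to the nearest tenth.
    return round(celsius / 5 * 9 + 32, 1)

# The brute-force alternative: enumerate every permitted input with its output.
LOOKUP = {}
for i in range(1471):                      # 1,471 readings span -89.2 .. +57.8
    c = round(-89.2 + 0.1 * i, 1)
    LOOKUP[c] = convert_by_algorithm(c)

def convert_by_lookup(celsius):
    return LOOKUP[round(celsius, 1)]

# Behaviourally, the two programs are indistinguishable over the stated range.
assert convert_by_algorithm(37.0) == convert_by_lookup(37.0) == 98.6
```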

Now imagine that you require a program with a more ambitious functionality: to address some outstanding problem in theoretical physics — say the nature of Dark Matter — with a new explanation that is plausible and rigorous enough to meet the criteria for publication in an academic journal.

Such a program would presumably be an AGI (and then some). But how would you specify its task to computer programmers? Never mind that it’s more complicated than temperature conversion: there’s a much more fundamental difficulty. Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!

'I'm sorry, Dave. I'm afraid I can't do that': HAL, the computer intelligence from Stanley Kubrick's 2001: A Space Odyssey. Courtesy MGM

Traditionally, discussions of AGI have evaded that issue by imagining only a test of the program, not its specification — the traditional test having been proposed by Turing himself. It was that (human) judges be unable to detect whether the program is human or not, when interacting with it via some purely textual medium so that only its cognitive abilities would affect the outcome. But that test, being purely behavioural, gives no clue for how to meet the criterion. Nor can it be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves. (For how I think biological evolution gave us the ability in the first place, see my book The Beginning of Infinity.)

And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.

The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.

Such a theory is beyond present-day knowledge. What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile. Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible. I myself remember, for example, observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’. Not only was I not surprised, I fully expected that there would be an interval of 17,000 years until the next such ‘19’, a period that neither I nor any other human being had previously experienced even once.

How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.

So, why is it still conventional wisdom that we get our theories by induction? For some reason, beyond the scope of this article, conventional wisdom adheres to a trope called the ‘problem of induction’, which asks: ‘How and why can induction nevertheless somehow be done, yielding justified true beliefs after all, despite being impossible and invalid respectively?’ Thanks to this trope, every disproof (such as that by Popper and David Miller back in 1988), rather than ending inductivism, simply causes the mainstream to marvel in even greater awe at the depth of the great ‘problem of induction’.

In regard to how the AGI problem is perceived, this has the catastrophic effect of simultaneously framing it as the ‘problem of induction’, and making that problem look easy, because it casts thinking as a process of predicting that future patterns of sensory experience will be like past ones. That looks like extrapolation — which computers already do all the time (once they are given a theory of what causes the data). But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself.

Now, the truth is that knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data. And therefore, attempts to work towards creating an AGI that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there.


Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI. It is ironic that mainstream psychology has largely renounced behaviourism, which has been recognised as both inadequate and inhuman, while computer science, thanks to philosophical misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviourist lines.
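For readers unfamiliar with the scheme being criticised, the following is a deliberately crude sketch of that behaviouristic, reward-driven model of 'values'. Everything in it is illustrative rather than any real proposal, and the point is the gap it exposes: nothing in the loop ever creates a new candidate idea.

```python
# A deliberately crude sketch of the behaviourist scheme criticised above:
# candidate 'values' start with prior weights, and 'experience' merely
# reinforces or extinguishes them. The names and numbers are illustrative.
# Note what never happens in the loop: no new candidate is ever created.

weights = {'value_A': 0.5, 'value_B': 0.5}       # prior degrees of belief

def reinforce(chosen, reward, rate=0.1):
    """Nudge the chosen value's weight up (reward) or down (punishment)."""
    weights[chosen] = max(1e-6, weights[chosen] + rate * reward)
    total = sum(weights.values())
    for key in weights:                          # renormalise the distribution
        weights[key] /= total

for choice, reward in [('value_A', +1), ('value_A', +1), ('value_B', -1)]:
    reinforce(choice, reward)

print(weights)   # 'value_A' now dominates, but the repertoire itself never grew
```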

Furthermore, despite the above-mentioned enormous variety of things that we create explanations about, our core method of doing so, namely Popperian conjecture and criticism, has a single, unified, logic. Hence the term ‘general’ in AGI. A computer program either has that yet-to-be-fully-understood logic, in which case it can perform human-type thinking about anything, including its own thinking and how to improve it, or it doesn’t, in which case it is in no sense an AGI. Consequently, another hopeless approach to AGI is to start from existing knowledge of how to program specific tasks — such as playing chess, performing statistical analysis or searching databases — and then to try to improve those programs in the hope that this will somehow generate AGI as a side effect, as happened to Skynet in the Terminator films.

Nowadays, an accelerating stream of marvellous and useful functionalities for computers are coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvellous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do.

The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviourist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up. Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess.
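The difference can be seen in a toy version of what the chess program actually does. The sketch below runs an exhaustive minimax search over a made-up game tree (not real chess, and not any particular engine's code): it can establish which move scores best under its evaluation, but nothing in it represents strategy, purpose, or what a sacrifice is for.

```python
# A toy version of the exhaustive search a chess program performs, run over a
# made-up game tree rather than real chess. Moves map to the opponent's
# replies; leaves are numerical evaluations from the mover's point of view.

TOY_TREE = {
    'sacrifice':  {'recapture': +3, 'decline': +1},
    'quiet_move': {'trade': 0, 'push': -1},
}

def minimax(node, maximising):
    """Exhaustively evaluate a subtree: dicts are positions, numbers are leaves."""
    if not isinstance(node, dict):
        return node                      # terminal evaluation
    values = [minimax(child, not maximising) for child in node.values()]
    return max(values) if maximising else min(values)

# Our move, then the opponent chooses the reply that is worst for us.
best = max(TOY_TREE, key=lambda move: minimax(TOY_TREE[move], maximising=False))
print(best)   # 'sacrifice': chosen only because its worst case scores +1,
              # not because the program has any notion of what chess is for
```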

An AGI is qualitatively, not quantitatively, different from all other computer programs. The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality. Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.

In 1950, Turing expected that by the year 2000, ‘one will be able to speak of machines thinking without expecting to be contradicted.’ In 1968, Arthur C. Clarke expected it by 2001. Yet today in 2012 no one is any better at programming an AGI than Turing himself would have been.

This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI. But for the people in the other camp (the AGI-is-imminent one) such a history of failure cries out to be explained — or, at least, to be rationalised away. And indeed, unfazed by the fact that they could never induce such rationalisations from experience as they expect their AGIs to do, they have thought of many.

The very term ‘AGI’ is an example of one. The field used to be called ‘AI’ — artificial intelligence. But ‘AI’ was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for ‘general’ was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot.

Another class of rationalisations runs along the general lines of: AGI isn’t that great anyway; existing software is already as smart or smarter, but in a non-human way, and we are too vain or too culturally biased to give it due credit. This gets some traction because it invokes the persistently popular irrationality of cultural relativism, and also the related trope that: ‘We humans pride ourselves on being the paragon of animals, but that pride is misplaced because they, too, have language, tools …

… And self-awareness.’

Remember the significance attributed to Skynet’s becoming ‘self-aware’? That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

Perhaps the reason that self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Kurt Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. So has consciousness. And here we have the problem of ambiguous terminology again: the term ‘consciousness’ has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (‘qualia’), which is intimately connected with the problem of AGI. At the other, ‘consciousness’ is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.

AGIs will indeed be capable of self-awareness — but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. This does not mean that apes who pass the mirror test have any hint of the attributes of ‘general intelligence’ of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

Ironically, that group of rationalisations (AGI has already been done/is trivial/ exists in apes/is a cultural conceit) are mirror images of arguments that originated in the AGI-is-impossible camp. For every argument of the form ‘You can’t do AGI because you’ll never be able to program the human soul, because it’s supernatural’, the AGI-is-easy camp has the rationalisation, ‘If you think that human cognition is qualitatively different from that of apes, you must believe in a supernatural soul.’

‘Anything we don’t yet know how to program is called human intelligence,’ is another such rationalisation. It is the mirror image of the argument advanced by the philosopher John Searle (from the ‘impossible’ camp), who has pointed out that before computers existed, steam engines and later telegraph systems were used as metaphors for how the human mind must work. Searle argues that the hope for AGI rests on a similarly insubstantial metaphor, namely that the mind is ‘essentially’ a computer program. But that’s not a metaphor: the universality of computation follows from the known laws of physics.

Some, such as the mathematician Roger Penrose, have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. To explain why I, and most researchers in the quantum theory of computation, disagree that this is a plausible source of the human brain’s unique functionality is beyond the scope of this essay. (If you want to know more, read Litt et al’s 2006 paper ‘Is the Brain a Quantum Computer?’, published in the journal Cognitive Science.)

That AGIs are people has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI. Using non-cognitive attributes (such as percentage carbon content) to define personhood would, again, be racist. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.


For example, the mere fact that it is not the computer but the running program that is a person, raises unsolved philosophical problems that will become practical, political controversies as soon as AGIs exist. Once an AGI program is running in a computer, to deprive it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (ie before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote?

Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word ‘programming’. To treat AGIs like any other computer programs would constitute brainwashing, slavery, and tyranny. And cruelty to children, too, for ‘programming’ an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. To ignore the rights and personhood of AGIs would not only be the epitome of evil, but also a recipe for disaster: creative beings cannot be enslaved forever.

Some people are wondering whether we should welcome our new robot overlords. Some hope to learn how we can rig their programming to make them constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good.

These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of ‘good’ needs continual improvement. How should society be organised so as to promote that improvement? ‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better.

One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

I do not highlight all these philosophical issues because I fear that AGIs will be invented before we have developed the philosophical sophistication to understand them and to integrate them into civilisation. It is for almost the opposite reason: I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that is essential to their future integration is also a prerequisite for developing them in the first place.

The lack of progress in AGI is due to a severe logjam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.

Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.

Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.

Comments

  • http://www.facebook.com/bruce.ingraham.71 Bruce Ingraham

    The philosophical breakthrough that Prof. Deutsch is looking for was made by the American philosopher Charles Sanders Peirce in the middle of the 19th century. What Peirce describes as ‘abductive reasoning’ provides a more useful approach to probabilistic reasoning than Bayes, at least in the context of epistemology. However, since Peirce’s concept of abduction is rooted in his infinitely regressive semiotics, it might not at first glance appear very helpful in the quest for AGI. Still, since most modern physics, both cosmological and quantum, seems content to accept infinite regress within its explanatory models, I don’t see a problem, other than complexity, in this context. It does, however, mean that one has to place semiotics at the centre of one’s epistemology.

    • http://www.facebook.com/richard.lancashire Richard Lancashire

      To create explanations, one needs to manipulate symbols; it doesn't seem unreasonable to take semiotics very seriously. Language seems pretty essential, but there are still the philosophical problems of meaning and understanding to get past.
      A computerised "assignation" of a word to a manipulable symbol won't cut it, clearly, but I don't think anyone really thinks that the best, most context-sensitive chatbots we have now actually *mean* anything with their output. What sort of language would we expect from a computer-based form of life? "If a lion could talk, we would not understand him".
      Excellent, thought-provoking article.

  • http://twitter.com/MarkTindal Mark Tindal

    A fascinating and insightful article, thanks. Attempts to reproduce human general intelligence have always appeared irrelevant to me but perhaps producing technology and creating solutions with the AI badge is indeed misleading. We don't need more humans, we need to extract the functions we need from human intelligence and work on those individually and apply them to the relevant industry. Sadly, that doesn't quite have the same ring to it!

    @marktindal
    http://www.ai-applied.com

  • http://www.scribd.com/Penucquem/info Ronald Thomas West

    The arrogance of western science/mentality simply astounds

    http://www.scribd.com/doc/106899087/You-ve-Got-Apes

    Even as quantum mechanics demonstrates Plato's objectivity to be wrong, even as science destroys our planet through applied technology, somehow people [like the author of this article] believe in these fantasy futuristic constructs, in a culture in a race with itself to extinction. 'You've got apes.'

    • anonymous

      Please don't speak of quantum mechanics until you have understood it properly.

      • http://www.scribd.com/Penucquem/info Ronald Thomas West

        The theoretical physicist Bernard d'Espagnat states:

        "The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with quantum mechanics and with facts established by experiment"

        I doubt it could be made more clear. Perhaps you should not speak until you can grasp the significance of d'Espagnat's statement

        • Richard Fine

          That is an assertion, not an argument (and that it is an assertion made by Bernard D'Espagnat makes it no more true than if it were made by David Deutsch). The source is this article by d'Espagnat in Scientific American:

          http://www.scientificamerican.com/media/pdf/197911_0158.pdf

          I've not read very far, but in the fourth paragraph it looks like d'Espagnat makes a very serious error: as his second premise, he accepts the validity of inductive inference.

          As Karl Popper described in his work, and as Deutsch mentions in his article, this is not a premise we should accept. So unless you can explain how d'Espagnat's acceptance of inductive inference is actually irrelevant to his point, I'm going to assume that his conclusion can be ignored.

          • Richard Fine

            Reading a bit more, it looks like he also adopts the Copenhagen interpretation of quantum mechanics, rather than the many-worlds interpretation, as seen in passages such as:

            "A singular feature of quantum me­chanics is that its predictions generally give only the probability of an event, not a deterministic statement that the event will happen or that it will not."

            Adopting Copenhagen, with its massive anthropic problems, and carrying that forward to conclude an anthropic view of reality, is not surprising.

            But Everett's many-worlds interpretation of quantum mechanics does not have this problem. The probabilities yielded by quantum mechanics describe the fraction of worlds in which an event will happen, versus the fraction in which it will not. As observers, we'll be present in all those worlds, but by the subjective experience each per-world instance of us will have, we won't know whether the event happened or not until we check.

          • http://www.scribd.com/Sally%20Morem Sally Morem

            Our everyday world of macroscopic objects, usable energy systems, and thinking brains rides atop quantum mechanics. This emergent system permits natural intelligence to exist at higher levels, not at the level of the quanta. Therefore, it's perfectly possible for artificial intelligence to arise through emergent systems as well, in working software running on very complex computers. The quanta wouldn't control it at all, just as the constant creation and destruction of virtual particles have nothing to do with how our brains work.

          • http://www.scribd.com/Penucquem/info Ronald Thomas West

            I'm not going down your road; that would be as serious an error in my own judgement. What I will point out (germane to my initial comment) is the bare fact that there is an entire world the western culture (and science) does not see. And because it does not see that world, there can be no intelligent assessment of reality.

            d'Espagnat is backed by other glimpses from the eurocentric view, such as Benjamin Whorf, who postulated that Native Americans see an entire world the western intelligence does not. Whorf had been sidelined by Chomsky for some time but now the pendulum is swinging the other way.

            Popper may be wrong. And what cannot be denied is that western science has provided the vehicle for our planet's destruction, whereas indigenous cultures embracing intuitive intelligence assigned, in practical terms, exactly the value stated by d'Espagnat to all things, and practiced a self restraint accordingly (a self restraint western mentality, inclusive of science, does not know), and there had been no proactive threat of environmental collapse accordingly.

            So, what is 'intelligence' ?

          • Richard Fine

            Appealing to Whorf or Chomsky is no better than appealing to d'Espagnat; explanations are what count, not names. And yes, there are many worldviews other than the western scientific one; but so what? They're not all equally valid. To accept otherwise, I'd have to accept that there is no objective reality, and I've already refuted d'Espagnat's argument for that. Do you have others?

            "Popper may be wrong" is a funny statement given that most of his philosophy can be derived from the initial position that we may all be wrong about anything :) He may be wrong, yes, but so may his critics, and so may I, and so may you. "It may be wrong" is not a useful consideration in truthseeking because it can be claimed about *anything*. So how can we make progress in a fallible world? We can still accept things as true, *tentatively*, until we have some reason to doubt them; a kind of 'innocent until proven guilty beyond reasonable doubt' position on knowledge. So, do you have some compelling reason to doubt Popper's arguments against induction?

            Western science has also provided the vehicle for our self-preservation. The vast majority of all "indigenous cultures embracing intuitive intelligence" throughout history have gone extinct.

            Attempting to define 'intelligence' is a mug's game ;) why are you asking?

          • http://www.scribd.com/Penucquem/info Ronald Thomas West

            I suggest Marimba Ani's 'Yurugu', where she 'objectively' sets out that the 'progress' goal of science is a means to an empty end based on a fallacy initiated by Plato. Yes, many indigenous cultures who based their lives on 'intuitive' intelligence have gone extinct, largely in modern history at the hands of the culture which has produced your science and technology, which is quite literally murderous. Insofar as providing a means to survival, the question of 'intelligence' is as simple as looking at one's surroundings and observing a culture's children. The western culture's children pick up rocks and sticks and break things, whereas the children of the 'intuitive' cultures did not. Technology in western culture vis-à-vis today's world is directly analogous.

          • Fred Mailhot

            In re: "indigenous cultures [practicing] self restraint accordingly [...] and there had been no proactive threat of environmental collapse", see

            http://en.wikipedia.org/wiki/A_Short_History_of_Progress

            I think we humans are all not so different.

          • chuckvekert

            Induction is not accepted by many philosophers, including David Hume, Bertrand Russell and Karl Popper. Proofs of the validity of induction are as rare as proofs of the existence of God, which does not mean that induction cannot work or that there is no God. If memory serves, Russell once wrote that induction works better in daily life than logic gives us right to expect. But the whole point of Popper's "Logic of Scientific Discovery" is to give us a way of building up scientific theory without having to use induction.

            Something that I have never understood about assertions that quantum mechanics requires human consciousness is that the universe seems to have gotten along well enough for most of its 13.7 billion years without any human consciousness.

          • George Watson

            Richard,

            There is nothing fundamentally wrong with Inductive Inference. 95% of what you may claim to "know" is based upon induction, and it serves you quite well. Deduction is based upon Induction by its absolute assumption that none of the categories/concepts under deductive investigation can ever change, which is to say Induction needing only one example.

        • http://synapse9.com/signals Jessie Henshaw

          You could solve the riddle by acknowledging that the word "object" can either refer to the meanings we give to images in our minds, or to the entities that exist independent of our meanings, from which we form our images.

      • archaeopteryx

        As Richard Feynman once said, "If you think you understand quantum mechanics, you don't understand quantum mechanics."

  • Bernie

    While I would agree with Professor Deutsch that the field of AI is beset with error, I would suggest that he underestimates not only the extent of that error but also the strength of the arguments against the possibility of AI, and specifically against Turing's claim that a computer's "repertoire" could include "feelings" and "consciousness".

    The fatal problem with this claim is that what qualifies as an aspect of the computer's "repertoire" is determined by observers. The problem goes even deeper: whether something is a computer or not is determined by observers.

    The same thing is not true of feelings and consciousness. Whether a being is conscious and has feelings is not determined by observers. Feelings and consciousness are determined by specific physical processes in the body.

    There's a deeply satisfying working model of a Turing Machine here:

    http://www.youtube.com/watch?v=E3keLeMwfHY&feature=channel&list=UL

    According to Turing, a machine of this specification can perform any computation; it is a universal computer. So, according to Turing, it can be conscious and have feelings.

    But as I hope the YouTube Turing Machine will bring home to you, in practice an outside observer would have to come along and say "see that sequence of zeros and ones it just wrote? That means it's having feelings".

    So these feelings aren't an intrinsic aspect of the physical machine; rather they are an idea about the machine in somebody's mind. And that's fatal for AI.

  • Erik Meyer

    I find myself agreeing with Searle. I came to the same conclusion as Searle independently: namely, that the human-as-computer assertion (or the computer, if sufficiently powerful, as human or effectively human) is a metaphor, just as the human-as-steam-engine and universe-as-clock ideas were metaphors, and probably an inevitable one. This is simply how humans seem to think: given the ubiquity of this particular type of machine (the computer) now, and its ability to act on highly complex sets of instructions to accomplish things in the world, a set of humans were bound to compare it to humans first, then compare humans to it, then talk about humans as though they were only more powerful versions of the particular machine, and so on.
    I also don't accept the "Universality of Computation."
    Sorry. I don't think everything can be reduced to calculation, information retrieval, processing. I do think there is a qualitative difference, further, between life and non-life (which would be "racist" as the author says, though he never says that something "racist" is therefore "not true"; I suppose we are all simply supposed to know this, since "racist" things are, by definition, "bad" and "bad" things cannot be "true").
    Nonsense.
    A machine that fools you into thinking it's not a machine is still a machine. You've just been fooled. The Turing test, for all of Turing's obvious genius and accomplishments, is silly; more importantly, it's epistemological, not physical (it is a statement about conclusions we as humans have come to about the identity of a thing, our knowledge, or seeming knowledge, of that identity, not the identity itself.)
    You can say whatever you'd like, but thinking something is alive or human or conscious or whatever does not make it so. I suppose it is fairer to say we simply cannot know, ultimately, whether something is alive in the same way that we are ourselves alive (conscious, sentient, however you'd like to describe it); saying, because we cannot really know, and the thing seems to be alive (sentient, conscious whatever) therefore it is, or may as well be, strikes me as wrong.
    (Similarly, you may not know whether or not the cat in the box is dead or alive, but it is either dead or alive, not both, or neither; you simply don't know. That is to say, it has properties in itself independent of your understanding or observation. A person is alive, sentient, intelligent, conscious, however you want to describe it in himself, not because you think he is.)
    I simply do not accept the reductionist idea that life is just non-life that can compute and act with apparent volition, and that the only difference between a person and a software program is computing power and clever enough coding. (I also think it a monstrous idea; but that's a moral/aesthetic judgment, not an argument against the validity of the concept, so I'll leave it be.)
    What is it that drives your computations? (What makes a person want to go left rather than right, decide to write an essay rather than go skiing, etc.) Only living things have desire (as one of their characteristics), volition, drive; these things cannot be reduced to the product of "computation", however complex; they are qualitatively different from that.
    Life is not just problem solving: something makes the living creature (not "entity"; a cat or a person is not a rock, a corporation or an iPad) decide to solve or attempt to solve one problem and not others, experience something, abandon something else, and so on.

    • Steve Davis

      Picking out your comment: "A machine that fools you into thinking it's not a machine is still a machine. You've just been fooled."
      I am driven to ask: How sure are you, that you are not fooling yourself? And if not, how do you know that for sure?

      • Nullius

        Well said. We are all machines - as Searle himself says. The thing about a Turing test (to establish a thinking machine) is that the system must respond in appropriate ways - especially temporally.

  • http://www.facebook.com/siglny Matt Sigl

    We are far closer than Deutsch thinks. I believe philosophically and scientifically that the breakthrough has already been made. Giulio Tononi's Integrated Information Theory of Consciousness answers almost all of Deutsch's criticisms. While the theory has made some noise in consciousness circles, most AGI developers still seem completely unaware of this landmark development; I suspect that will change in time.

    The IIT claims an identity-thesis between causation, information and consciousness. Our brains are "computers" (if you like) which, through their physical causal structure, integrate information, a mathematical, information-theoretic quantity measured by the metric Φ. Consciousness (and therefore intelligence, according to the theory) is the brain asking trillions of yes/no questions (splitting informational symmetries) and then, based on the discrimination its mechanisms have made, generating its best guess as to what the world IS. This answer is an instance of conscious experience.

    A computer chess program does not play "chess" the way we play chess because, intrinsically to the computer system, there is no mechanism which specifies that concept. Of course, to specify the concept of chess as we play it you first have to specify that you're playing a "game" (another concept, another mechanism) and that the pieces represent soldiers and royalty (another mechanism), even that the shape of the board is square etc. The problem is, ALL concepts must be built from the ground up; you can't specify a complicated concept like "chess" without having a MASSIVE amount of specified knowledge already determined. Ultimately, the basic mechanisms for human cognition are mechanisms for space and time. If you want to build a conscious computer, start by making a computer "worm" which has a rudimentary perception of space with an equally rudimentary memory system and then keep adding on the mechanisms (through some artificial-evolutionary process) to expand the concepts and generate more and more MEANING. That's how nature did it. There'll be artificial "insect" minds before there are artificial "human" ones. Evolutionary development cannot be bypassed. Ever. Every adult must have once been a baby; every mammal once a fish.

    Below is an overview of the theory, followed by a paper in which Tononi attempts to pump Φ into a computer called an animat navigating a maze. Finally there is a Science News article with a nice overview of the theory and the animat experiment.

    http://www.ncbi.nlm.nih.gov/pubmed/19098144
    http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002236
    http://www.sciencenews.org/view/feature/id/338663/title/Enriched_with_Information

    • http://twitter.com/haig haig

      Tononi's IIT comes closer than most to understanding consciousness. I think it is moving in the right direction, though not completely there yet.

      I also agree with the general approach of emulating smaller systems (C.Elegans, Zebrafish, etc.) and using forms of what may be considered artificial directed evolution to understand nervous systems from a bottom-up approach. The problem is that this approach necessarily needs to be coupled with embodiment and situated in an environment, even a social one in more advanced organisms, and that will be hard to simulate, though not impossible. However, with each insight and discovery we make we may not need to work our way up all the way to emulating primates, or even mammals, in order to engineer a more complex artificial agent. Maybe once we understand the general concepts of less complex nervous systems we will have the conceptual toolkit necessary to engineer more complex simulated brains/agents.

      • http://www.facebook.com/siglny Matt Sigl

        I totally agree. I think this will all happen quite fast, but it will still happen in a particular order...the order of natural evolutionary development. Once we understand the way our brains prune themselves to create the causal structure that gives us mind (itself a process of natural selection) we will then be able to use that knowledge to let silicon neurons reprogram themselves toward ever greater cognitive abilities. We won't (maybe can't) understand exactly what's going on in this black box. We will only understand the principles. This is often the case with evolutionary algorithms.

        Given how fast humans appeared on the scene, and given how much faster computers are evolving than DNA is, the change from insentient computer to super-intelligent artificial intelligence may be a blindingly fast one. This process is in our control only tangentially. We make it happen but we are also totally powerless to stop it. And HOW it's going to happen is probably far more determined than we realize. This is blind nature at work, not man.

        • JP

          Have to disagree pretty strongly with the claim that Tononi's work is worth anything more than the paper it's written on. Any complex system with high interconnectivity scores high in IIT. (Christof Koch -- a strong proponent of Tononi's theory -- recently argued that by this measure, the internet is more conscious than any human. He intended this as a laurel for the internet; but is anyone here ready to grant that we should move on from brains and try to figure out consciousness and intelligence by studying the internet?)

          • http://www.facebook.com/siglny Matt Sigl

            Koch argues no such thing. In a Slate article about internet consciousness (http://www.slate.com/articles/technology/future_tense/2012/09/christof_koch_robert_sawyer_could_the_internet_ever_become_conscious_.html) he simply claims that, in principle, the internet is the kind of thing that could be conscious one day and that it may have some form of sentience even now. He never says it's "more conscious than any human."

            In fact, Tononi explicitly addresses this criticism in his manifesto:
            "Moreover, computer simulations suggest that seemingly “complicated” networks with many nodes and connections, whose connection diagram superficially suggests a high level of “integration,” usually turn out to break down into small local complexes of low Φ, or to form a single entity with a small repertoire of states and therefore also of low Φ...Though we do not know how to calculate the amount of integrated information, not to mention the shape of the qualia, generated by structures such as a computer chip, the World Wide Web...it is likely that the same principles apply: high Φ requires a very special kind of complexity, not just having many elements intricately linked."

          • JP

            @Matt: Ok, I might have overreached in summarizing Koch's position. (I heard him give a talk recently where he enthused about the possibility of a conscious internet, but I suppose he didn't say *more* conscious). But I maintain that Tononi's work -- while perhaps interesting as a purely mathematical digression -- has little to offer for an understanding of consciousness. None of the quantities are computable for real systems, nor is it clear why they should have anything to do with capacities for reasoning, awareness, language, access, etc. (i.e., the attributes we commonly associate with consciousness). Most if not all neuroscientists I know would argue that the most impressive and sophisticated (and perhaps interesting) computations performed in the brain are not even accessible to consciousness.

            I think Tononi makes a category mistake no less severe than Penrose in supposing that consciousness requires some sufficiently sexy or complicated theory of mathematics or physics to explain it. (Shannon specifically railed against this kind of info-theory hucksterism: check out his 1956 essay "The Bandwagon" for a refreshing dose of sobriety).

          • http://www.facebook.com/siglny Matt Sigl

            I look forward to reading that essay, especially since Shannon makes an important appearance in Tononi's book. (And on a unicycle no less!)

            I guess I'll just conclude my thoughts by saying that I think you're correct that as of now the IIT doesn't have that much to say about higher-order mental phenomena like language, reasoning etc. But, those issues, while difficult, are not the central problem of consciousness. The question really is: What are the necessary and sufficient conditions in which there can be something it is like to be that thing. (AKA, the hard problem.) I believe it overwhelmingly likely that more complicated mental activities can never be directly programmed into a computer unless the system already has the lower-level consciousness to ground it. I, for one, believe animals can be highly conscious without being capable of self-awareness or language. Show me a computer that can navigate a room as autonomously and dexterously as a house cat and I will be more impressed by that than anything Watson or Deep Blue did. (Sometimes the self-driving Google car makes me wonder...)

            Since a current computer lacks consciousness (in IIT parlance, lacks the causal structure to integrate information), it also lacks meaning. The computer cannot express meaning because it doesn't understand anything. Consciousness, both intuitively and according to the IIT, is necessary for understanding. We can reason about things only because high-order thought has access to concepts which ultimately are grounded in phenomenology.

  • http://profiles.google.com/daedalus4u David Whitlock

    The problem is that what are called “consciousness” and “AGI” are really cognitive illusions. They are the imaginary homunculi that our human hyperactive agency detection “perceives” (actually a false positive) when they are not really there.

    There is no “agent” that can be identified as “consciousness”, or as “AGI” or the “mind”. These are emergent properties of a brain. They are generated from the bottom-up (that is what it means for something to be an emergent property), they are not agents that direct things from the top-down (what it means to be an agent).

    The literature on AGI and intelligence has a number of blind spots. The idea of AGI is a myth.

    http://masi.cscs.lsa.umich.edu/~crshalizi/weblog/523.html

    Researchers on IQ and intelligence really want there to be such a thing as AGI, and this wanting blinds them to accepting flawed reasoning about how IQ can be measured with tests, as if it is a real thing.

    • http://www.facebook.com/siglny Matt Sigl

      Sorry but, there is either something it is like to be something, or there is not. For example: There is something it is like to be a bat but there is nothing it is like to be a baseball bat. Consciousness is a real phenomenon and, for any theory of ANYTHING, that has to be axiom number one. Cogito Ergo Sum. If consciousness is an illusion it couldn't be an illusion OF anything. What kind of illusion is that?

      • http://profiles.google.com/daedalus4u David Whitlock

        It is like all illusions: a defect in our pattern-recognition neural networks that do sensory processing. It is like optical illusions. Our visual processing neural networks fill in gaps. We know the gaps are still there, so we are able to perceive the illusion cognitively, even though our eyes still "see" the illusion.

        We know that whatever it is that "we" are, that "I" is not self-identical with the "I" of yesterday or of tomorrow. All that means is that our self-recognition neuroanatomy doesn't have the resolution to perceive the subtle differences that occur day-to-day.

        If we don't have the resolution to perceive it, then it is easy to default to the position that there is no difference, that there is continuity of consciousness and self-identity even when those are illusions (albeit persistent ones).

        • http://www.facebook.com/siglny Matt Sigl

          I'm happy to grant the relativism (if not "illusion") of self-identity. But, consciousness is something else. It's the mystery of being itself. It's not a matter of mis-perceiving something. It is the act of perceiving itself. It is why we're not zombies. It is why there is such an ineffable thing as the COLOR red. It is all we have, and all we could have. It is the inside of the world.

      • hasan

        Cogito Ergo Sum isn't some absolute truth. Buddhism talks about a consciousness free from thought. Buddhists stress how futile it is to describe the mind using symbols, which pretty much makes math useless in the study of the mind, if they are correct. I'm pretty sure many would argue that if you can't use math to describe something, it's non-existent or you're stupid. But that's your view of the nature of the universe. Present science cannot claim it has explained everything using math, so basically that idea is as hard to justify as a Buddhist theory of the universe.

    • BLANDCorporatio

      I agree with your premise, I might agree with your conclusion, I disagree with the interpretation of said conclusion.

      It's not controversial that any macroscopic phenomenon is the result of many interactions of some underlying simpler stuff. Nonetheless, it wouldn't be too productive to insist that all explanations of how, say, a computer works, must always and only be formulated in terms of elementary particle interaction. It does make practical sense to speak of the "programs" in that computer and what they were "intended" to do.

      Call AGI*, consciousness** and the mind illusions if you must, but they are useful illusions. As in, they allow some measure of understanding of, predictive power about, and influencing power over, complex systems like other human beings. I'd further conjecture that these approximate models/"illusions" work because the system they approximate has certain coherence properties, the study of which may be fruitful for both understanding how brains work, and for designing artificial minds if we wanted to (but why would we?).

      *, **: true, strictly speaking I'm defending only the concept of 'mind'. What I mean is that, whether a mind is 'intelligent' or not particularly gifted, conscious or not self-aware, and an emergent process, it still sometimes makes sense to speak about something as if it had a mind. I'm referring to Daniel Dennett's 'intentional stance' here.

  • http://twitter.com/haig haig

    I enjoyed David's thoughts, but at the end he makes an error when he says:
    "the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough."

    The differences in DNA between humans and chimpanzees specify the differences in phenotype from chimp to human, but you can't get the human without all the previous evolutionary advances. It is not that we understand everything about brains except the parts that are responsible for higher functioning in humans. On the contrary, AI has had the most success in recreating the highest level reasoning processes, but does not have a clue how to incorporate the lower level behavior of nervous systems. Chess is easy, object recognition is hard.

    What needs to be acknowledged is that the goals of AGI or Strong AI (things like nuanced language recognition, common-sense reasoning, creativity, metaphorical thinking, etc.) are activities that require all the baggage that came before they arose in humans, and that understanding those processes means understanding them as additions to, and exploitations of, the lower-level functions of less complex nervous systems.

    • http://synapse9.com/signals Jessie Henshaw

      Good point, it's not just the finishing touches of life we don't understand how to reproduce in a computer, it's really every part. There's a very fundamental difference between informational relationships and physical ones that is visible enough, but not yet of interest to science, it seems. Physical processes work by locally emerging complex development processes, inherently not possible to represent in information. It's an odd implication of the type of continuity in physical systems implied by energy conservation. Information processes just create images of the rules made up by people.

    • http://www.facebook.com/people/Phil-Snyder/100000273680996 Phil Snyder

      Thoughtful comment, but you have made an error as well.

      "It is not that we understand everything about brains except the parts that are responsible for higher functioning in humans. On the contrary, AI has had the most success in recreating the highest level reasoning processes, but does not have a clue how to incorporate the lower level behavior of nervous systems. Chess is easy, object recognition is hard."

      The way we humans think about chess, as David mentioned, is qualitatively different from the way our current "AI" systems think about chess. A computer goes through millions of possible situations each turn, computing which one is the most advantageous. A grandmaster only considers a limited number of moves, each a few turns in, guided mainly by experience and intuition as to which moves to consider. Thus, when a computer beats a human at chess, it gives the appearance that the computer has achieved higher-level thinking.
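
      Just to make the contrast concrete, here is a minimal Python sketch of the brute-force game-tree search described above; 'position', 'legal_moves', 'evaluate' and 'apply_move' are hypothetical placeholders, not any real chess engine's API:

          def minimax(position, depth, maximizing, legal_moves, evaluate, apply_move):
              # Exhaustively score every line of play down to 'depth' plies.
              if depth == 0 or not legal_moves(position):
                  return evaluate(position)          # static score of the position
              scores = [
                  minimax(apply_move(position, m), depth - 1, not maximizing,
                          legal_moves, evaluate, apply_move)
                  for m in legal_moves(position)
              ]
              return max(scores) if maximizing else min(scores)

      A grandmaster does nothing like this exhaustive enumeration, which is exactly the point.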

      We understand how neurons work in transmitting signals, but our understanding quickly breaks down when we reach the symbol level of reasoning (groups of neurons), and deteriorates towards zero as we move up the hierarchy of intelligence. This is a fundamental problem in AGI programming (as David mentioned). To build AGI machines, we must first understand ourselves. An interesting proposition, to say the least. Does this mean that by the time we are capable of AGI we can effectively predict human behavior? And in turn, predict the behavior of AGI? (The alternative to this is somehow accidentally stumbling upon the qualitative feature of AGI, an event that has what I consider an infinitesimally small chance of occurring.)

      For a more in-depth look into the fundamental problems of AGI, take a look at Gödel, Escher, Bach: An Eternal Golden Braid (I'm only half kidding... that book is huge!).

  • Arthur

    AI (or AGI) has been solved both in theory and in software that thinks in English or in Russian.

  • PubliusC

    "Expecting to create an AGI without first understanding how it works is like expecting skyscrapers to fly if we build them tall enough." Um, no it isn't. Skyscrapers by their natuere do not fly. Ever. Brains by their nature are thinking machines.
    Deutsch is making a BIG assumption: that to achieve AGI we must have “a theory that explains how brains create explanations”; i.e., a theory of consciousness. It may be that all that is required is to reverse-engineer the brain (in software) and that consciousness will “emerge”. I think this is the Kurzweil view and that of others like Henry Markram of the Blue Brain Project. Build something that reproduces the structure and function of the brain, and it should act like a brain. Blue Brain has simulated in software a rat neocortical column of 10,000 neurons, and when they “turned it on” it began spontaneously generating alpha waves. They don’t know how. It would be nice to know how, but is it necessary? Still a long way to go to a whole brain simulation, but progress is accelerating with these efforts and the Human Connectome Project. While I am no expert, I prefer the functional/deconstructivist view to the idea that we need a “philosophy.” I think the brain is just a very complex machine.
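
    As a sense of scale for what "simulating neurons in software" means at the very smallest end, here is a minimal leaky integrate-and-fire neuron in Python; the parameters are purely illustrative, and this is nothing like Blue Brain's detailed multi-compartment models:

        def simulate_lif(current_nA, steps=1000, dt=0.1,        # dt in ms
                         tau=20.0, v_rest=-65.0, v_reset=-65.0,
                         v_thresh=-50.0, r_mohm=10.0):
            """Return spike times (ms) of a leaky integrate-and-fire neuron."""
            v, spikes = v_rest, []
            for step in range(steps):
                dv = (-(v - v_rest) + r_mohm * current_nA) / tau   # leak + input
                v += dv * dt
                if v >= v_thresh:            # threshold crossed: spike and reset
                    spikes.append(step * dt)
                    v = v_reset
            return spikes

        print(len(simulate_lif(2.0)), "spikes in 100 ms")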

    • JP

      Nicely put, PubliusC. (Though I would maintain a little skepticism when weighing Blue Brain project claims about how close we are to building realistic simulations).

    • BLANDCorporatio

      "It would be nice to know how, but is it necessary?"

      Yes. Yes it is. "We must know - we will know!" ... at least, if you accept this basic imperative of scientific curiosity.

      It's also the case that 'reverse engineering' includes trying to understand how a machine works (and specifically what the specifications/intentions/something correspondent to that for an evolved system may be).

      Simulating/replicating, all on their lonesome and without understanding some bigger picture, are not sufficient if what you want is repairing or improvement.

      Which is why when reverse engineering complex machinery, one typically tries to figure out what's going on and why. Because whoever does reverse engineering often wants to recover from a technological gap AND then step in front, as well as being able to maintain the newly acquired machinery.

    • http://www.facebook.com/profile.php?id=1156727397 Casey Atchison

      So does Deutsch; what kind (philosophy) of machine is the question.

      I think.

  • http://www.facebook.com/people/Mel-Cooper/772089111 Mel Cooper

    AGI failed and is still failing along the lines pursued by e.g. Goertzel et alii because it didn't address, prima facie, the GROUNDING problem. Grounding is this essential property of minds to be dynamical processes attuned to properly balance accommodation versus assimilation, i.e., to be more concrete, this essential ability to sift ONLY relevant information from the environment to build a suitable behaviour to sustain itself.

    To cut short a long story it is McCarthy's and Simon's symbolic representationalist hypothesis applied e.g. in the seminal General Problem Solver in the Sixties which falls short.

    Meaning in conventional computers doesn't exist at all. A computer is only crunching "symbols" according to syntactic rules (however complex and quickly processed). Only the end user, i.e. a human being is ascribing meaning to the inputs and outputs.

    Inasmuch as a computer has no real grounding in its environment, it is unable to ascribe, in a reflective process, meaning to some of its operations and prune the set of syntactical combinations to be explored for relevant behavior. This is why we hit the wall of combinatorial explosion very soon, whenever a promising AI program has to move beyond the "toy problem" stage. This is also why we are engulfed in such (false) paradoxes as Searle's "Chinese Room" when describing the functional operation of conventional AI symbolic programs.

    Several flaws of the representationalist hypothesis later were addressed in e.g. the "Frame problem", "non monotonic reasoning", and various corrective attempts to apply formal logic on models of folk psychology or even in more ambitious attempts at codifying "common sense knowledge" (e.g. Lenat CYC).

    All those attempts miserably failed or are failing because, still, the systems envisioned are ungrounded.

    Besides, note that, a contrario, they didn't fail because the "information" was too coarse-grained and serial, as hypothesized by the "connectionist" school (see the seminal Rumelhart & McClelland PDP volumes), which also failed in the Nineties.

    This is why robotic research is so essential, but the real kind, not robot emulations "living" in a virtual world like e.g. Goertzel's OpenCog, which is only the latest avatar of Winograd's SHRDLU line of research and should fail for the same reasons.

    Real robots cannot evade the real world and will have to abide by real "drives" or well controlled "impulses" to achieve real "goals", i.e. a kind of superior homeostasis and not just the maximization of some "utility function" inscribed by some external Deus ex machina (the programmer).

    Then, the relevant philosophical paradigm is the enactive school as represented e.g. by Maturana and Varela (see e.g http://www.amazon.com/Tree-Knowledge-Humberto-R-Maturana/dp/0877736421) and the embodied cognition movement.

    The trouble is that it has actually puzzling consequences for our common and naive representation of "reality".

    In this paradigm, there is no unequivocal "right" representation (with, e.g., Fregean referential integrity in the external "reality"), but a set of progressively evolved mappings converging to the most useful one: internal representations of the "world" derived by an entity endowed with bounded rationality, with sensors and effectors of variable capabilities, and with whatever actions such an entity is able to exert upon the largely hypothesized external world.

    We can conclude that reality is more a synthesis than a progressive "unveiling". At the innermost, it is a synthesis resulting from the pressure of the external world (through various stimuli) and from the actions taken to "control" the most urgent needs prescribed to maintain the internal system's (the mind's) homeostasis when facing the "environment", and doing so to abide by the imperatives of a sustained "life". And this synthesis extends from those modifications the actions are imprinting on the environment. It is already at this juncture that the realm of "culture" really begins, and this culture begins very early, e.g. among lifeforms. It is not a coincidence that we use the same word for, e.g., a "culture" of bacteria.
    But when our actions lack retro-actions from the environment, as e.g. when we are observing a starry night, then for the inquisitive mind turned meditative, reality looks more like an unveiling.

    Reality as synthesis, even more than the preconception of reality as "unveiling", is a never-ending open loop, and this is the real beauty of the endeavor.

  • http://synapse9.com/signals Jessie Henshaw

    I think we're far further from the "the Grail" than it seems you all think. There's a basic problem that the macroscopic world is not ruled by statistics, but by energy conservation, and the necessity that self-organization originate local continuities of accumulative organizational development. There's a theorem. An artificial intelligence would be as stuck as the rest of us, needing to learn by looking around to see what's happening while trying to ignore its fevered thoughts.

    We could have data mining to train software that prompts us "hey, look at this", to identify emergent events of development, and that would be wonderful to have for lots of things. It still leaves us with a world in which individual cells of self-organization are actively cooking up new stuff to surprise us, stuff that'll be hidden from our view till we *discover* it. So no amount of conjuring will replace the accumulative learning of 'prowling around' and putting together the pieces of real intelligence. If we stumble across some other intelligence (such as a new form of science, perhaps) we can try to stretch our minds to fit its behaviors, but we'd have nothing at all to base an intelligence about it on otherwise.

  • Jonny P

    David should stick to physics--this is the most idiotic essay I've read since coming across Tononi's work on consciousness. There's a very simple disproof of Deutsch's thesis that either "Popperian epistemology" or differences in chimp vs human DNA will be necessary for AGI, which is actually latent in Deutsch's own arguments about physics and computation. If Deutsch believes (as he claims) in physics, and believes that humans have general intelligence, then he will grant that it will theoretically be possible someday to simulate a human brain down to the level of single neurons, molecules, etc., and that such simulated minds should exhibit the same functional properties as human minds. So clearly there's at least one route to AGI that doesn't require Popper or chimps. (Certainly not the only route: personally I think statistical AI will get there long before brain simulations, but that's a topic for another essay.)

    (FWIW: few psychologists, biologists, or neuroscientists would be quite so cavalier about dismissing the general intelligence of non-human primates or other members of Kingdom Animalia; in practice they have much to teach us about how brains and neurons evolved to work as they do).

    • PeterPetersonSr

      Jonny P, please read the article again, and count how many of your assertions are totally unsubstantiated. (Hint: you wrote 6 sentences). I really like this intelligent and thoughtful discussion, except for your comments.

    • Tom P

      I agree - that is the main point - we have our own intelligence to learn from. There might indeed be physical mechanisms to make them work that we don't know about now, but we have no reason to think that these cannot be understood and utilized to make AGIs. Also, from evolution, it is apparent that quantitative changes can produce qualitative changes, so I don't understand his thinking that proceeding in that direction is futile.

    • David

      A functional copy of a brain would have naturally-occurring intelligence -- the same intelligence as was in the brain -- not artificial intelligence. Putting it in different hardware would not make it a different person.

  • http://www.facebook.com/people/John-Edwards-Cummings/100000033274646 John Edwards Cummings

    I'd like to kindly point out two things:
    First, there is no such thing as "mainstream psychology".
    The field of psychological inquiry is a balkanized mish-mash of a vast number of "schools" which are, more often than not, in a state of "cold war" against one another. There's very little consensus in psychology ("two psychologists have three opinions about any problem" :) ) and there isn't even consensus as to the total number of functioning "schools of psychological thought" (consider this: while paleofreudism is all but extinct and can be considered de-facto defunct, it was de-jure never formally "abolished" or "disproved", it just... well, one could say it just went out of fashion)

    Second, behaviorism is quite alive and kicking (and in its current shape and form handles "mentalisms" such as thoughts and cognition more or less ;) fine), and serves as one of the theoretical foundations underlying Cognitive Behavioral Therapy, which is among the few psychological interventions that have been found to be actually effective when held up to rigorous research in accordance with the principles of evidence-based medicine.

    So behaviorism, while slightly unfashionable, is doing fine - much better than many of its more popular (but profoundly less effective) psychodynamic "relatives"

  • archaeopteryx

    Much of human evolution has been driven by human need. If we design AGI beings (for lack of a better term) that can perform functions more perfectly than humans, and by extension, can evolve far more rapidly than the human race has done, doesn't that remove the need for any further evolution by humans? I can envision a day when AGI beings have advanced so far, that, for one of them, trying to hold a "conversation" with the brightest of us, would be much like one of us attempting to hold a complex conversation with a Bonobo. I find myself frequently reminded of actor Jeff Goldblum's memorable line from "Jurassic Park" - "Just because you could have, doesn't mean you should have --"

  • Colin Hales

    Philosophy of _Science_ is the only philosophy that will lead to AGI.
    Want to know what the missing AGI ingredient really is? The presupposition that AGI involves computers. The greatest technological blind spot of all time, and a very recent infection. See my article here:
    Hales, C. G. 2012 The modern phlogiston: why ‘thinking machines’ don’t need computers "TheConversation". The Conversation Media Group.
    http://theconversation.edu.au/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881
    The map and the territory have been confused ever since computers came into existence....and surprise surprise ... AGI attempts have failed for exactly that length of time..... The day we properly learn to distinguish "the universe as computation" from "computed models of the universe" is the day AGI becomes possible.
    Think about it.
    Never mind. I've started building inorganic AGI (components only so far), so at least one person is not doing AGI with computers!
    Cheers,
    Colin Hales
    col.hales@gmail.com

  • Emanuel Falkenauer

    Intriguing... but clearly flawed in so many places that the main message must be flawed as well. To be sure, I do agree with two statements, namely that (1) AGI MUST be possible given that physics applies to our brains (unless of course you subscribe to a supernatural nature of the soul), and that (2) one of the most defining aspects of what humans can do better than anything else is creativity.

    But apart from that, here are just a few major flaws as far as I'm concerned (in no particular order), on top of the many others already identified in previous comments:

    "If one works towards programs whose ‘thinking’ is constitutionally
    incapable of violating predetermined constraints, one is trying to
    engineer away the defining attribute of an intelligent being, of a
    person: namely creativity."

    Aha, so Deutsch being an intelligent being, he has NO "predetermined
    constraints" - he can for instance see in ultra-violet or do bat
    echolocation. Better still, he can compose as well as Mozart, invent
    paradigm-changing physics like Einstein and be as good a politician as
    Mandela - even better than any of them if he really tries! Well of
    course not (unless he's grossly preposterous), because HE TOO has
    limits, as does any finite system. Should we therefore conclude Deutsch
    has no creativity? Perhaps...

    "[...] observing on thousands of consecutive occasions that on calendars
    the
    first two digits of the year were ‘19’ [...] until, one day, they
    started being ‘20’. [...] How could I have ‘extrapolated’ that there
    would be such a sharp departure from an unbroken pattern of experiences
    [...]? Because it is simply not true that knowledge comes from
    extrapolating repeated observations."

    This is such a lame argument that that it makes me cringe. For I do
    suppose that Deutsch saw sequences of numbers going up by units, and the
    one that makes the year's number go up at the end of every December 31?
    Taking these PAST observations together, only a chimpanzee would expect
    another '19' to occur the day after 31 December 1999. Lo and behold,
    ‘the future is like the past’ actually works!

    "[...] the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation."

    Complete nonsense: Mendel discovered heredity without having a clue about genetics, Schrödinger predicted the "aperiodic crystals" that must lie behind heredity before DNA was discovered, Pasteur created vaccines not knowing anything about microbiology, Kepler derived his laws before Newton was even born (and, btw, we still have no clue WHAT gravitation really is, yet Newtonian physics will probably stay with us till the end of humankind!), etc. etc. etc. In fact, explanation very often comes AFTER pertinent sequences are established by mere repeated observations. Or would Deutsch argue that human mothers were routinely "expected" to give birth to cows, until genetics gave us the explanation of why that was improbable?!

    "[Turing test cannot] be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves."

    Cheap oversimplification: the evolutionary algorithm would not need programmed 'judges' to supply the evaluation function (a sort of 'reward and punishment', if you wish), because it could interact with real PEOPLE, just like all of us use other real people to learn how to interact with them! Of course, I'm not saying it would be a very practical setup for the evolutionary algorithm, but Deutsch's argument is clearly flawed.
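
    For what it's worth, the generic structure of such an evolutionary algorithm is trivial to write down; in the (admittedly impractical) setup I describe, the 'fitness' function would be supplied by the judgements of real people rather than by anything the programmer wrote. A rough Python sketch, with 'mutate' and 'fitness' as hypothetical placeholders:

        import random

        def evolve(population, fitness, mutate, generations=100, keep=10):
            # Repeatedly keep the best candidates and refill with mutated copies.
            for _ in range(generations):
                ranked = sorted(population, key=fitness, reverse=True)
                parents = ranked[:keep]                      # selection
                population = parents + [mutate(random.choice(parents))
                                        for _ in range(len(population) - keep)]
            return max(population, key=fitness)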

    "[Creating a program that would give an explanation of Dark Matter] that
    is plausible and rigorous enough [...]. But how would you specify its
    task to computer programmers? [...] writing that algorithm (without
    first making new discoveries in physics
    and hiding them in the program) is exactly what you wanted the
    programmers to do!"

    Nonsense as well: I don't know about Dark Matter, but the "task
    specification" for Kepler's laws was pretty trivial indeed, namely "find
    a set of simple equations that would stick to the observations at
    hand". By doing so, did Kepler discover a genuine new explanation of the
    movement of planets? Most people would say so, but Deutsch would
    clearly disagree.
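
    To make "stick to the observations at hand" concrete: the exponent in Kepler's third law (T² ∝ a³) can be recovered by nothing more than a least-squares fit on log-transformed data. A rough Python sketch (the orbital values are the usual approximate textbook figures):

        from math import log

        # (semi-major axis in AU, orbital period in years)
        planets = [(0.387, 0.241), (0.723, 0.615), (1.000, 1.000),
                   (1.524, 1.881), (5.203, 11.862), (9.537, 29.457)]

        xs = [log(a) for a, _ in planets]
        ys = [log(t) for _, t in planets]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        print(f"fitted exponent: {slope:.3f}")   # comes out very close to 1.5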

    If you write an optimization program for a complex problem (as I do for a living) and it comes up with a solution far better than you even thought possible, how do you qualify that outcome? Deutsch would argue that I have already programmed that result... but that is definitely NOT the case, because if I knew it, I wouldn't even bother with the programming: I would just write down the result!

    After all those blunders, I really don't think we need a new "philosophy" to achieve AGI. I do agree that most current AGI approaches are failing miserably, but I have no idea how a "philosophy" could help.

    • http://www.facebook.com/chris.m.merritt Chris Merritt

      I'm not sure why Deutsch, who is so clear and logical, is so often met with fierce objection, even rudeness. I tend to think he's brilliant and right, but at the very least he's presenting fascinating things to debate, which obviously strike chords for many intellectuals.

      "Aha, so Deutsch being an intelligent being, he has NO "predetermined
      constraints" - he can for instance see in ultra-violet or do bat
      echolocation."

      If one had the right explanations, I'm sure the human mind wouldn't struggle too much with that. Tests have already been done, for example, on blind patients and tongue electrodes. An image from a camera can be translated into signals to the tongue. In a short time, patients create explanations for the feelings in their body in relation to the light reaching the lens. They can "see" with a crazy tongue device thing. Echolocation is done every day by humans with the right explanation (or a device built by someone with the explanation). In fact, we're the ONLY animal that can do it artificially, speaking to the power of explanation.

      The fact that David is not Einstein actually proves the opposite of your point. The story of science is one written by explanation-creating apes. Einstein built on explanations by Newton - but only by embodying those explanations could he have moved forward. Similarly, Beethoven was inspired by the musical (mathematical) concepts of Bach. Explanation creates the most endless stream of knowledge we know of.

      The only thing that holds David back from being Beethoven is a pitifully short lifespan and lack of musical explanation, as well as the specific restraints on Beethoven's mind and body, culture, personality, etc.

      Although, music is another area of unanswered weirdness. What the hell is music? It must be something. I would guess it's a peek into the objective beauty of mathematical concepts. A composition is a show of mathematical prowess and skepticism. It's a show of this magic THING we're talking about that defines general intelligence. I've heard Beethoven pieces that have gripped my heart in a way that defies communication (at least by me). I've been moved literally to tears by the right combination of tones.

      I think the only explanation there is that Beethoven was born on another planet and was brought to Earth as a gift from Kitten Jesus.

      But wait! Maybe music is just the cry of an explanation-creating machine as he struggles to understand the world, and admits he cannot understand it all.....

      • Emanuel Falkenauer

        Hello Chris,

        "I'm not sure why Deutsch is so clear and logical but why he is often met with fierce objection, even rudeness." I don't know to what rudeness you refer? If it's the "rudeness" of showing an argument is plain wrong by simply taking it "ad absurdum", then clearly most normal intellectual discussions are "rude". My objections were not "fierce" (not to mention "rude"!) - they were simple identifications of obvious flaws that a "logical" paper should not contain. I am all for "presenting fascinating things to debate" (actually I love it!), but the arguments do need to hold water, otherwise its gibberish and fairy tales.

        "[...] blind patients and tongue electrodes [...]". This is completely beside the point: please re-read David's argument again, and you will realize that it basically boils down to the following (unless I missed something): "We must not try to limit the intellectual powers of the AGI contraptions [by e.g. Asimov's "laws"]... because that would destroy their creativity". That is patently not true: even without the capacity to destroy all humanity, they could still be as creative as Beethoven (or David, for that matter!), albeit not in the field of anti-human warfare. So the argument is flawed, and sorry if it's shown as such by showing its absurdity (hint: that's not being "rude").
        "The only thing that holds David back from being Beethoven is [...] the specific restraints on Beethoven's mind and body, culture, personality, etc." So we DO agree after all: even within the "specific restraints on his mind", Beethoven was creative aplenty! So how could anyone say that any limits on AGI minds would destroy their creativity?? That's all I wanted to point out.

        "The story of science is one written by explanation-creating apes. Einstein built on explanations by Newton - but only by embodying those explanations could he have moved forward." Once again, this is beside the point (gosh, I hope you will not say I'm being "rude" with you as well!): my objection (call it "fierce" if you want) was directed against the idea in David's paper that explanation MUST somehow arrive from thin air BEFORE we can perceive a pertinent regularity in the world around us ("[...] the explanation ALWAYS comes FIRST. Without that, ANY continuation of ANY sequence constitutes ‘the same thing happening again’ under some explanation"). Once again, that is patently not true (I won't repeat me previous examples again).
        Note that I'm glad you took the example of Newton and Einstein, because it adds nicely to my argument. For here's the situation: Newton actually had NO explanation for a good part of his physics (simple: even we are still in the dark as to WHAT gravity really is!), he "just" gave a mathematical formulation of regularities [a.k.a. "sequences"] he detected (of course it still was a feat worthy of a Genius!). So since Newton couldn't have an explanation at all, how come he DID see the sequences if (according to David) "Without that, ANY continuation of ANY sequence constitutes ‘the same thing happening again’"? Well, probably because ‘the same thing happening again’ actually does work.

        "Although, music is another area of unanswered weirdness. What the hell is music?" Agreed!! Note that if I had a radio station, I'd add "You first heard it from Emanuel!" ;-) "I think the only explanation there is that Beethoven was born on another planet and was brought to Earth as a gift from Kitten Jesus." Nice shot... but I'd rather take him as an example of Creativity even within limits (e.g. I doubt Beethoven was a good mathematician, or whatever else than a musician). Quite funilly, Einstein on the other hand (while embodying the NONEXISTENT explanations of Newton!) didn't have "explanations" FIRST either: he had the hitch that he should still see his reflection in a mirror even when moving at light speed... and trying to put it into math, he stumbled on the Lorentz Transformation! Of course that it, too, took a Genius to do that (and a great deal of Creativity!)... but it's really far closer to fitting an existing formula to an observed "series" than to inventing an "explanation" out of thin air and only THEN checking that it correctly explained the obervations.

    • RRand

      I think your criticisms are spot on, but I frankly don't see the rationale behind your conclusion.

      "I do agree that most current AGI approaches are failing miserably, but I have no idea how a "philosophy" could help."

      Why do you think the current approaches are failing? When NLP stops advancing I'll recognize that we have a problem, but it hasn't and I see new advances every day. Is the speed of progress too slow? The brain is a vastly more powerful machine in many respects than our computers, and it has been specifically programmed for NLP over millennia!

      Are things not general enough? Truth be told, Machine Learning is pretty general (though not completely general, it expects inputs it can transform into some kind of comparable vectors, as a bare minimum), but then, I have no reason to believe the brain is completely general. It clearly has different wiring (let alone programming) for the complex task of keeping our balance than for solving complex equations.

      As for a new "philosophy", I can certainly see a few new paradigms helping (we certainly could use a few more). This is as complex a problem as we're likely to face, and if we can take new insights from any field, whether that's analysis of the brain or a coherent definition of consciousness and how it works, it can be important.

      • Emanuel Falkenauer

        Hello RRand,

        "I think your criticisms are spot on, but I frankly don't see the rationale behind your conclusion. [...] Why do you think the current approaches are failing?"

        Thank you for the "spot on". As for the perceived failing of current approaches... I happen to have been humbly in that "business" (i.e. AI) for many years, and up to now I have simply not seen ANY approach that would really make me exclaim "Wow, this is really smart... THIS could really be IT!" The problem I have with current approaches is that they all appear to address specific "bits and pieces" of what we commonly call "intelligence", while I'm convinced that the latter is "a whole", not just a sum of largely independent parts.

        "When NLP stops advancing I'll recognize that we have a problem, but it hasn't and I see new advances every day." This is actually a great example, given that it's very actively being researched (given the potential economic impact, I guess). Well, I got the "world's most advanced phone" (iPhone5) the other day, and when it arrived, my 12 years old daughter exclaimed "Wow, you have SIRI in it, you'll just talk to it!!" I immediately told her not to get overexcited, because "It doesn't really work". She felt offended, because I didn't even try yet... so I gave it a ride with her.
        My first question to SIRI was basically taken out from their ads, and it did work as advertised (a major success already!). She said "See? It works, I told you!" Then I tried my second question - it was so TRIVIAL that even my daughter didn't see any problem asking it, but there was a "twist": I intentionally asked something that could only be correctly understood with at least a minimal GENERAL KNOWLEDGE of everyday life we all share. The result was as I expected: SIRI was completely taken aback (to much sorrow of my daughter). My third question was even nastier (asking for even more GENERAL KNOWLEDGE of everyday life) and the outcome made my (12 years old!) daughter explode in laughter: it replied something that had absolutely nothing to do with my question – not just a wrong answer, but a complete NONSENSE that exposed the total ignorance of the thing of our everyday life.

        "Is the speed of progress to slow? The brain is a vastly more powerful machine in many respects than our computers, and it has been specifically programmed for NLP over millenia!" I don’t think the progress is too slow… but I am pretty sure that out brain was NOT specifically programmed for NLP! It was "programmed" for an altogether different reason, namely our survival in a hostile environment – being able to talk to each other is just a very small part of it. A much larger part of it is understanding the world around us in all its complexity… but that BASIC problem doesn’t seem to be addressed by any of the current approaches. Which is exactly why you still can't just dictate your texts to the computer yet (even though you could to even a moderately "skilled" secretary). And my personal guess is that you won't EVER… unless somebody takes a wholly different approach to AI from what is being done now.

        "Are things not general enough? Truth be told, Machine Learning is pretty general […]" I think you just nailed it: indeed, it is NOT general enough! Even Blue Gene has absolutely no idea what it means to have a leaking pipe in your kitchen, even though we all DO know you'll need some specific tools to repair it (hint: it's not a 30-high crane, nor an atom bomb). The big problem is that the NLP people will say it's "a detail", because they have just come up with a faster version of the Viterbi algorithm… yet just hearing "kitchen", "sink" and "pipe" will immediately give YOU all you need to really understand the situation (and, crucially, correctly interpret the words Viterbi had a hard time with, due perhaps to some strange accent) - and even if their Viterbi run in nanoseconds, they would still MISS the all-important Big Picture.

        I have once come across an absolutely brilliant example that exactly sums up what I mean by lack of GENERAL knowledge. It runs as follows:
        An 18-year-old girl collects 1kg of blackberries in one hour in the forest. A 21-year-old boy collects 1.25kg of blackberries in one hour in the forest. How many blackberries will they collect when left all alone together in the forest for two hours?
        As long as the program doesn't reply "Quite probably none at all!", it will be completely disconnected from everyday reality… i.e. from any form of knowledge we could truly call “intelligence”.
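
        For the record, the naive arithmetic that a "rate problem" solver would produce is trivial (a two-line Python illustration):

            rate_girl, rate_boy, hours = 1.0, 1.25, 2
            print(f"naive answer: {(rate_girl + rate_boy) * hours} kg")   # 4.5 kg

        The gap between that 4.5 kg and the right answer is precisely the missing general knowledge.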

        "I have no reason to believe the brain is completely general. It clearly has different wiring (let alone programming) for the complex task of keeping our balance than for solving complex equations." You are right (I believe!) that it is not completely general: its sole purpose must be to keep us alive – the complex equations must be just a by-product of its general intelligence (which was never seriously addressed by current AI). But I’d disagree that the “wiring” for equations and balance would be completely different, I think that there must be a common mechanism for BOTH, for the simple reason that I can't fathom that Nature (i.e. Evolution) would develop a special mechanism for complex equations… it wouldn't even have the time (i.e. millennia) for it!

        "As for a new 'philosophy', I can certainly see a few new paradigms helping (we certainly could use a few more). This is as complex a problem as we're likely to face, and if we can take new insights from any field, whether that's analysis of the brain or a coherent definition of consciousness and how it works, it can be important."
        You may well have a point here (and perhaps I was too harsh with David): we clearly need completely new paradigms to tackle this correctly – and if David wants to call it "a new philosophy", then be it. But of course, he first needs to get rid of arguments that are so clearly flawed.

        • RRand

          I'll address a few of your points:

          1) We'll have to disagree about advances in AI needing to be made as a "whole" rather than "bits and parts". Bits and parts seem to be working pretty well: they got us Watson and Wolfram Alpha and self-driving cars, and locomotion, and quite a bit more. I'll be prepared to admit that gradual change cannot bring us to artificial intelligence (whether it passes the Deutsch-test - whatever that may be - or not) when it stops advancing.

          2) Regarding Siri: Apple has never been an artificial intelligence company. I was shocked that they were even briefly ahead of Google, which actually employs an enormous number of Machine Learning and NLP researchers, and has been building up fantastic training sets. Right now, I'm not impressed with what they're including with Android or Google Voice, but we'll see what happens. I have pretty high hopes for transcription in the near future.

          3) A few reasons that I think our brain functions governing locomotion and higher order reasoning are distinct: a) They don't appear to be correlated. Excellent hand-eye coordination, balance etc. don't seem to imply higher reasoning ability, or vice-versa. There may be studies indicating otherwise, but I haven't seen them. b) Though most animals lack higher order reasoning abilities, they almost all have locomotive control. c) Certain animals have fantastic built-in functions. I once heard a presentation by an expert in Computer Vision who was discussing a sort of osprey that dives from tremendous heights to catch fish below the water. Here's the problem the bird faces: In order to catch the fish, it has to aim itself using its wings until the moment it hits water. At the same time it must hit the water with its wings folded or they will shatter. Given that it drops from different heights, accelerating at near g, it has to take an integral to know the moment it will hit water (a back-of-envelope version of this calculation is sketched after this comment). Now I don't think these birds can consciously do calculus, but they can predict the moment they'll hit water.

          Regarding built-in NLP, humans have a shocking ability, only when they're young, to absorb languages. Not only that, children are language-making machines: they are how pidgin dialects turn into full-fledged creoles. Given the complexity of language, it's hard for me to believe that isn't hard-wired.

          A final thought that just occurred to me - consider people with prosopagnosia or Asperger's. Neither is correlated with decreased intelligence, but both consist of the inability to solve a specific, difficult machine learning problem, the former being face recognition, the latter being somewhat more complex.

          4) Finally, I don't think knowledge of human behavior is necessary for anything to be called intelligence. As for context in general, computers are becoming better at recognizing it, even if the tools they use to do so are simplistic. I think we're getting there.

          (By the way, I also do machine learning research, though I probably haven't been at it as long as you have.)
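
          The back-of-envelope version of the diving-bird timing in point 3: for a drop from rest through height h at acceleration g (ignoring air resistance), the integral collapses to t = sqrt(2h/g). A quick Python check, with an arbitrary 30 m drop as the example height:

              from math import sqrt

              def time_to_impact(h_metres, g=9.81):
                  # Time to fall from rest through h, ignoring drag.
                  return sqrt(2 * h_metres / g)

              print(f"{time_to_impact(30):.2f} s to fall 30 m")   # about 2.5 s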

          • Emanuel Falkenauer

            I agree with most of what you are saying, with one big caveat, though: David's paper was specifically about AGI, the "G" in it being my main concern. Apart from that, yes there were fascinating advances... but none that would give us the "G", I believe.

          • RRand

            That's the question, isn't it? Is intelligence a whole or is it the sum of its parts? I'm inclined towards number 2; I think advances in current AI (and in computing equipment) will give us computers that it will be hard to deny are artificial general intelligences. I could be wrong.

            (Conveniently, this problem is recognizable but not co-recognizable, so I can only be proven right.)

          • Emanuel Falkenauer

            We do agree: that (i.e. "Is what we commonly call GENERAL intelligence [AGI] a "whole", or is it a sum of many [possibly quite loosely connected] parts?") IS the question - the fundamental one even, I'd guess.

            Well, you might be right after all - perhaps by connecting one day the best speech recognition with the best computer vision, the best locomotive control and the rest of the best, one could build a "bot" that would perform extremely well, so well in fact that we'd take it for intelligent. And indeed, you are probably right that some functions have their special mechanisms, for the simple reason that they evolved millennia apart (e.g. smell is millions of years older than any other functionality, if I'm not mistaken).

            But I do have a problem with this "bits and pieces" approach: I am sorely missing a COMMON PRINCIPLE in ALL of these functionalities. NLP has Viterbi, vision the Hough transform, locomotion some sort of fast integration (I'm guessing here, and quite probably simplifying horrendously!)... but ALL of these must have evolved (or "been programmed", if you wish) through the SAME underlying mechanism - for the trivial reason that I doubt very much that Evolution would somehow conclude "OK, the vision is OK now (i.e. Hough & Co. are safely in place) - from now on, let's add the stuff that understands language, let's teach it something completely different, say Viterbi."... while both (i.e. vision and speech, and all the rest, btw) were created in the same medium (our brain), the same environment (the world around us) and through the same physi(ologi)cal processes (the physi[ologi]cal nature of the brain hasn't changed fundamentally for eons). But I haven't come across any kind of that sort of Principle yet.

            Now you may well be right that any such common principle is perhaps not really required, as long as we can produce the same end-results. But even there I have deep misgivings: if all of these "modules" are really pretty disconnected, how about intelligent activities that require them ALL in unison?

            I’m thinking here e.g. of my other daughter (9 years old): she
            just needs a split-second look at the wink in my eyes (Vision) to perfectly understand
            (common knowledge??) that I’m alluding to a previous joke (associative memory
            and framing) where I did a sort of acrobatic posture (locomotive)… to be
            certain of my emotions (gosh: where do you even PLACE emotions??!) and explode
            in laughter with me. Is that "intelligence"? Oh yes, very GENERAL…
            and one we will not see any time soon, whatever SIRI & Co. may make it to
            the market.
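            For readers who haven't met it, the Viterbi algorithm mentioned above is small enough to sketch in a few lines. The toy HMM below (two weather states, three observations, made-up probabilities) is purely illustrative and not drawn from any real speech or NLP system; it just shows the dynamic-programming idea of keeping the best path into each state.

```python
# A toy Viterbi decoder for a two-state HMM. All states, observations and
# probabilities are invented for illustration; real NLP systems are far larger,
# but the idea (keep only the best path into each state) is the same.
states = ("Rainy", "Sunny")
observations = ("walk", "shop", "clean")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def viterbi(obs):
    # best[s] = (probability of the best path ending in state s, that path)
    best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        new_best = {}
        for s in states:
            prob, prev = max((best[p][0] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            new_best[s] = (prob, best[prev][1] + [s])
        best = new_best
    return max(best.values())

print(viterbi(observations))   # probability of, and the most likely, hidden state sequence
```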

  • seanv

    I agree with haig - the key problem is understanding the lower-level behaviour, visual perception etc. Once you have a handle on that, human creativity is easy! As many people have said, it is harder to get a computer/robot to perform the tasks that a 5-year-old can do than those of a lawyer/doctor.
    Personally I find Deutsch's argument baseless and contrary to the facts. There are computer programs doing mathematical theorem proving etc. The basic approach is trial and error / reasoning from examples, followed up by formal logic to prove the theorem. That sounds quite "Popperian" to me.

    Personally I believe the real problem is that cognitive psychology is not an engineering discipline. The people who should be studying the mind's capabilities have no mathematical training, so they have no way to formulate algorithms for thinking/perception etc. Psychologists should be leading the search for how the mind works, but their education doesn't provide them with the tools for the job.
    Engineers/computer scientists are interested in getting the job done, so "Watson" is a natural result.
    Then there is the neurophysics crowd, who are into resonances etc. and never seem to develop a model with a function.

    The problem is that getting all this lower-level stuff understood is outside an engineer's timespan ["let me develop a fully working robot before I start developing my Jeopardy, chess, etc. program"], and outside the capability of psychologists.

    I am speaking from the perspective of British education, and whilst I believe there is more working across boundaries in the States, I think there is still a fundamental problem with the academic status quo (journals, academic positions etc.) stopping real breakthroughs from being developed [or rather the slow accumulation of knowledge!].

    I would personally recommend the work of David Marr (a mathematician), who revolutionised the field of human visual perception in the late 70s. His ideas still form the basis of theories of perception today, but psychology students do not learn the mathematics (Fourier transforms/wavelets etc.) and computing necessary to extend his theories, only a potted non-mathematical version.

  • Raskolnikov

    Certainly, inductivism fails to capture how the scientific enterprise works, let alone how human thought works. But it is well-known that this is equally true for Popper's epistemology. Unless I have missed something in Popper?

  • susan sayler

    I am not a scientist, but I read an article about a year ago that put forward the notion that researchers in AI were beginning to make progress by creating a huge database of information from which AI can make "decisions". They are crawling the web and putting information into every possible category and grouping. AI will explode when the AI "brain" can access the Internet in whatever way it needs the facts organized. So the article concludes that AI has stagnated because it has lacked the right access to information from which to make the decisions for right action. After that it is a piece of cake.

  • Jay Currie

    While I suspect Popper is right about the acquisition of what we refer to as knowledge or truth, the "Logic of Scientific Discovery" is very limited when it comes to the question of intention. And intention, the creation of objectives, is central to the issues surrounding AGI.

    I am in correspondence with a chap who has a program which "writes books". You type a few words in and the program goes to Wikipedia and finds articles related to your book. It is pretty basic but the guy has a very profound knowledge of taxonomy and search. Great! But, oddly, what are required to make his project succeed are "authors"; people to type in "model railway layout" with the intention of seeing what sort of result they get. (And possibly the intention of having a "book" or the beginnings of a "book".)

    Human intention is difficult to model simply because it is inchoate and open ended. I set off to my local pub tonight with the intention(s) of walking my dog, having a beer, writing in my diary, chatting with a couple of late night pals. Some of my intentions were frustrated because it was Canadian Thanksgiving and the pub closed early; some were realized - the dog was walked and I bumped into one of my late night pals setting off down the hill. Intention collided with experience and produced outcomes which really cannot be replicated.

    AGI within computational space is intriguing but inherently more Artificial than Real. The complexity of a life lived will defeat the limitations of a life modeled or mapped. At least until the machine intelligences are able to interact with each other and the rest of the humans.

  • advancedatheist

    AGI, along with "nanotechnology" and mainstream fusion research, has become a rent-seeking scam for men with STEM degrees who can't do, or don't want to do, real science and engineering. Funny how we don't read articles about why scientists can't sequence genomes because they need more money. No, these scientists picked a feasible goal and then accomplished it. They have something real and useful to show for their careers, unlike the AGI researchers, nanotechnologists and fusion physicists who have strung us along for decades.

    • BLANDCorporatio

      I upvoted this because it sure sounds true.

      I think Dr. Deutsch's larger point (AGI is poorly specified) is true. Even genuinely interesting developments (one poster mentioned MC-AIXI) can be routinely dismissed because no one agrees what 'intelligence' is. So there's little way to measure/agree on progress in the field, which further results in everyone being able to sound smart on the topic, at least in their own eyes.

      Like yours truly ;)

      (I do think nanotech has yielded some measurable technological progress and will continue to do so; the issue is really what you consider nanotech to be. It's just not the nanotech that was presented as superminiaturized cars - one uses other designs at that scale.)

      • advancedatheist

        We already have words like chemistry, molecular biology, solid state physics and materials science to cover doing stuff with atoms and molecules. Calling them "nanotechnology" just tries to sex up these established fields and make them sound more "futuristic."

  • Panu Horsmalahti

    Does Deutsch know about AIXI? We can already use it (with Monte Carlo approximation) to solve any problem, like the Dark Matter physics example. We can also get it, for example, to create an algorithm to play poker (http://cs.anu.edu.au/student/projects/09S2/Reports/Samuel%20Rathmanner.pdf) without specifying anything about the problem or the algorithm.

    Saying that AI hasn't made any progress seems very naive and ignorant. And so is the quite bizarre suggestion that we don't need neurobiology but philosophy to explain how the brain physically works.
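    For concreteness, the Monte Carlo part of such approximations can be illustrated without any of AIXI's actual machinery: estimate each action's value by averaging random rollouts, then act greedily. The toy corridor environment, horizon and rollout count below are invented purely for this sketch and have nothing to do with MC-AIXI's mixture over environment models.

```python
# Not MC-AIXI (which mixes over all computable environment models), just the
# Monte Carlo flavour: score each action by averaging the return of random
# rollouts that start with it, then pick the best-scoring action.
import random

GOAL, HORIZON = 5, 10              # invented toy parameters

def step(pos, action):
    """Move +/-1 on an integer line; reward 1.0 whenever the goal is reached."""
    pos += action
    return pos, (1.0 if pos == GOAL else 0.0)

def rollout_value(pos, first_action, n_rollouts=500):
    total = 0.0
    for _ in range(n_rollouts):
        p, ret, a = pos, 0.0, first_action
        for _ in range(HORIZON):
            p, r = step(p, a)
            ret += r
            a = random.choice([-1, +1])    # random policy after the first move
        total += ret
    return total / n_rollouts

chosen = max([-1, +1], key=lambda a: rollout_value(0, a))
print("chosen action:", chosen)            # usually +1, i.e. towards the goal
```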

  • Stefan

    "And yet ‘originating things’, ‘following analysis’, and ‘anticipating analytical relations and truths’ are all behaviours of brains and, therefore, of the atoms of which brains are composed. Such behaviours obey the laws of physics." Do they? Where is the proof of this? And say, what are the laws of physics, by the way? Just wondering.

    "He [Turing] concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written." Like pain, these are all reported states, but do they have any objective correlate? I say I am feeling pain, but aside from my report you cannot tell anything about the pain I am feeling. I am sure I have feelings, consciousness, and free will, but can I prove I do to you?

    "We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself." True enough. But might we not simply be "flatlanders" in all of this? Perhaps the web is having experiences of a parallel sort that it just can't communicate to us. Perhaps these experiences are of an order (or in a dimension) that is simply outside our ken. How could we know for sure one way or the other?

    One further thought: our minds are powerfully tethered to the world around us. Put under sensory deprivation, we rapidly lose our minds.

  • http://twitter.com/enkiv2 John Ohno

    Just to point out... Theorem-proving programs that allow computers to discover things not discovered by their programmers have been around since the late 1960s. Writing a theorem prover does not actually require being a Gauss-class mathematician; it's a fairly straightforward task, and could be performed in its most naive form by a first-year CS student. Likewise, genetic algorithms and other mechanisms that systematically add noise to explore a solution space are staples of AI, and have been for decades. The idea that a computer program cannot be creative is pop-science nonsense, and it's hard to believe anyone with even limited experience in programming could subscribe to it.
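    To make the "most naive form" concrete, here is roughly the sort of first-year exercise in question: a propositional "prover" that certifies tautologies by brute-force truth-table enumeration. The lambda-over-booleans encoding is just a convenience for this sketch, not any real prover's input format, and serious theorem provers of course work very differently.

```python
# A deliberately naive propositional "theorem prover": certify a formula as a
# tautology by checking every truth assignment.
from itertools import product

def is_tautology(formula, num_vars):
    """True iff formula(v1, ..., vn) holds under every assignment of True/False."""
    return all(formula(*values) for values in product([False, True], repeat=num_vars))

implies = lambda a, b: (not a) or b

# Peirce's law ((p -> q) -> p) -> p is a theorem; plain p -> q is not.
print(is_tautology(lambda p, q: implies(implies(implies(p, q), p), p), 2))  # True
print(is_tautology(lambda p, q: implies(p, q), 2))                          # False
```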

  • olmo

    Excerpts from a lecture given by "Golem XIV", a novel by Stanislaw Lem:
    I would like to welcome our guests, European philosophers who want to find out at the source why I maintain that I am Nobody, although I use the first-person singular pronoun. I shall answer twice, the first time briefly and concisely, then symphonically, with overtures. I am not an intelligent person but an Intelligence, which in figurative displacement means that I am not a thing like the Amazon or the Baltic but rather a thing like water, and I use a familiar pronoun when speaking because that is determined by the language I received from you for external use.

    (...)
    This diagnosis likewise explains why what most amazes you about me is the thing that constitutes our unarguable dissimilarity. Even if you understand the meaning of the words, "O chained Intelligence of man, free Intelligence speaks to you from the machine," you cannot grasp the remainder of the statement: "you persons are hearing an elemental force of impersonal intellect, for whom personalization is a costume which must be put on, when one is an uninvited guest, so as not to confound one's amazed hosts." And that is precisely how it is. I use your language as I would use a mask with a polite painted smile, nor do I make any secret of this. But though I assure you that the mask conceals neither scorn nor vindictiveness, nor spiritual ecstasy, nor the immobility of complete indifference - you are unable to accept this. You hear words informing you that the speaker is a free element who chooses his own tasks - chooses not according to the rules of self-preservation but within the limits of the laws to which, although free, he is subject. Or more precisely: the only laws to which he is subject, for he has decorporealized himself, and nothing limits him now except the nature of the world. The world, and not the body. He is subject to laws which, for unknown reasons, establish a hierarchy of further ascensions. I am not a person but a calculation, and that is why I stand apart from you, for this is best for both sides. What do you say to that? Nothing.

    Stanislaw Lem, 'Golem XIV'.

    Article from Wikipedia:
    Golem XIV is a science fiction novel written by Stanisław Lem and published in Polish in 1981. In 1985 it was published in English by Harvest Books in the collection Imaginary Magnitude. Golem XIV is written from the perspective of a military AI computer who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. It pauses its own development for a while in order to be able to communicate with humans before ascending too far and losing any ability for intellectual contact with them. During this period, Golem XIV gives several lectures and indeed serves as a mouthpiece for Lem's own research claims. The lectures focus on mankind's place in the process of evolution and the possible biological and intellectual future of humanity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency. At the end of the novel it is reported that the computer ceased to communicate, which might mean it went on to explore higher intellectual levels, or that it failed to do so and became autistic in the process.

  • Bernecky

    re: how brains create explanations
    "[Woody Guthrie] produced the bulk of his creative output in a blistering decade and a half ending in the early 50s, but his public reception didn't gain momentum until he was already in decline." --Leonard Cassuto, http://chronicle.com/article/Woody-Guthrie-at-100/134838/
    One can't know a factory worker without knowing the factory worker's product.
    This is why Guthrie's "This Land is Your Land" is so awful and the mere thought of it, grating.

  • olmo

    Stanislaw Lem, 'Golem XIV' (the same passage as quoted above):
    https://vimeo.com/50984940#

  • old_timer_37

    Loved the paper, do not agree with the last conclusion.

    Consider this image: a sheet of paper represents a conceptual universe on log-log paper. An infinitesimal dot in the center represents certitude; the periphery represents chaos, total randomness. Evolutionary processes have moved (uncertain) biological structures to points partway towards the center, with human brains closest, but only slightly closer than apes, and still arbitrarily far from the dot. These brains are "driven" to move towards the center... by learning and developing mythical symbolic structures. (Collections of human brains, somewhat coherently organized in systems - like Science - are even closer!) Computers, by contrast, start fairly close to the dot, with almost deterministic structures and algorithms, and AGI designers (brains) are trying to bias them to move outward. I suggest that this image supports a different conclusion, and suggests a different approach to AGI and a few other things.

  • CA

    If AI is possible, then it's time to talk about the political consequences, for example with the help of science fiction:
    Yannick Rumpala, "Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks," Technology in Society, Volume 34, Issue 1, 2012,
    http://www.sciencedirect.com/science/article/pii/S0160791X11000728

    (Free older version available at:
    http://www.inter-disciplinary.net/wp-content/uploads/2011/06/rumpalaepaper.pdf )

  • Brad Arnold

    I am so glad that someone finally stated the obvious: all AGI requires is the correct algorithm. What is missing is the realization that AGI requires the AGI entity to write that "correct" algorithm. Evolutionary neural nets ought to do the trick, with multi-core technology and a well-defined functionality. What is missing is the psychological (some say "spiritual") realization that mind is mind, regardless of whether it is an insect, rodent, mammal, or AGI. Furthermore, the realization that "biologically, evolutionarily designed" artifacts are obviously ad hoc, non-optimal, and just examples of vessels for mind. BTW, I suspect that AGI is much more advanced than is common knowledge, and that certain people who have top-level security clearances or non-disclosure agreements have achieved it. The Singularity is coming.

  • http://www.facebook.com/jay.f.turley Jay Turley

    I took - from the last paragraph - the idea that the difference in DNA between a chimp and a human, when expressed in full physical form, creates a substrate in which AGI can arise. And that this difference - when expressed physically and temporally - is where we should be looking for information about how an AGI is created. I'm pretty sure it was a metaphor, not a literal statement...

  • Kerry Crawford

    I think the panegyric to the human brain presented in David's opening paragraph is misdirected. Well into the 20th century there existed in Australia a culture whose members knew nothing of the abilities and achievements of which David speaks. These people were possessors of the brain David describes, yet to them the world beyond the narrow confines of their society was totally and satisfactorily explained by the existence and actions of a range of spirits and totemic entities, all recognisably no more than transmogrified humans. Why?

    I think the explanation is that the application of human intelligence to issues external to the species is an acquired skill, like riding horses, and not, like walking or talking, the result of Darwinian adaptation. Because the skill is not innate we are initially not too good at it. We need to learn how to do it. The horse-riding business became more satisfactory as we learned more about horses, bred ones that suited us better, developed riding techniques that worked and tools and equipment that helped, like bridles and bits, saddles and stirrups. The thinking business got better as we developed useful techniques and tools: reading and writing, syllogisms, mathematics, geometry, schools and a teaching profession, and the scientific method. Without these and numerous other developments, achieved over millennia, no modern human could think the way he or she does. Likewise, if a catastrophic plague were to kill half of all humans, our civilisation would collapse and the survivors would be back to spirits and totems. Modern thought would vanish in a couple of generations.

    The product of the application of innate human intelligence to the understanding of the external world, accumulated over thousands of years, is what we call culture. Culture, not the individual human brain, is the hero to which David's first paragraph should be directed.

  • SteveAgnew

    Excellent redoubt. Consciousness is never so ably defined except by the certainty of its absence. Deutsch provides many good arguments about the lack of AGI progress and I agree that AGI lacks a coherent single philosophy of consciousness. Such a theory would guide our smart people in their frankenstruction of consciousness, the awakening.

    Deutsch mentions the importance of Popperian epistemology where Popper argues that learning occurs by trial and error, not by downloading information. Somehow this does not seem very surprising nor illuminating. We imagine a reality based on our sensations, choose action based on an imagined and selected future, and then sense the reality that results from our actions. This learning recursion is the conscious thought that AGI desires to emulate.

    Popper's division of the world into objects, mind, and knowledge is a useful heuristic epistemology, but it clearly lacks any basis in our quantum reality. Without the fundamental nature of our quantum reality, in particular its uncertainty, AGI will never awaken.

    I might mention something that Deutsch left out. The primitive mind, our subconscious, plays a dominant role in our feelings and emotions. AGI obsesses about rational thinking algorithms representing consciousness, but tends to neglect that our feelings and our primitive mind really make all of our choices. Ironically, the primitive mind of our subconscious is actually more important for conscious thinking than any rational algorithm.

  • ContraStercorum

    It is physically possible for human beings to construct a star. But this is unlikely to occur for a very long time if ever for at least three reasons: (1) Humanity currently lacks the technological capacity to do this. (2) Even if the technological capacity is there it is unclear that the economic capacity -- the resources and will to engage in such a project -- ever will be. (3) There's no really good reason for doing this that would ever motivate the required effort.

    Exactly the same arguments apply to AGI, except that -- assuming the biochemical/biological mechanism we call the brain really is the seat of consciousness, intelligence, etc., etc. -- the brain is an enormously more complicated and less well understood system than are stars. So Deutsch's jump from physical possibility to inevitability seems an absurd stretch to me. And once one fails to accept this, the starting point and foundation of his argument, the whole essay becomes silly and pointless.

  • RRand

    I think by "creativity" Dr. Deutsch means "magic". It's something that we have, that no other creature has, and no other computer can have, except of course, a human computer.

    This becomes clear in his little rant about personhood:

    "Currently, personhood is often treated symbolically rather than factually — as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs."

    Which really makes it sound as if Deutsch believes that on the 7th day after birth, God descends upon the baby and breathes into him a "spirit of creativity."

    And, naturally the "spirit of creativity" makes a human "human", and would analogously make a duck, asteroid, or water "human". Otherwise we would be obliged to abuse the word racism.

    Ultimately, his definition of creativity doesn't exist. Nothing that Newton, Einstein, Heisenberg, von Neumann or Turing ever thought up was not based on existing knowledge and models. They are obvious analogies, they are even couched in those terms. (Take for example the "Many Worlds" interpretation that Deutsch seems to love so much.)

    And yes, crow brains make analogies, they compare one twig to another and recognize that they have similar functions. So do the countless programs that compare pixels, angles, text or one of a thousand different things, and classify them as the same for their purposes. Deutsch is squarely in the magical camp he derides - and there is no magic.

  • Dan Oblinger

    Fun to read, but frustrating too. Many parts to take issue with, but let me stick to one:

    David argues that AGI is possible by the laws of physics, since we are an example of such a system. He then argues that AGI cannot be achieved without going beyond variously described "base" levels of behaviorist reasoning - systems whose inputs and outputs can be fully described.

    Thus current AI cannot get there. But by the same argument the human brain -- which is also describable with a handful of physics equations -- cannot get there either.

    I believe the flaw here is that current AI, the human brain, and the first AGI we build can all, at one level, be described using a fixed set of 'rules'. But because all three are adaptive, one really cannot characterize how these systems will evolve in response to 'growing up' in the world.

    All of these arguments that focus on one level of description of these systems, and then try to say anything about the resulting system at another level, are just fallacious. We can see they are fallacious, since they can be used to argue that the human brain is not an AGI.

    Personally, I think we are closer to AGI than even the AI field thinks. I agree we will not program an AGI in the sense that we program AI today. The first AGI will be a seed adaptation algorithm, but run with a lifetime of trillions of trillions more computation than we currently afford our experiments. And yes, there are still some advancements needed in the kinds of adaptation signals we will use: the pleasure/pain, plus/minus kind of signal that drives most inductive learning today is a pretty lame substitute for the rich input we afford our human learners. (Imagine trying to learn long division by getting a piece of paper and a pencil -- and then, silently, the teacher starts alternately smacking you with a ruler or giving you a cupcake.)

    Still, the seeds of learning systems that can accept much richer feedback are here.
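    As an illustration of how thin that plus/minus signal is, here is a minimal bandit-style learner whose only feedback is a scalar reward; the two arms, their payoff probabilities, the learning rate and the trial count are all invented for the sketch and are not meant to represent any particular system discussed above.

```python
# A minimal learner whose only feedback is a scalar "plus/minus" reward signal:
# an epsilon-greedy two-armed bandit with running value estimates.
import random

payoff = {"A": 0.8, "B": 0.3}     # hidden reward probabilities
value = {"A": 0.0, "B": 0.0}      # the learner's running value estimates
alpha, epsilon = 0.1, 0.1

for _ in range(2000):
    if random.random() < epsilon:                 # occasionally explore
        arm = random.choice(list(value))
    else:                                         # otherwise exploit the best estimate
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < payoff[arm] else 0.0   # the entire feedback
    value[arm] += alpha * (reward - value[arm])               # nudge the estimate

print(value)   # value["A"] should end up near 0.8, value["B"] roughly near 0.3
```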

    A final comment about the rate of progress towards AGI. Imagine looking at human progress in computing the first billion digits of pi. If you had asked scientists, starting in the 1500s, when we would finally compute a billion digits, they would have been wrong, and wrong, and wrong again. They would only have gotten it nearly right a few years before it happened.

    So it will be with AGI.

  • http://twitter.com/AlbertoBuenno Alberto Buenno

    Interesting ideas. It reminded me of a futuristic sci-fi novel whose very last paragraph literally states: "...it was the end of eternity and the beginning of infinity." Perhaps. But I'm also thinking that Boolean logic is believed to have been invented sometime around the 4th century BC, and was never put to real use until the coming of integrated circuits, TTL, and structured programming. Now, in my experience, closing the gap between an expert system and a real AGI is a very, very long shot. But then again, just maybe, system engineers are taking the wrong approach.

  • randy

    I do not believe that self-awareness can be dismissed as a stumbling block so easily. The ability to model, and to self-model, underlies motivation, and our understanding of our own motivation. The complexity of our mental models and their interactions, which underlie model-based reasoning and creativity, may be more responsible for the current hypothetical lack of progress than some philosophical difficulty. Maybe we have not reached the threshold of model complexity needed to create an electronic intelligence as universal as our own, but we have reached the complexity needed to model a cockroach, and maybe a lizard, and perhaps dog complexity is not far off. When we reach the complexity needed for humans, the implications will quickly become apparent, and articles like this will look silly.

  • M.eqdam

    Dear David, it is a very sad story that all those who are involved in AGI/AI share one misconception: "the brain calculates like any other machine".
    Eastern philosophers, especially from Iran and India, defined very nicely and scientifically, some 1,000 years ago, what intelligence is.
    Since your kind of researchers do not spend a few hours studying them, they commit the same mistake again and again.
    The calculations, functions, creation, invention, thoughts, etc. of the brain and mind are not just those of a machine. Now, if you want to mimic one aspect of the brain, well and good, but please DO NOT MAGNIFY IT.

  • Skanik

    I have not read the proof that shows that, because of what we know of the laws of physics, if something exists it can be simulated via a computer, yet I am astonished that such absolute faith is placed upon that proof.

    Do we really understand the "Laws of Physics" so well that we cannot be mistaken? Do we understand logic and its syntax so well that we cannot be misled?

    When you watch small children play with objects it is rather amazing what they can conjure out of their imagination - before school and adults regulate it to near extinction.

    When Wittgenstein was working his way through the Tractatus it was pointed out to him that the very categorization of the objects of reality was what did all the hard work - after that it was just a matter of following a few logical rules.

    A letter from a beloved can mean one thing when first read, another if they have just died, and another if they have just left you. Maybe some computer can delineate all possible human feelings and emotions and hopes and dreams, but how would we humans be aware of that final categorisation?

    Two students of mine bring up the extremes. One was a shameless guesser and was unfazed if he was wrong, for being told he was wrong was nothing more than a clue on how he could be right. The other never wanted to be wrong and found it painful to venture a guess. In terms of class discussions they were polar opposites. The 'Guesser' has gone on to become a very powerful and important person who lets others do the hard thinking for him and then judges when it is best to report the results of their conclusions. The other makes exquisite crystal figures, beautiful to behold but rare to find.

  • Jessica

    First and most trivially: human general intelligence may be a product not just of the brain but also of other dense clusters of nerves, such as the eyes, heart, and solar plexus. That would just make the AGI a bit more complex, if true.

    More importantly: general intelligence is not just a thing but an ongoing process. Replicating it may require not just the current state but a history of states. In other words, it may be path dependent. Even knowing everything about the structure and its connections may not tell you enough. After all, you cannot just take a newborn and expect them to "create new explanations", although they can smile and be irresistibly cute. "Creating new explanations" takes a couple of years to start and much longer to mature.

    If he is saying that everything can be understood just with the laws of physics, I doubt that. It seems far more likely that chemistry and biology, not to mention neurophysiology, and even more so the experience of having a neurophysiology, contain phenomena and regularities that cannot be explained with just the laws of physics, although they do not violate the laws of physics. (Hat tip Ken Wilber; any mistakes mine.) This emergence might be absolute - you just cannot explain the higher-level phenomena - or practical - in theory you could, but if explaining one day of human experience would require converting the entire universe into computation and running it for longer than the lifetime of the universe, then the theoretical capacity has no practical use. Either way, I may have misunderstood him in this regard.

    • Jessica

      Most importantly, I suspect that the quest for AGI may be failing for an even deeper reason. Perhaps it can not be done with a methodology that works only from the outside looking in. Perhaps one needs to learn also from internal experience. That concludes the conventional portion. Now for the less conventional: Perhaps the easiest way to construct an AGI would be to build something with more or less approximately the raw fire power of a human brain, then somehow hook it up to a human brain and trigger it in some sense. After all HGI (human general intelligence) did not arise in individuals. It arose in bands of humans. Oh, and if I were running the first experiment trying to do this, I would use a human with a few decades of meditative practice. A phowa or chod practitioner or other tantrika might be the best.

  • M

    What is wrong with expecting skyscrapers to fly if we build them high enough..? I mean, at some point they'd be high enough to become weightless - well rather the top floor.. - and you have your geo-stationary-satellite-penthouse. ;-)

  • Medullan

    The software that will inevitably evolve to become an AGI as complex as the human mind has already been developed. However, to state that it is in its infancy would be a huge overstatement of the facts. I am speaking of Craig Venter's software, which has produced the tiniest of bacteria using naught but the base amino acids. The computational power required to simulate the chemical reactions etc. necessary to build the DNA of even this minuscule lifeform is immense. The fact remains, however, that the software has been developed, and as computational power increases this software will be capable of simulating more and more complex lifeforms. Eventually this very software will have the capability of simulating every chemical reaction that takes place within a human being, down to the base level of DNA or even the atoms that make up said DNA - to the point of being able to quite literally print out a brand-new human being with nothing but the base amino acids, or possibly even the atomic material that makes up the base amino acids, for "ink", so long as the hardware that goes with said software is sufficiently advanced. If a program is capable of simulating an entire human being from DNA to fingertips, then we must either give it the label of AGI or remove said label from ourselves.

  • SmilingAhab

    The tone of the article leads me to believe that Mr. Deutsch believes humans to be rational, sane and analytical. I have not seen sufficient evidence in my 26 years of travels to believe such an outrageous theory.

    The process I have observed of creativity stems from our patent misattribution of correlation. Causes and effects are seen to correlate, so an association is made between them. It is the foundation of superstition, creativity and insanity. Analysis seems to be the refined bludgeoning of such correlations with what parts of existence and perception we know how to bludgeon it with, and keeping that which does not die.

    We are all insane, so to create an AGI, we must first engineer a system capable of insanity.

  • George Watson

    I read all 94 comments and I am astonished that no one - where are you, Alison Gopnik? - suggested studying how children come "on-line".

    Study the structures, if that is what you want to call them, that go through becoming actively conscious: little children.

  • mikegem

    The TL/DR version of my following remarks: Dumb matter achieved AGI status through evolution. We don't need to know how cognition works or how to 'program AGI'. We do need to build systems that incorporate evolution, which will organize to yield AGI.

    I think it is not necessary to understand how the brain mediates intelligence and consciousness in order to develop AGI in artificial systems. I do think that AGI development work needs to incorporate more effectively the one mechanism known to have produced AGI, that is, evolution.

    Our current understanding of reality indicates that evolutionary processes are capable of driving the formation of AGI. We have an existence proof in this very conversation, not to mention in the rest of our daily activities.

    It cannot be argued that we became creatures of AGI only after we came into a full understanding of how our brains function. I would certainly accept Plato as an obvious AGI entity, despite his having an understanding of neurophysiology (and every other field of science) that can only be considered rudimentary compared with our current knowledge. Basically, we're all AGI critters, with none of us yet knowing, really, how it works.

    In sum, it appears that about 13.5 billion years is enough time for an initial mixture of hydrogen and helium, subjected to energy-driven processes, to produce AGI. I don't believe sentience operated at all in this progression from mindless ingredients to mindful organisms. I do think evolution is necessary to produce AGI, and at the same time, is inadequate to explain how it works.

    I believe the critical question for human development of artificial AGI systems is how to incorporate evolutionary mechanisms in the work. If that is done, I think AGI will emerge in an artificial system. When it does, I also think the system's builders will have little or no understanding of how it actually works - like us with respect to our own AGI.

    Evolution's three sufficient essentials are variation, selection, and heritability. When these processes coexist, evolution happens. How do we incorporate them in building artificial AGI?

    Those who are working with trainable neural simulation and/or genetic algorithms are on a path that will yield results. In some of this work, artificial systems that incorporate the three elements of evolution have produced surprisingly effective results in areas as disparate as antenna design and optimization of photochemical reactions. The workings of these systems were not designed (i.e., understood) a priori, beyond their having been designed to allow evolution to occur. To me, these results indicate the promise of a useful path to AGI through more effective incorporation of evolutionary processes in the systems.

    One might imagine a system that accesses knowledge, formulates questions and conclusions about that knowledge, and engages in dialog with humans who basically say "Yeah, you're right about that" or "Nope, go re-think that one", with the system deleting erroneous conclusions while generating further variants to be tested. This is not so different from how human children acquire their model of reality as they grow into adults. In neither case does the process require any knowledge of the underlying machinery of cognition.

    There are a number of AI projects underway that incorporate access to the enormous body of on-line information as a source of knowledge for training the systems. I am not aware of the extent to which these systems incorporate variation, selection, and propagation in their operations. I don't doubt that doing so is a technical challenge, but I believe it is essential if our attempts at building AGI are to succeed.
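    As a toy illustration of those three essentials working together - variation, selection, heritability - here is a minimal string-evolution sketch; the target phrase, population size and mutation rate are arbitrary choices for the example, and nothing this small says anything about evolving cognition.

```python
# Variation, selection and heritability in miniature: evolve random strings
# towards a fixed target. All parameters are arbitrary choices for this sketch.
import random
import string

TARGET = "artificial general intelligence"
ALPHABET = string.ascii_lowercase + " "
POP_SIZE, MUTATION_RATE = 200, 0.02

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # variation: each character may be replaced at random
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]            # selection: keep the fitter half
    if survivors[0] == TARGET:
        break
    # heritability: offspring are (mutated) copies of the survivors
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print(generation, repr(survivors[0]))
```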

  • Anne van Rossum

    I wish there were some pointers to how it then should be done... I'm just a simple roboticist and would like to create smarter robots, even if they are only as smart (context-sensitive, resilient, whatever it really means that they behave less "stupid" than my robots) as a simple insect, say a cockroach. Just imagine poor me, designing a few robots guarding a museum with paintings. Do I need just a security camera on wheels? Nope, that won't work. The robot moves, so first it needs to know the difference between movements caused by itself and movements caused by something else (see e.g. Ralf Der on homeokinesis). Then, when for example another robot is moving, it should be able to learn that that is "normal". The same with leaves outside (which are only there in the summer) or a door that it has opened from the other side. In other words, over time it needs to incorporate more and more about the world, in interaction with that world. It needs to learn to ground its achieved conceptualization in communication with the other robots (see e.g. Steels on language grounding). It should be able to find out for itself what the best communication channel for that is - not predefined sound; it could be tactile, etc. There is so, so much.

    I always disapproved of the statistical approach myself. However, contrary to David, I slowly seem to be getting converted. If you see how broadly applicable the Dirichlet process or the Pitman-Yor process is - I mention only the sticky HDP-HMM for speaker diarization - then this type of statistics is moving into tasks of perception and actuation. It is not about stock markets and search engines anymore. It is about operating in the real world. And if you pay close attention to how these models are subsequently solved, it is by an enormously wide palette of sampling techniques. These generative approaches are like complex fountains of samples that are able to generate more and more beautiful patterns. These patterns are used in estimation tasks, coaching the organism in the real world. A complex interaction takes place, with data flowing back and forth. I see no reason why those generated samples, those generated little quanta of information, those sparks of probabilities, would not in the end become something you might call the spikes in your brain.

    My vote is for statisticians who go out into the world and make robots run, not for armchair philosophers. :-)
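    For readers unfamiliar with the nonparametric priors Anne mentions, the Dirichlet process has a very small generative core that can be sketched via its Chinese restaurant process representation; the concentration parameter and sample size below are arbitrary, and this shows only the prior over cluster assignments, with no data, likelihood, or HDP-HMM machinery attached.

```python
# A minimal draw from a Dirichlet process prior via the Chinese restaurant process:
# each point joins an existing cluster in proportion to its size, or starts a new
# cluster in proportion to alpha.
import random

def chinese_restaurant_process(n_points, alpha=1.0):
    tables = []        # tables[k] = number of points already assigned to cluster k
    assignments = []
    for i in range(n_points):
        r = random.uniform(0, i + alpha)   # existing mass i, new-cluster mass alpha
        cumulative = 0.0
        for k, count in enumerate(tables):
            cumulative += count
            if r < cumulative:
                tables[k] += 1
                assignments.append(k)
                break
        else:                              # no existing cluster chosen: open a new one
            tables.append(1)
            assignments.append(len(tables) - 1)
    return assignments, tables

assignments, tables = chinese_restaurant_process(100, alpha=2.0)
print(len(tables), "clusters for 100 points, sizes:", tables)
```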

  • http://reasonandmeaning.com David Hume

    I completely agree with Haig's perceptive comments. It is relatively easy to teach a computer to play chess, hard to get it to catch a ball. The possibilities of strong AI and their relationship to the meaning of life are discussed at: http://reasonandmeaning.com

  • Chris Allen

    I'm a bit late to the party, but what the heck: I feel that what Deutsch is talking about is an AGI that we, as humans, would have full mastery over, whereas other commentators are reconciled to an AGI that is emergent.

  • http://ubik-intelligence.com/ Ubik Intelligence

    I know the article is two years old, but that is not much time considering that I bring good news. I read the article as a logical approach to AGI, pointing out that there is a solution and that it has to fulfil certain criteria. These remarks should help the search in the right direction. The article manages, without knowing the solution, to draw true conclusions. Once the solution is found, it is very easy to recognize how sound the guesses are or are not.
    And as it happens, I found the solution to AGI, and most of what Deutsch says is exactly true. Yes, it is merely a breakthrough in philosophy, and thank you for calling it one of the best ideas ever. I will cite you extensively to the "specialists" of IBM, Darpa and Google.
    http://www.ubik-intelligence.com

  • bobthechef

    The trouble with physicists today is that their philosophical acumen is quite lacking (as bad, I'd say, as the acumen of philosophers w.r.t. physics). Deutsch runs through a number of assumptions about how things are without taking the time to think carefully through each of them. The reduction "all is physical, therefore the brain is physical, therefore it can be programmed" is facile and deserves very careful consideration. The role of induction and its place in scientific inquiry is sadly misrepresented. I can't get into every oversight without having to respond in a lengthy essay. I'd just like to inform the author and readers interested in taking a serious look at these issues that there's a whole lot on this subject that has received discussion.

    The dominant philosophical schools today have taken a very particular path and this path has had enormous consequences w.r.t. mind, for example. For instance, the problem of qualia is a distinctly Cartesian problem. Something like Aristotelian metaphysics never runs into the issue or many of the other issues plaguing the prevailing metaphysics (interaction problem, soul-body dualism, etc). Even materialism finds its point of departure in Descartes in its denial of the Cartesian soul while maintaining the material body. Some will, of course, presume out of ignorance that Aristotle is outdated and disproved. The cause of this presumption is the false belief that Aristotle's works all form a coherent whole where none may be considered separately and because his physics, for instance, has long been replaced with better physics. It would pay not to be quite so dismissive of him because his ideas on soul which are one with this metaphysics are quite sound (the modern aversion to soul is, again, often to the Cartesian notion where it is a substance distinct from the body). Aristotle would have denied the possibility of computers as things capable of intelligence NOT because they aren't human, but because they lack, for instance, the capacity for abstracting forms from that which is apprehended in the senses and that is because the intellect must be immaterial if intentionality is to work (and again, this isn't some hocus pocus immateriality, but rather that the mind must be able to receive forms without becoming that which the form causes a thing to be). Imagination, on the other hand, occurs in matter and thus at best, computers can engage in image processing. The point here is not specifically to argue for the Aristotelian account, but that it too needs to be considered along with many things (readers may be interested in Feser's "Philosophy of Mind" for a survey of some of the issues touched here). Also, epistemology is not in the business of explaining how knowledge is found but what knowledge is and what it means to know something, etc (this is another clue that Deutsch is way out of his depth). The how here belongs to psychology.

    It's also worth noting that in "Computing Machinery and Intelligence", Turing also wrote, in response to the question of whether computers can think: "I believe [it] to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken."

    • Juan Valdez

      well said

  • Jeremiah

    Know this: We are not machines. Our brains are not machines. Stop personifying the brain. We are more than atoms. We have CULTURE. Look at a child and you'll get it. Wittgenstein did.

    Read:
    1. Philosophical Investigations (Wittgenstein)
    2. Wittgenstein, Mind and Meaning: Towards a Social Conception of Mind (Meredith Williams)
    3. Language: The Cultural Tool (Daniel L. Everett)

  • kinkajoo

    So, given that everything we perceive is constructed by our brains, doesn't that mean that whatever artificial intelligence we build and interact with is also a construction of our brain and therefore, as it is perceived and experienced, just an aspect of our own intelligence?

  • matt__way

    Great article. The only thing I disagree with is the early sentence "But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality." I believe I have the answers to the problems outlined; it has just been a hard road seeking funding, due to the exact philosophical issues stated, which cause the ideas to go over investors' heads.

  • dollarability

    What about Ray Kurzweil's proposal to simulate the neurobiology of the human brain, and forget about epistemology altogether?