AI Technology

Alva Noe: Don't Worry About the Singularity, We Can't Even Copy an Amoeba

An anonymous reader writes: "Alva Noe, writer and professor of philosophy at the University of California, Berkeley, isn't worried that we will soon be under the rule of shiny metal overlords. He says that currently we can't produce 'machines that exhibit the agency and awareness of an amoeba.' He writes at NPR: 'One reason I'm not worried about the possibility that we will soon make machines that are smarter than us is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: it's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used "it" the way we use clocks.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward on Sunday November 23, 2014 @09:05PM (#48446397)

    Of course Watson didn't answer questions in Jeopardy. That's not how Jeopardy is played. The contestant ASKS questions, not answers them.

    • by Trogre ( 513942 )

      I thought that too, until I watched a couple of episodes of Jeopardy and realized it's just a regular question/answer game show with "[what|who] is" tossed in front of each answer.

    • Doubly true, we recently stuck a worm's brain in a robot body [i-programmer.info].

      Wowowowow and a worm is way more complicated than an amoeba! Dr. Noe should probably stick to questioning his own existence.

  • by grep -v '.*' * ( 780312 ) on Sunday November 23, 2014 @09:12PM (#48446425)
    Ha! Appropriate /. tagline while reading: Pound for pound, the amoeba is the most vicious animal on earth.
    • Singularity is about as likely as the second coming of Jesus. Put another way, singularity is the nerd's version of Rapture.

      I am a big proponent of self-driving cars. The idea of kicking back and watching a movie during my morning commute is so appealing that I am rooting for Google to go all-out and produce an AI driver ASAP.

      Then a couple of weeks ago I read on Slashdot that even after years of hard work, the Google car can't even see a red traffic light! [slashdot.org] Machine vision is still so poor as to be nonexistent. Goog…

  • by melchoir55 ( 218842 ) on Sunday November 23, 2014 @09:26PM (#48446479)

    I did philosophy myself as an undergraduate, so I don't want to bash our armchair friend here for doing his best. But he is making the classic mistake of making claims about fields he isn't part of: in this case biology, computer science, and cognitive science in general (beyond philosophy).

    Regarding the statement "We used 'it' the way we use clocks":
    He is mistaking agency for being an end unto itself. This isn't true. Agents commonly use other agents as tools. The mere property of "being used" doesn't dictate whether something is sentient, intelligent, an agent, or whatever. Yeah, we used Watson to play Jeopardy!, but that doesn't mean it isn't smart. Watson is actually way "smarter" than any human in certain ways.

    This boils down to what you define as intelligence. In humans, intelligence is a very rough term applied to an enormous pile of features. Processing speed, memory, learning algorithms, response time, and many more features all contribute to what we think of as intelligence. A singularity doesn't need to precisely mirror the way in which a human thinks in order to be a singularity. It just needs to be able to adapt and evolve. I'll be the first to admit we are a long way off from modeling a human consciousness in virtual space. However, existing machine learning and rule based techniques are powerful enough to do some really impressive things (like Watson and Siri). They aren't singularity level, no, but that doesn't make this man's arguments relevant.

    Regarding "we can't produce "...machines that exhibit the agency and awareness of an amoeba":
    The idea that an amoeba displays intelligence in excess of our current ability to simulate is frankly a little ridiculous. Artificial agents are capable of very complex behavior. They can react to abstract constructs which are inferred about their environment. They can anticipate opponents based on statistical probability and thereby win, on average, more often than even *a human being*. An amoeba is closer in behavioral complexity to a simple chemical reaction than it is to a contemporary artificial intelligence.
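
    (To make that "statistical anticipation" claim concrete, here is a minimal Python sketch of an agent that beats a biased opponent by tracking move frequencies; the game and names are illustrative, not any particular system:)

        import random
        from collections import Counter

        # An agent that anticipates its opponent statistically: it counts the
        # opponent's past moves and plays the best response to the most
        # frequent one (rock-paper-scissors as a toy example).
        BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

        class StatisticalAgent:
            def __init__(self):
                self.opponent_history = Counter()

            def act(self):
                if not self.opponent_history:
                    return random.choice(list(BEATS))
                predicted, _ = self.opponent_history.most_common(1)[0]
                return BEATS[predicted]

            def observe(self, opponent_move):
                self.opponent_history[opponent_move] += 1

        agent = StatisticalAgent()
        for move in ["rock", "rock", "paper", "rock"]:   # a biased opponent
            agent.act()
            agent.observe(move)
        print(agent.act())  # counters the modal move "rock" with "paper"

    (Against any opponent with a stable bias, this wins more often than chance, which is all the claim above requires.)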

    • by JBMcB ( 73720 ) on Sunday November 23, 2014 @09:48PM (#48446571)

      The idea that an amoeba displays intelligence in excess of our current ability to simulate is frankly a little ridiculous.

      That quote bothered me, too. We've been simulating simple insects for decades, back when neural networks were clusters of transistors on flip-chips. We're at the point where we can build machines that can learn to move and navigate on their own. There was a Slashdot article a week ago about a fully mapped nematode neural network wired into a robot.

      • by Prune ( 557140 )
        Your simulation is of purely academic interest if it relies on the usual gross oversimplification of the activity of a real neuron. It was only two years ago that we first attempted a simulation of 100 trillion synapses (comparable to a human brain), in a joint IBM and Lawrence Livermore National Laboratory project. That simulation ran on what was in 2012 the top supercomputer in the world, yet it still ran over 1,500 times slower than real time and, worse, was still using quite simplified neuron models…
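
        (The arithmetic behind that slowdown figure, for anyone who wants to check; the 1,500x number is the one quoted above:)

            # At 1,500x slower than real time, each simulated second of
            # "brain time" costs 25 minutes of supercomputer wall time.
            slowdown = 1500
            print(slowdown / 60, "minutes of wall time per simulated second")  # 25.0
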
    • Re: (Score:3, Interesting)

      by Baloroth ( 2370816 )

      Watson is actually way "smarter" than any human in certain ways.

      That has *always* been true of computers. That is, in fact, exactly why we built computers in the first place: to do things faster than humans can. Saying that something is "smarter than any human in certain ways" is meaningless. Hell, in a way, rocks are smarter than humans. After all, if I throw a rock, it "knows" exactly what path it should take (minimizing energy, interacting with the air, etc.). Sure, it'll be nearly a parabola, but it will be perturbed from a parabola by tiny air currents, minute fluctuations…

      • by Monkius ( 3888 )

        underrated, thanks

      • by Boronx ( 228853 )

        What we have that Watson does not is a survival instinct that's been bred into us over a few billion years.

        This could be programmed in, but there's not much reason to at the moment. NASA looked into self-repairing unmanned bases on the moon and other planets. Such an AI would have a survival instinct and would behave as more than a tool.

    • It's not about making machines smarter than us, it's about making machines that replace us in the workforce.

    • by Kjella ( 173770 )

      And still it boils down to us giving it a task and the computer executing it; once it's done, it shuts down. What do you do to build an AI that doesn't have any particular purpose, just a general curiosity about life? What do you do to create a sense of self-awareness, and an AI that doesn't want to be terminated? Computers are incredible at executing the tasks people give them, but a computer doesn't have a self. It doesn't do anything by itself, for itself, because it wants to do it. But since we have no clue what makes us tick…

      • by bsdasym ( 829112 )
        This is a bias that I can't remember the name of right now, but it boils down to a person not believing that people can create a machine that "truly" thinks/feels, because they don't understand what drives those aspects of themselves. The whole argument in the link reduces to the so-called "Chinese Room" [wikipedia.org], which itself is just a version of solipsism that draws the boundary between biology and technology (well, actually chemistry and technology, in Searle's case) rather than between one individual mind and ano…
        • by Boronx ( 228853 )

          I think of it as an unconscious vestige of a belief in a soul. Such people can't really see themselves as a collection of cells, which are a collection of molecules, even if consciously they affirm that belief. They probably still lie awake at night wondering what happens to us when we're dead.

  • by Eric Smith ( 4379 ) on Sunday November 23, 2014 @09:28PM (#48446495) Homepage Journal

    Wrong. We've produced "...machines that exhibit the agency and awareness of..." a worm: http://www.smithsonianmag.com/... [smithsonianmag.com]

  • by Mistakill ( 965922 ) on Sunday November 23, 2014 @09:29PM (#48446503)
    I'm not worried about AI being smarter than us... I'm worried about machines that use the same logic (or ethics) we do...

    Such as the Baghdad Airstrike... http://en.wikipedia.org/wiki/J... [wikipedia.org]

    A machine is only as good as the code behind it, and look at the issues with electronic voting machines, ATMs, and chips on credit cards.
    • by Mr.CRC ( 2330444 )

      You are on to something here. For ex., a project at IBM aims to model humans' emotional incentive/reward structure: https://www.youtube.com/watch?... [youtube.com]

      This is exactly the WRONG thing to do (well, maybe it's safe as long as it's air-gapped from the internet and not made mobile with robotic tool attachments :) because, well, look at us! We would happily wipe groups of each other off the map if we thought we could get away with it, strictly because of our primitive emotional/social/tribal instincts.

      An AI c…

  • What exactly is intelligence?

    At one level, it is the ability to process input, digest it, and generate useful output. In that sense, we created intelligent machines long ago.

    But at another level, the level of "awareness" or "consciousness," we aren't even close.

    On one point I agree with the author: machines aren't about to take over the world. People might do awful things with the machines they create, but it is still people who tell the machines what to do.

    • What exactly is intelligence?

      Computers are useless. They can only give you answers. -Pablo Picasso

  • This is laughable... (Score:4, Interesting)

    by joocemann ( 1273720 ) on Sunday November 23, 2014 @09:33PM (#48446521)

    I find this laughable because it's almost the opposite of the "If we can put a man on the moon, we can solve cancer" fallacy. If we can't copy an amoeba now, we never will? LOL. No. I beg to differ. We can't right now, for a million fundamental reasons that are all being solved in time.

    Here's some perspective. I work in cell biology. Three years ago, measuring genetic expression required the RNA of at least a small cluster of cells. Two years ago, single-cell RNA analysis became available. A year ago we started seeing the ability to split one cell into four equal vesicles, each able to be analyzed separately if need be. We also now have the software and processing power to infer huge bioinformatic hypotheses from this intricate data. In three years the sampling went from an average over many cells, to a single cell, to multiple samples from a single cell (for statistical accuracy). THIS IS NOT EVEN THE UPCURVE OF SINGULARITY, but it sure feels like it.

    Nanomaterials are allowing for crazy new properties on the macro scale. Biotechnology is becoming cellular and surpassing simple chemistry. Artificial intelligence is now being implemented on neural-like computer architectures which are much more powerful at brain-like activity.

    Full disclosure: I've been a Kurzweilian Singularity believer for years now, and I'm betting my life on it. But I've had a lot more than confirmation bias going on to keep my confidence very high.

    • But we are so far from any kind of AI that worrying about what form it might take is stupid. Yes, there are lots of things that might happen in the far future. Until they are closer, worrying about them is silly. There have been stories from people who are all paranoid about AI and think we need to start making rules. No we don't: we are so far away we don't even know how far away we are. We also have no idea what form it'll take. It may turn out that self-awareness is a uniquely biological trait…

  • by DamnStupidElf ( 649844 ) <Fingolfin@linuxmail.org> on Sunday November 23, 2014 @09:35PM (#48446531)
    Putting nuclear bombs on the tips of rockets and programming them to hit other parts of the Earth is also mere tool use. Tools are not inherently safe, and never have been. Autonomous tools are even less inherently safe. The most likely outcome of a failed singularity isn't being ruled by robot overlords, it's being dead.
  • by mjm1231 ( 751545 ) on Sunday November 23, 2014 @09:35PM (#48446533)

    http://www.theguardian.com/sci... [theguardian.com]

    I couldn't find a more recent article, but at the end it mentions that this AI came up with a formula for cellular metabolism. It is my understanding that this formula has been tested and is valid, but no human scientist understands what the formula means yet.

  • by JBMcB ( 73720 ) on Sunday November 23, 2014 @09:35PM (#48446541)

    What is the point of this article? You would think that people would have learned by now not to attempt predictions about where technology will go.

  • by invid ( 163714 ) on Sunday November 23, 2014 @09:57PM (#48446605)
    You can make a machine that is many orders of magnitude more intelligent than a human, but unless it has the mechanisms to want something, it won't want anything. Think about what it would take to program a conscious being, just think about how we humans are aware of time and the movement of time--as a software engineer it boggles my mind trying to think about how the brain accomplishes that. Then to program a machine (biological or not) to want something in a fairly consistent way over a period of time under changing circumstances...it takes more than just brute force processing time to accomplish that. We biological machines are aware of ourselves, and we have no idea how we accomplish that. We are going to have to figure that out before we make the singularity machine.
    • This position seems to be popular among people who don't know the first thing about AI. So let me explain the situation from a point of view familiar to AI practitioners: a rational agent is one that acts as if it were maximizing the expected value of some utility function. That, in a nutshell, is the core of making decisions under uncertainty, and the basic recipe for a large part of AI. As part of the design of a utility-centered AI system, you define the utility function, which is precisely how you would…
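
      (A minimal Python sketch of the recipe described above; the states, actions, and utility values are made up for illustration:)

          # A rational agent: pick the action that maximizes expected utility.
          def expected_utility(outcomes, utility):
              # outcomes: list of (probability, state) pairs
              return sum(p * utility(state) for p, state in outcomes)

          def choose(actions, model, utility):
              # model(action) -> [(probability, state), ...]
              return max(actions, key=lambda a: expected_utility(model(a), utility))

          # The designer, not the agent, defines the utility function.
          utility = {"charged": 10, "idle": 0, "dead_battery": -50}.get

          def model(action):
              if action == "recharge":
                  return [(1.0, "charged")]
              return [(0.8, "idle"), (0.2, "dead_battery")]

          print(choose(["recharge", "wait"], model, utility))  # -> recharge

      (The point being: "wanting" something, in the practitioner's sense, is nothing more mysterious than this maximization.)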

      • by Mr.CRC ( 2330444 )

        Now think about what happens when that AI can experiment with its own utility function (something we have either a very limited ability, or no ability, to do with ourselves).

        That is the essence of true singularity. For a singularity, the AI must be strong enough to grok its own design, be able to self-modify, and have a system architecture that permits recovery from backup (like tweaking your BIOS on a dual BIOS motherboard) if the next iteration of itself fails.

        Ideally it could run a full simulation of a modified…

        • by Prune ( 557140 )
          But our bodies are one of the most essential determinants of the nature of human consciousness: http://en.wikipedia.org/wiki/E... [wikipedia.org]

          This is far more than a philosophical thesis; it's backed up by neuroscience. I highly suggest you read Damasio, who's one of the top neuroscientists in the world. A good overview can be found in his book Self Comes to Mind; its price is justified by the selection of paper references in the endnotes alone.
      • by delt0r ( 999393 )
        Well, there is also the Chinese Room thing. Just because a machine can translate a language, does it know that language? Just because it maximizes some utility function (we don't seem to be doing that at all) does not mean "it" "knows" anything about wants, or about what it is doing.
  • by jcohen ( 131471 ) * on Sunday November 23, 2014 @10:09PM (#48446651) Homepage

    The development of Watson stems from employers' inability to use human intelligence 100% instrumentally -- i.e., people can't be used as clocks. Once Watsons are prevalent, humans will be economically superfluous in nearly every area that requires thought. Our overlords won't even bother to bring out the old line about freeing up humans' time to do "better things."

  • I, for one, welcome our new shiny metal overlords.
  • by RJFerret ( 1279530 ) on Sunday November 23, 2014 @10:34PM (#48446727)

    Meanwhile, a week ago nematodes reached the singularity, when folks mapped the roundworm's 300-odd neurons and their synaptic connections into a Lego robot, which proceeded to react to moving toward a wall in similar fashion to a biological nematode.
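
    (A rough sketch of the principle, not the project's actual code: propagate activations through a fixed "connectome" and read motor neurons as wheel commands; the tiny network and threshold here are invented:)

        # Connectome-driven control: sensory activation spreads through
        # fixed synaptic weights; motor-neuron totals become wheel commands.
        CONNECTOME = {  # presynaptic -> [(postsynaptic, weight), ...]
            "nose_touch": [("inter1", 1.0)],
            "inter1": [("motor_left", 0.8), ("motor_right", -0.8)],
        }
        THRESHOLD = 0.5

        def step(stimulus):
            activation = dict(stimulus)
            frontier = list(stimulus)
            while frontier:
                pre = frontier.pop()
                if activation.get(pre, 0.0) < THRESHOLD:
                    continue
                for post, weight in CONNECTOME.get(pre, []):
                    before = activation.get(post, 0.0)
                    activation[post] = before + weight * activation[pre]
                    if before < THRESHOLD <= activation[post]:
                        frontier.append(post)
            return activation.get("motor_left", 0.0), activation.get("motor_right", 0.0)

        # Touching the "nose" turns the robot away, as in the biological worm.
        print(step({"nose_touch": 1.0}))  # -> (0.8, -0.8)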

  • Bright lights, like this loon, are all part of the "man is not ready..." pseudo-religious bullshit.

    In fact, we will progress to artificial life and artificial intelligence in erratic steps - some large, some small - some hard, some easy.
    Easy is logic, easy is memory and lookups, easy is speed: hence Watson, as we start to climb the connectedness/co-relatedness/content-addressable-memory ladder. (Content-addressable memory (CAM) is like a roll call in the Army: "Private Smith?" "Here." See the sketch after this comment.) A lot of the aspects of intelligence are ramifications of CAM and other aspects of interconnectedness. Add in the speed and memory depth, and more and more aspects of an AI emerge. As time goes on, step by step, intelligence will emerge. It might be like an infant that needs to learn as we do, but at a far higher speed: zero to 25 years old in 5 minutes? Experiential memories: can they be done at high speed, or must that clock take longer?

    The precise timing of these stages elude me, but I believe they will emerge with time.

    As to whether this AI will be a malevolent killer or one of altruistic aspect? It seems to me that this will depend on how it is brought up.
    (Until an AI can reproduce sexually, there is no he/she.) Can a growing AI be abused, mentally, as children are abused? I suspect that with no sexuality there will be no casus abusus. That is not to say that ways to abuse a growing AI are impossible to find; they will emerge in time.

    As these AIs emerge, how smart will they be? An IQ of 25, or one of 25,000,000? This might bear some relationship to how these AIs treat mankind: as a student at man's knee, or as something that looks down at man with an IQ of 100, sees bees and ants with a group IQ of 25, muses "what's the difference?", and thinks of other things...
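
    (For the content-addressable memory point above, a toy Python sketch; real CAM hardware matches all entries in parallel, and this class is purely illustrative:)

        # Content-addressable memory: retrieve by content pattern, not address.
        class CAM:
            def __init__(self):
                self.entries = []

            def store(self, record):
                self.entries.append(record)

            def recall(self, **pattern):
                # Hardware CAM would match every entry in parallel; we scan.
                return [r for r in self.entries
                        if all(r.get(k) == v for k, v in pattern.items())]

        memory = CAM()
        memory.store({"rank": "Private", "name": "Smith", "status": "present"})
        memory.store({"rank": "Private", "name": "Jones", "status": "absent"})
        print(memory.recall(name="Smith"))  # "Private Smith?" -> "here"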

    • by mvdwege ( 243851 )

      In fact, we will progress to artificial life and artificial intelligence in erratic steps - some large, some small - some hard, some easy.

      Yep, got your pseudo-religious bullshit right there. The real Rapture of the Nerds.

  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Sunday November 23, 2014 @11:51PM (#48446911) Homepage

    As I note in my doom and gloom YouTube [youtube.com], it's a 50-year-old analogy in the quest for AI that artificial flight did not require duplicating a bird. Artificial intelligence may look very different, and in fact in my video, I avoid defining intelligence and merely point out that "a computer that can program itself" is all that is required for the singularity.

    • Computers have been able to "program themselves" since the first Fortran compiler. We just taught computers how to interpret specifications written at higher and higher levels. Let me know what it'd take for a computer to come up with a program's requirements all by itself, and then we'll know what a singularity needs.
      • A FORTRAN compiler does not run continuously and add additional functionality as it goes along.

        In the debate that followed the opening remarks (video [youtube.com], with very bad audio because the batteries on the lapel microphone ran down), someone suggested that intelligence requires consciousness. I suggested a Linux daemon could be considered conscious: it runs continuously and takes actions based on input and conditions. So my argument is that for the singularity you just need a daemon that continuously adds functionality…
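
        (The daemon analogy, reduced to a Python sketch; the self-extension step is a deliberate stub, since that step is the whole unsolved problem:)

            import time

            # A toy "daemon": runs continuously, acts on input and conditions.
            def sense():
                return {"now": time.time()}

            handlers = [lambda state: print("tick at", state["now"])]

            def maybe_add_functionality():
                # For the singularity argument, this would have to synthesize
                # new handlers from goals the system set itself. It doesn't.
                pass

            def daemon(iterations=3):
                for _ in range(iterations):  # a real daemon loops forever
                    state = sense()
                    for handle in handlers:
                        handle(state)
                    maybe_add_functionality()
                    time.sleep(0.1)

            daemon()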

    • "a computer that can program itself"

      is a computer that has been programmed by a human, with parameters and a system specifically made by humans, to take defined variables and combine them within pre-programmed parameters

      all pounded out by a dumb monkey

      'teh singularity' is a tautology

      you can't make a new thing by calling the same thing a different name

    • Here is a link to the 1963 article Artificial Intelligence: Progress and Problems [archive.org]. It refers to the bird analogy as a "trite analogy", which leads me to believe that it predates even this article by many years.
    • by Prune ( 557140 )
      What could be more dangerous than building an AI that's smarter than us but cannot relate to us because its intelligence is drastically different from ours? There's no foolproof way to actually implement "software" constraints in a general superhuman-intelligence AI (Asimov's laws of robotics are about the most unrealistic thing I've ever read in sci-fi), and the safety factor falls even further over generations as you get the AI to design an even smarter AI. Physical constraints? Do you think that when an ultr…
  • ...is the constants. If your process doubles in the measured quantity in 20 days, then you have something that might be worth worrying about (assuming that it won't hit some other limit, so long as that limit isn't you), but if it doubles in 20 years you have some time to consider and prepare. Whenever I see talk about the singularity, it seems like the growth people are talking about either has a very short doubling period (which it probably doesn't) or the growth is actually super-exponential (the doubling per…
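
    (The arithmetic is easy to make concrete; assuming clean exponential growth, time to grow by a factor F is log2(F) doubling periods:)

        import math

        # How long until a quantity grows 1000x, for two doubling periods?
        for label, doubling_days in [("20 days", 20), ("20 years", 20 * 365)]:
            days = math.log2(1000) * doubling_days
            print(f"doubling every {label}: 1000x in {days / 365:.1f} years")

        # doubling every 20 days:  1000x in 0.5 years
        # doubling every 20 years: 1000x in 199.3 years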

  • The level of intelligence of amoebas is hard to reach: they have to survive in the real world.

    AI has therefore set its standards considerably lower: matching the level of intelligence of Berkeley philosophers and social scientists. Here is an example, indistinguishable from its human counterparts:

    http://www.elsewhere.org/journ... [elsewhere.org]

  • we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is.

    so glad to see published articles where they say this plainly

    'teh singularity' needs to go to the dustbin of history b/c it's wasting *billions* of research dollars

  • The "AI" these days is just a collection of IF/THEN statements.

  • And AI is to biological intelligence what airplanes are to birds.
  • Mental masturbation wherein meaningless questions are poorly answered.

  • "Asking whether a computer can think is like asking whether a submarine can swim."

    --Dijkstra

  • "Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything."

    A ridiculous analogy. It's like saying the dog that fetches the wood for its master has no intelligence at all; it's the master that "fetched" the wood. AI is not pseudo-intelligence.

  • What he asks of machines they cannot have, as it comes from birth. He's looking in the wrong place: the furthest step isn't Watson, but UHFT.

    An amoeba's interaction with its environment comes from the fact that it's a product of that environment. AIs are not a product of their environment; they are artificial. What Alva Noe asks of an AI could only be answered by one that appears spontaneously from its environment.

    However, the environment we've created in which AI could appear is way too simple to allow su…

  • The professor says that clocks keep time, but they don't know what time it is. I found this particular choice really ironic. First, I keep pretty good time: if you ask me what time it is, at any time of the day, I can tell you within 10 minutes or so, regardless of the last time I looked at a clock. I regularly wake up a minute or two before my alarm. When I don't use an alarm, I can tell the time within a few minutes of opening my eyes. On the other hand, I think that most humans don't understand…
  • There is a distinct difference between Intelligence and Consciousness.
    I have no doubt that the AI we are building will improve dramatically, even to the point where it far exceeds human intelligence. But it is unlikely that these intelligent machines will ever be sentient.
  • And he calls himself a professor? I guess he's way off base...
    Smarter than us doesn't mean it has to understand what it's talking about (most politicians don't know jack about what they're talking about, and still they 'make' the rules)...
    And let's not forget that with neural networks AI can advance much faster than a regular person; also, there are far more advanced projects going on in laboratories than IBM's Watson, ones which haven't been shown to the public. IBM's Watson is just the tip of the iceberg…

  • I don't think the technological singularity - if there's such a thing - should be feared. You may, however, want to fear widespread pseudo/artificial/whatever intelligence. Or just call it plain automation. Because it's going to take your job well before there's a technological singularity. And the challenges that need to be overcome to get us there are much easier than copying an amoeba. You don't need to be able to copy an amoeba in order to be able to do just about anything a human does better than a human…
