AI Google Technology

Why Ray Kurzweil's Google Project May Be Doomed To Fail

moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."
  • by dmomo ( 256005 ) on Monday January 21, 2013 @07:08PM (#42651949)

    It won't be perfect, but "fundamentally flawed" seems like an overstatement to me. A personal AI assistant will be useful for some things, but not everything. What it will be good at won't necessarily be clear until it's put into use. Then, any shortcomings can still be improved, even if certain tasks must be more or less hard-wired into its bag of tricks. It will be just as interesting to know what it absolutely won't be useful for.

  • Mr. Grandiose (Score:3, Insightful)

    by Anonymous Coward on Monday January 21, 2013 @07:17PM (#42652015)

    Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.
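
    For readers who never saw the original: ELIZA's "intelligence" was little more than regular-expression pattern matching against canned reply templates, which is the sense in which the parent calls modern assistants "scaled-up Eliza". Here is a minimal sketch of that idea in Python; the rules are invented for illustration and are not the original DOCTOR script.

        import re

        # A few illustrative ELIZA-style rules: match a surface pattern in the
        # user's sentence and echo part of it back inside a canned template.
        # No model of meaning is involved, only string rewriting.
        RULES = [
            (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
            (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
            (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
        ]

        def respond(utterance):
            """Return the first matching canned response, else a generic prompt."""
            for pattern, template in RULES:
                match = pattern.search(utterance)
                if match:
                    return template.format(match.group(1).rstrip(".!?"))
            return "Tell me more."

        print(respond("I am worried about this project"))
        # -> How long have you been worried about this project?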

  • loops (Score:3, Insightful)

    by perceptual.cyclotron ( 2561509 ) on Monday January 21, 2013 @07:23PM (#42652065)
    The data vs IRL angle isn't in and of itself an important distinction, but an entirely valid concern that is likely to fall out of this distinction (though needn't be a necessary coupling) is that the brain works and learns in an environment where sensory information is used to predict the outcomes of actions - which themselves modify the world being sensed. Further, much of sensation is directly dependent on, and modified by, motor actions. Passive learners, DBMs, and what have you are certainly able to extract latent structure from data streams, but it would be inadvisable to consider the brain in the same framework. Action is fundamental to what the brain does. If you're going to borrow the architecture, you'd do well to mirror the context.
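
    To make that concrete, below is a deliberately toy sketch (the environment and numbers are invented for illustration) of the action-perception loop described above: the agent's own actions change the world it then senses, and the error in predicting those outcomes drives learning, unlike a passive learner fed a fixed data stream.

        import random

        class ToyWorld:
            """A one-dimensional world whose state is changed by the agent's own
            moves, so the next percept depends on the previous action."""
            def __init__(self):
                self.position = 0.0

            def step(self, action):
                self.position += action                       # acting modifies the world...
                return self.position + random.gauss(0, 0.1)   # ...and the next percept reflects that

        def act_perceive_loop(steps=10):
            world = ToyWorld()
            predicted = 0.0
            for _ in range(steps):
                action = random.choice([-1.0, 1.0])
                predicted += action              # predict the sensory outcome of the chosen action
                sensed = world.step(action)      # sense the world that the action just changed
                error = sensed - predicted       # prediction error about one's own actions...
                predicted += 0.5 * error         # ...drives the learning update
            return predicted

        act_perceive_loop()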
  • by bmo ( 77928 ) on Monday January 21, 2013 @07:28PM (#42652107)

    AI itself is fundamentally flawed.

    AI assumes that you can take published facts, dump them into a black box, and expect the output to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.

    I'm sure everyone here can either identify this or identify with it.

    --
    BMO

  • Re:Ah! (Score:5, Insightful)

    by Jherico ( 39763 ) on Monday January 21, 2013 @07:30PM (#42652127) Homepage
    I hope Kurzweil succeeds simply so that we can assign the resulting AI the task of arguing with these critics about whether its experience of consciousness is any more or less valid than theirs. It probably won't shut them up, but it might allow the rest of us to get some real work done.
  • by astralagos ( 740055 ) on Monday January 21, 2013 @07:32PM (#42652155)
    There's a lather/rinse/repeat model with AI publication. I encountered it in configuration (systems designed to build systems), and it goes like this:
    1. We've built a system that can make widgets out of a small set of parts; now we will build a system that can generally build artifacts!
    2. (2-3 years later) We're building an ontology of parts! It turns out to be a bit more challenging!
    3. (5-7 years later) Ontologies of parts turn out to be really hard! We've built a system that builds other widgets out of a small set of -different- parts!
    The models of thought in AI (and to a lesser extent cog psych) are still caught up in this very algorithmic, rule-based world that can be traced almost lineally from Aristotle, without much real examination of how our thinking process actually works. The problem is that whenever we try to take these simple models and expand them out of a tiny field, they explode in complexity.
  • by Ralph Spoilsport ( 673134 ) on Monday January 21, 2013 @07:33PM (#42652159) Journal
    Because Kurzweil's a freakin' lunatic snakeoil salesman? I dunno - just guessin'.
  • by russotto ( 537200 ) on Monday January 21, 2013 @07:34PM (#42652169) Journal

    "Always listen to experts. They'll tell you what can't be done and why. Then do it" (from the Notebooks of Lazarus Long)

  • Ah, naysayers... (Score:5, Insightful)

    by Dr. Spork ( 142693 ) on Monday January 21, 2013 @07:34PM (#42652171)
    What happened to the spirit of "shut up and build it"? Google is offering him resources, support, and data to mine. We just have to admit that we don't know enough to predict exactly what this kind of thing will be able to do. I can bet it will disappoint us in some ways and impress us in others. If it works according to Kurzweil's expectations, it will be a huge win for Google. If not, they will allocate all that computing power to other uses and call it a lesson learned. They have enough wisdom to allocate resources to projects with a high chance of failure. This might be one of them, and that's a good sign for Google.
  • by rubycodez ( 864176 ) on Monday January 21, 2013 @07:34PM (#42652175)

    that passes for intelligence in college, so what's the problem? most people on the street don't even have "book smarts", they're dumber than a sack of shit

  • Bad approach. (Score:1, Insightful)

    by jd ( 1658 ) on Monday January 21, 2013 @07:37PM (#42652201) Homepage Journal

    Both of them.

    The human brain doesn't "store" information at all (and thus never processes it). There are four parts to the brain: there's the DNA (which is unique to each cell, according to some researchers), there are proteins attached to each connection (nobody knows what they do, but they seem to be involved in carrying state information between one generation of synapse and another), there are the synapses themselves (the connectome), and there's the weighting given to each synapse (the conversion between electrical and chemical signals isn't fixed; it varies between each synapse and between different sorts of signal).

    None of this involves sensory data, memories, etc. None of that exists anywhere in this system. Memories are synthesized at the time of recall from the meta-data in the brain, but there is nothing in the brain you can point to and call it a memory. Everything is synthesized at time of use and then disposed of. (This is why you can create false memories so easily and why the senses are so easily fooled.)

    The brain does not process the senses, either. Nor are the senses distinct - they bleed into each other. The brain is then given a virtual model with all the gaps filled in with generated data. This VR has properties the real world does NOT have, such as simplifications, which enables the brain to actually do something with it. Raw data would be too noisy and too much in flux.

    This system creates the illusion of intelligence. We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.

    On the other hand, if you want to mimic humans, you need the whole system. One component will give you as much thought as an egg will give you cake. Follow the recipe if you want cake, isolated components will give you nothing useful.

    This is all obvious stuff. I can only assume that Google's inferior logic was therefore produced by a computer.

  • Re:experience (Score:4, Insightful)

    by Zeromous ( 668365 ) on Monday January 21, 2013 @07:38PM (#42652209) Homepage

    So what you are saying is that the computer, like humans, will be boxed in by its own perception?

    How is this metaphysically different from what we *do* know about our own intelligence?

  • by bmo ( 77928 ) on Monday January 21, 2013 @07:55PM (#42652341)

    that passes for intelligence in college, so what's the problem?

    That's the *only* place it passes for intelligence. And that only works for 4 years. It doesn't work for grad level. (If it's working for you at grad level, find a different institution, because you're in one that sucks).

    A lot of knowledge is not published at all. It's transmitted orally. It's also "discovered" through practice, as the user of facts learns where certain facts are appropriate and where they are not. If you could use just books to learn a trade, we wouldn't need apprenticeships. But we still do. We even attach a fancy word to apprenticeships for so-called "white collar" jobs and call them "internships."

    The apprentice phase is where one picks up the "common sense" for a trade.

    As for the rest of your message, it's a load of twaddle, and I'm sure that Mike Rowe's argument for the "common man" is much more informed than your flame.

    Please note where he talks about what the so-called "book learned" (the SPCA) say you should do to neuter sheep, as opposed to what the "street smart" farmer does, and Mike's own direct experience. That's only *one* example.

    http://blog.ted.com/2009/03/05/mike_rowe_ted/ [ted.com]

    In short, your follow-up sentence says that you are an elitist prick who probably would be entirely lost without the rest of the "lower" part of society picking up after you.

    --
    BMO

  • Re:experience (Score:5, Insightful)

    by medv4380 ( 1604309 ) on Monday January 21, 2013 @07:58PM (#42652361)
    Yes, an actual intelligent machine would be boxed in by its own perceptions. Our reality is shaped by our experience through our senses. Let's say, for the sake of argument, that Watson is actually a machine intelligence/strong AI, but the actual problem with it communicating with us is linked to its "reality". When the Urban Dictionary was put into it, all it did was start swearing and using curses incorrectly. What if that was just it having a complete lack of context for our reality? Its reality is just words and definitions, after all. To it, the shadows on the wall are literally books and text-based information. It can't move and experience the world in the way that we do. The problem of communication becomes a metaphysical one, based in how each intelligence perceives reality. We get away with it because we assume that everyone has the same reality as context, but a machine AI does not necessarily have this same context to build communication off of.
  • Re:Bad approach. (Score:5, Insightful)

    by Tablizer ( 95088 ) on Monday January 21, 2013 @08:04PM (#42652383) Journal

    there is nothing in the brain you can point to and call it a memory.

    Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of one) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)

    As far as whether following the brain's exact model is the only road to AI, well, it's too early to say. We tried to get flight by building wings that flap to mirror nature, but eventually found other ways (propellers and jets).

  • by ceoyoyo ( 59147 ) on Monday January 21, 2013 @08:07PM (#42652415)

    No, it doesn't.

    One particular kind of AI, which was largely abandoned in the '60s, assumes that. Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster. AI systems can be taught in all kinds of different ways: by dumping information into them, a la Watson; by letting them interact with an environment, either real or simulated; or by having them watch a human demonstrate something, such as driving a car.

    The objection here seems to be that Google isn't going to end up with a synthetic human brain because of the type of data they're planning on giving their system. It won't know how to throw a baseball because it has never thrown a baseball before. (A) I doubt Google cares whether their AI knows things like throwing baseballs, and (B) it says very little in general about the limits on the capabilities of modern approaches to AI.
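
    As a concrete (and deliberately tiny) illustration of "learning through presentation of input", here is a perceptron sketch in Python; the data and rule are hypothetical, and this is not a claim about how Watson or Google's system works. Show it labeled examples and it adjusts its weights after each mistake; the same skeleton applies whether the examples come from a static corpus, a simulator, or a human demonstration.

        def train_perceptron(examples, epochs=50, lr=0.1):
            """examples: list of (features, label) pairs, with label in {-1, +1}."""
            n = len(examples[0][0])
            weights = [0.0] * n
            bias = 0.0
            for _ in range(epochs):
                for features, label in examples:
                    activation = sum(w * x for w, x in zip(weights, features)) + bias
                    prediction = 1 if activation >= 0 else -1
                    if prediction != label:      # learn only from presented examples it gets wrong
                        weights = [w + lr * label * x for w, x in zip(weights, features)]
                        bias += lr * label
            return weights, bias

        # Hypothetical data: learn a simple AND-like rule from presented examples.
        data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
        print(train_perceptron(data))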

  • Re:Ah! (Score:4, Insightful)

    by Jeremiah Cornelius ( 137 ) on Monday January 21, 2013 @08:13PM (#42652449) Homepage Journal

    Yeah, just keep arguing your way into a semantic web... :-)

  • by PraiseBob ( 1923958 ) on Monday January 21, 2013 @08:21PM (#42652509)
    He has some unusual ideas about the future. He is also one of the most successful inventors of the past century, and like it or not is often ranked alongside Edison and Tesla in terms of prolific ideas and inventions. One of the other highly successful inventors of the past century is Kamen, and he just invented a machine which automatically pukes for people. So... maybe your bar is set a little high.
  • Re:Ah! (Score:3, Insightful)

    by Goaway ( 82658 ) on Monday January 21, 2013 @08:47PM (#42652673) Homepage

    A lifetime of experience to a computer cluster with several thousand cores, and several billion Hz of operational frequency, per core, can be passed in a very short time.

    How?

  • Re:Bad approach. (Score:5, Insightful)

    by Omestes ( 471991 ) on Monday January 21, 2013 @09:45PM (#42653005) Homepage Journal

    The human brain doesn't "store" information at all (and thus never processes it).

    This sounds like mere semantics to me. Yes, there isn't a little television screen playing that one time when you broke your arm, with a post-it note attached saying "memory #4, April 3, 1956". But there is a deeply encoded structure of chemical potentials and neural connections which represents this memory. It is stored, and it is, obviously, processed. If it wasn't so, then how could this memory be subject to action and further processing?

    Yes, it isn't stored like a video file is stored on your computer, or a photo in your album; but this doesn't mean it isn't stored. If it is an object of thought, it is in the brain, and if it is recallable, it is stored.

    We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.

    Huh? I'm not going to get into the agency (free will) debate... But if it did exist, I don't think our understanding of the brain is really up to snuff enough for some fMRIs to show it. If it does exist (again, I'm not getting into it), I doubt very much that it would be a little glowing ball located in the middle of your brain (again with a post-it saying "free will"); it would be like pretty much everything else, distributed across large areas of the brain, and sharing functions with other processes of the brain (like memory, limbic functions, sensory processing, etc...).

    This system creates the illusion of intelligence.

    This sort of statement is why I generally laugh at the whole field of cogsci and AI. Look up p-zombies. At what point is an illusion not an illusion, and if you can't actually tell the difference with any test, how can you ever say, meaningfully, that it IS actually a mere illusion? I make an AI, a very strong AI, and it acts exactly like a human: 100% indistinguishable from a human mind, to an outside observer. Is this an illusion? How do you find out? Given a Turing-test-like environment, where you can't judge on surface features, how could you ever tell? Ask it, and it will say it is intelligent (just like you or me); input a stimulus, and you get the same output you or I would give.

    At this point illusion becomes a meaningless statement, since it is completely unprovable.

    I'm not a fan of Strong AI, and doubt it is possible, but these arguments have been pretty much beaten into the ground by now. I hate to say it, but with intelligence all that matters is inputs and outputs; the rest is a black box. This also ignores the fact that intelligence is a dumb term, completely meaningless when applied to anything non-human. In this case, by using "intelligence" we only mean "human-like", which pretty much means it gives an expected output to a given input.

"Engineering without management is art." -- Jeff Johnson

Working...