
Why Ray Kurzweil's Google Project May Be Doomed To Fail

moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."
  • by Tatarize ( 682683 ) on Monday January 21, 2013 @07:07PM (#42651941) Homepage

    You can draw a distinction between experiencing the world and processing raw information, but how sharp a line can it be when I experience the world precisely by processing raw information?

  • by Anonymous Coward on Monday January 21, 2013 @07:27PM (#42652101)

    I've always thought it was about information combined with wants, needs, and fear. Information needs context to be useful experience.

    You need to learn what works and doesn't, in a context, with one or many goals. Babies cry, people scheme (or do loving things), etc. It's all just increasingly complex ways of getting things we need and/or want, or avoiding things we fear or don't like, based on experience.

    I think if you want exceptional problem solving and nuance from an AI, it has to learn from a body of experience. And I wouldn't be surprised if many have said so, long before I did.

  • by JohnWiney ( 656829 ) on Monday January 21, 2013 @07:29PM (#42652115)
    We have always assumed that humans are essentially a more complex version of the most sophisticated technology we know. Once it was mechanical clockwork, later steam engines, electric motors, etc. Now it is digital logic: put enough of it in a pile and you'll get consciousness and intelligence. A completely non-disprovable claim, of course, but I doubt it is any more accurate than the previous ideas.
  • Re:Mr. Grandiose (Score:5, Interesting)

    by Iamthecheese ( 1264298 ) on Monday January 21, 2013 @07:39PM (#42652217)
    That "circus magic" showed enough intelligence to parse natural language. I understand you want to believe there's something special about a brain, but there really isn't. The laws of physics are universal and apply equally to your brain, a computer, and a rock.

    After everything science has achieved, you should know that "we don't know" means neither "it's impossible" nor "this isn't the right method".
  • Not quite (Score:4, Interesting)

    by ceoyoyo ( 59147 ) on Monday January 21, 2013 @07:53PM (#42652311)

    A technology editor at MIT Technology Review says Kurzweil's approach may be fatally flawed based on a conversation he had with an MIT AI researcher.

    From the brief actual quotes in the article, it sounds like the MIT researcher is arguing that Kurzweil's proposal, in a book he wrote, for building a human-level AI might have some issues. My impression is that he thinks you can't build an actual human-level AI without more cause-and-effect type learning, as opposed to just feeding it stuff you can find on the Internet.

    I think he's probably right... you can't have an AI that knows about things like cause and effect unless you give it that sort of data, which you probably can't get by strip-mining the Internet. However, I doubt Google cares.
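    (A toy illustration of that point; the little "world" below and all its numbers are invented for the sketch, not anything from the article. A passive learner sees umbrellas and wet grass perfectly correlated; only an agent that can intervene learns that handing out umbrellas doesn't soak the lawn.)

        # Hypothetical sketch: observational data alone can't separate
        # correlation from causation; interventional data can.
        import random

        def world(action=None):
            """Tiny world: rain causes both umbrellas and wet grass."""
            rain = random.random() < 0.3
            umbrellas = rain if action is None else True  # forced intervention
            wet_grass = rain
            return umbrellas, wet_grass

        # Passive observation: umbrellas and wet grass always agree, so a
        # purely statistical learner may conclude umbrellas wet the lawn.
        obs = [world() for _ in range(10_000)]
        print(sum(u == w for u, w in obs) / len(obs))    # ~1.00

        # Intervention: force the umbrellas open and watch the grass.
        exp = [world(action="open_umbrellas") for _ in range(10_000)]
        print(sum(w for _, w in exp) / len(exp))         # ~0.30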

  • by MichaelSmith ( 789609 ) on Monday January 21, 2013 @08:05PM (#42652395) Homepage Journal

    My wife is putting our son through those horrible cram schools (Kumon and others). I was so glad when he found ways to cheat: now his marks are better, he gets yelled at less, and he actually learned something.

  • Re:Ah! (Score:3, Interesting)

    by durrr ( 1316311 ) on Monday January 21, 2013 @08:11PM (#42652441)

    The Chinese Room is the dumbest fucking thought experiment in the history of the universe. Also, Penrose is fucking clueless when it comes to consciousness.

    Now, having set the abrasive comments aside (and without bothering to spell out the critique of those atrocities: the internet and Google do a much better job on the fine details than any post here ever will)...

    So, back to the topic at hand. Boris Katz forgets a very important detail: a lifetime of experience can be fed to a computer cluster with several thousand cores, each running at several billion Hz, in a very short time. Now, I'm not saying it's guaranteed to work or to produce anything viable, but I am saying it's not infeasible.
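    (Rough numbers for that point; every figure below is an assumption chosen just to show the orders of magnitude, and it assumes experience parallelizes cleanly across cores, which it may not.)

        # Back-of-envelope: compressing a "lifetime of experience".
        seconds_per_year = 365 * 24 * 3600
        lifetime_s = 30 * seconds_per_year      # ~30 years of waking input

        cores = 5_000                           # "several thousand cores"
        speedup = 100                           # assumed sim-time per core

        wall_clock_h = lifetime_s / (cores * speedup) / 3600
        print(f"{wall_clock_h:.1f} hours")      # ~0.5 h under these guesses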

    I'm also not particularly excited about Kurzweil, however; he's a good introduction, but his presentation is a bit too shallow and oriented towards laymen (a good way to spread the idea, but a bad one to refine it or to attract good critique).

  • Re:Bad approach. (Score:2, Interesting)

    by Anonymous Coward on Monday January 21, 2013 @08:11PM (#42652443)

    "We know from fMRI that 'free will' does not exist and that 'thoughts' are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future."

    No, we don't know this. Some researchers believe that this might be the case, but it certainly isn't a proven fact. Personally, I think it is a misinterpretation of the data, and that what the fMRI is observing is the process of consciousness.

  • by bmo ( 77928 ) on Monday January 21, 2013 @08:34PM (#42652611)

    "Modern AI involves having some system, which ranges from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster."

    Biological neurons on a plate learning faster than neurons inside one's head? They are both biological and work at the same "clock speed" (there isn't a clock speed).

    Besides, we do this every day. It's called making babies.

    The argument I'm trying to get across is that AI evangelists like Kurzweil promote the idea that AI can somehow bypass experience, a.k.a. "learning by doing" and "common sense." This is hard enough to teach to systems that are the result of the past 4.5 billion years of MomNature's bioengineering. I'm willing to bet that AI is doomed to fail (that is, to be severely limited compared to the lofty goals of the AI community and the fevered imaginations of the Colossus/Lawnmower Man/Skynet/Matrix fearmongers) and that MomNature has already pointed the way to actual useful intelligence, as flawed as we are.

    Penrose was right, and will continue to be right.

    --
    BMO

  • Re:Ah! (Score:4, Interesting)

    by TheLink ( 130905 ) on Monday January 21, 2013 @10:53PM (#42653391) Journal
    I'd prefer that researchers spend time augmenting humans rather than creating AI, especially strong AI. We already have plenty of human and nonhuman entities in this world, and we're not doing such a great job with them. Why create AIs? To enslave them?

    There is a subtle but still significant difference between augmenting humans (or animals) and creating new entities.

    There are plenty of things you can do to augment humans:
    - background facial and object recognition
    - artificial eidetic memory
    - easy automatic context-sensitive scheduling of tasks and reminders
    - virtual telepathy and telekinesis (control could be through gestures or actual thought patterns; brain-computer interfaces are improving)
    - maybe even automatic potential collision detection.

    And then there's the military stuff (anti-camouflage, military object recognition, etc).
  • by ridgecritter ( 934252 ) on Monday January 21, 2013 @11:53PM (#42653725)

    This interests me. As a nonexpert in AI, I've always felt that a critical missing piece in attempts to build 'strong' AI (which I take to mean AI that performs at a human level or better) is a process in which the AI formulates questions, gets feedback from humans (right, wrong, senseless: try again), modifies its responses, and gets further feedback... lather, rinse, repeat... until its responses pass the Turing test. This is basically just the evolutionary process. It is what made us.

    I don't think we need to know how a mind works to make one. After all, hydrogen and time have led to this forum post, and I doubt the primordial hydrogen atoms were intelligent. So we know that with biochemical systems it's possible to come up with strong intelligence, given enough time and evolution. Since evolution requires only variation, selection, and heritability, it's hard for me to believe we can't do the same with computational systems. Is it so difficult to write a learning system that assimilates data about the world, asks questions, and changes its assumptions and conclusions on the basis of feedback from humans?

    And it's probably already been tried, and I haven't heard about it. If it has, I'd like to know. If not, I'd like to know why not.
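    (For what it's worth, the loop described above has been tried in miniature many times; Dawkins' "weasel program" is the classic toy version. A minimal sketch, with the human feedback stubbed out as agreement with a fixed target string:)

        # Variation, selection, heritability in ~20 lines. The fitness
        # function stands in for human feedback ("right/wrong, try again").
        import random

        TARGET = "the cat sat on the mat"
        ALPHABET = "abcdefghijklmnopqrstuvwxyz "

        def fitness(s):
            return sum(a == b for a, b in zip(s, TARGET))

        def mutate(s):
            i = random.randrange(len(s))
            return s[:i] + random.choice(ALPHABET) + s[i + 1:]

        pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
               for _ in range(50)]

        for gen in range(1_000):
            pop.sort(key=fitness, reverse=True)
            if pop[0] == TARGET:
                break
            # Heritability with variation: the fittest half reproduces.
            pop = pop[:25] + [mutate(random.choice(pop[:25]))
                              for _ in range(25)]

        print(gen, repr(pop[0]))   # converges in a few hundred generations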

  • Re:Ah! (Score:4, Interesting)

    by david_thornley ( 598059 ) on Tuesday January 22, 2013 @01:51PM (#42658851)

    Searle's Chinese Room paper is basically one big example of begging the question.

    The hypothetical setting is a room containing rules for transforming symbols, a person, and lots and lots of scratch paper. Stick a question or something else written in Chinese in one window; the person works through the rules and pushes Chinese writing out the other window. Hypothesize that this passes the Turing test with people fluent in Chinese.
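    (A toy version of the setup, with invented placeholder rules, makes the point that the operator needs no Chinese at all; the lookup is pure syntax:)

        # The "room" as a rule book: symbol in, symbol out, zero understanding.
        RULES = {
            "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "Fine, thanks."
            "天在下雨吗?": "我不知道.",    # "Is it raining?" -> "I don't know."
        }

        def room(slip):
            # The operator just matches shapes against the rule book.
            return RULES.get(slip, "请再说一遍.")  # "Please say that again."

        print(room("你好吗?"))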

    Searle's claim is that the room cannot be said to understand Chinese, since no component can be said to understand Chinese. The correct answer, of course, is that the understanding is emergent behavior. (If it isn't, then Searle is in the rather odd position of claiming that some subatomic particles must understand things, since everything that goes on in my brain is emergent behavior of the assorted quarks and leptons in it.) Heck, later in the paper, he says understanding is biological, and biology is emergent behavior from chemistry and physics.

    He then proposes possible counterarguments and answers each of them by going through topics unrelated to his argument (though relevant to the situation), finishing by showing that each scenario is equivalent to the Chinese Room and therefore doesn't have understanding. Yes, this part of the paper is simply question-begging and camouflage. It was hard for me to notice, given the writing, but once you're looking for it you should see it.
