Why Ray Kurzweil's Google Project May Be Doomed To Fail
moon_unit2 writes "An AI researcher at MIT suggests that Ray Kurzweil's ambitious plan to build a super-smart personal assistant at Google may be fundamentally flawed. Kurzweil's idea, as put forward in his book How to Create a Mind, is to combine a simple model of the brain with enormous computing power and vast amounts of data to construct a much more sophisticated AI. Boris Katz, who works on machines designed to understand language, says this misses a key facet of human intelligence: that it is built on a lifetime of experiencing the world rather than simply processing raw information."
You have to start somewhere. (Score:5, Insightful)
It won't be perfect, but "fundamentally flawed" seems like an overstatement to me. A personal AI assistant will be useful for some things, but not everything. What it will be good at won't necessarily be clear until it's put into use. Then, any shortcomings can still be improved, even if certain tasks must be more or less hard-wired into its bag of tricks. It will be just as interesting to learn what it absolutely won't be useful for.
Mr. Grandiose (Score:3, Insightful)
Kurzweil is delusional. Apple's Siri, Google Now and Watson are just scaled-up versions of Eliza. Circus magic disguised as Artificial Intelligence is just artifice.
loops (Score:3, Insightful)
Re:You have to start somewhere. (Score:2, Insightful)
AI itself is fundamentally flawed.
AI assumes that you can take published facts, dump them in a black box, and expect the output to be intelligent. Sorry, but when you do this to actual humans, you get what is called "book smart" without common sense.
I'm sure everyone here can either identify this or identify with it.
--
BMO
Re:Ah! (Score:5, Insightful)
We've been down THIS road enough (Score:3, Insightful)
Why Ray Kurzweil's Google Project May Be Doomed? (Score:2, Insightful)
A Heinlein quote comes to mind (Score:5, Insightful)
"Always listen to experts. They'll tell you what can't be done and why. Then do it" (from the Notebooks of Lazarus Long)
Ah, naysayers... (Score:5, Insightful)
Re:You have to start somewhere. (Score:2, Insightful)
that passes for intelligence in college, so what's the problem? most people on the street don't even have "book smarts", they're dumber than a sack of shit
Bad approach. (Score:1, Insightful)
Both of them.
The human brain doesn't "store" information at all (and thus never processes it). There are four parts to this system: the DNA (which is unique to each cell, according to some researchers); the proteins attached to each connection (nobody knows what they do, but they seem to be involved in carrying state information from one generation of synapse to the next); the synapses themselves (the connectome); and the weighting given to each synapse (the conversion between electrical and chemical signals isn't fixed; it varies between each synapse and between different sorts of signal).
None of this involves sensory data, memories, etc. None of that exists anywhere in this system. Memories are synthesized at the time of recall from the meta-data in the brain, but there is nothing in the brain you can point to and call it a memory. Everything is synthesized at time of use and then disposed of. (This is why you can create false memories so easily and why the senses are so easily fooled.)
The brain does not process the senses, either. Nor are the senses distinct - they bleed into each other. The brain is instead given a virtual model with all the gaps filled in with generated data. This VR has properties the real world does NOT have, such as simplifications, which enables the brain to actually do something with it. Raw data would be too noisy and too much in flux.
This system creates the illusion of intelligence. We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.
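For what it's worth, the "back-propagation" invoked here is just error-driven weight adjustment: act, compare the outcome to a target, then nudge the weights to reduce future error. A minimal sketch (a single linear neuron learning y = 2x; the data and learning rate are made up for illustration):

```python
# Minimal error-driven weight update, the core move of back-propagation.
# Illustrative only: one linear "neuron" learning y = 2x from examples.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # initial weight
lr = 0.05  # learning rate (chosen arbitrarily for this sketch)

for _ in range(200):            # repeated presentations ("experience")
    for x, target in examples:
        y = w * x               # act (forward pass)
        error = y - target      # compare outcome to target
        w -= lr * error * x     # adjust to reduce future error

print(round(w, 3))  # -> 2.0, the learned slope
```

The same loop, stacked through many layers and millions of weights, is all that "modifying the logic to reduce errors in future" amounts to mechanically.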
On the other hand, if you want to mimic humans, you need the whole system. One component will give you as much thought as an egg will give you cake. Follow the recipe if you want cake, isolated components will give you nothing useful.
This is all obvious stuff. I can only assume that Google's inferior logic was therefore produced by a computer.
Re:experience (Score:4, Insightful)
So what you are saying is that the computer, like humans, will be boxed in by its own perception?
How is this metaphysically different from what we *do* know about our own intelligence?
Re:You have to start somewhere. (Score:5, Insightful)
that passes for intelligence in college, so what's the problem?
That's the *only* place it passes for intelligence. And that only works for 4 years. It doesn't work for grad level. (If it's working for you at grad level, find a different institution, because you're in one that sucks).
A lot of knowledge is not published at all. It's transmitted orally. It's also "discovered" through practice, as the user of facts learns where certain facts are appropriate and where they are not. If you could use just books to learn a trade, we wouldn't need apprenticeships. But we still do. We even attach a fancy word to apprenticeships for so-called "white collar" jobs and call them "internships."
The apprentice phase is where one picks up the "common sense" for a trade.
As for the rest of your message, it's a load of twaddle, and I'm sure that Mike Rowe's argument for the "common man" is much more informed than your flame.
Please note where he talks about what the so-called "book learned" (the SPCA) say you should do to neuter sheep, as opposed to what the "street smart" farmer does and Mike's own direct experience. That's only *one* example.
http://blog.ted.com/2009/03/05/mike_rowe_ted/ [ted.com]
In short, your follow-up sentence says that you are an elitist prick who probably would be entirely lost without the rest of the "lower" part of society picking up after you.
--
BMO
Re:experience (Score:5, Insightful)
Re:Bad approach. (Score:5, Insightful)
Hogwash! The weightings you talked about are the memories. They may not be easily recognized as a coherent memory (or part of one) by a casual observer, but that's not the same as not being a "memory". You are confusing observer recognition with existence. Confusion does not end existence (except for stunt-drivers :-)
As far as whether following the brain's exact model is the only road to AI, well it's too early to say. We tried to get flight by building wings that flap to mirror nature, but eventually found other ways (propellers and jets).
Re:You have to start somewhere. (Score:5, Insightful)
No, it doesn't.
One particular kind of AI, which was largely abandoned in the 1960s, assumes that. Modern AI involves having some system, ranging from statistical learning algorithms all the way to biological neurons growing on a plate, learn through presentation of input. The same way people learn, except often faster. AI systems can be taught in all kinds of different ways: by dumping information into them, a la Watson; by letting them interact with an environment, either real or simulated; or by having them watch a human demonstrate something, such as driving a car.
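To make "learning through presentation of input" concrete, here is a toy sketch of the interact-with-an-environment variant: a two-armed bandit. Everything here (the payoff probabilities, the exploration rate) is invented for illustration; the agent is never told which arm is better, and discovers it only by acting:

```python
import random

random.seed(0)

def pull(arm):
    # Hidden environment: arm 1 pays off more often than arm 0.
    return 1.0 if random.random() < (0.8 if arm == 1 else 0.2) else 0.0

estimates = [0.0, 0.0]  # the agent's learned value of each arm
counts = [0, 0]

for _ in range(1000):
    # Mostly exploit what has been learned; explore 10% of the time.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = estimates.index(max(estimates))
    reward = pull(arm)
    counts[arm] += 1
    # Running average of observed rewards for this arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates.index(max(estimates)))  # the agent settles on arm 1
```

No facts were "dumped" into the black box; the value estimates exist only because the system acted and observed the consequences, which is exactly the distinction the parent is drawing.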
The objection here seems to be that Google isn't going to end up with a synthetic human brain because of the type of data they're planning on giving their system. It won't know how to throw a baseball because it's never thrown a baseball before. (A) I doubt Google cares if their AI knows things like throwing baseballs, and (B) it says very little generally about limits on the capabilities of modern approaches to AI.
Re:Ah! (Score:4, Insightful)
Yeah, just keep arguing your way into a semantic web... :-)
Re:Why Ray Kurzweil's Google Project May Be Doomed (Score:2, Insightful)
Re:Ah! (Score:3, Insightful)
A lifetime of experience can be passed to a computer cluster with several thousand cores, each running at several billion cycles per second, in a very short time.
How?
Re:Bad approach. (Score:5, Insightful)
The human brain doesn't "store" information at all (and thus never processes it).
This sounds like mere semantics to me. Yes, there isn't a little television screen playing that one time when you broke your arm, with a post-it note attached saying "memory #4, April 3, 1956". But there is a deeply encoded structure of chemical potentials and neural connections which represents this memory. It is stored, and it is, obviously, processed. If it weren't so, then how could this memory be subject to action and further processing?
Yes, it isn't stored like a video file is stored on your computer, or a photo in your album; but this doesn't mean it isn't stored. If it is an object of thought, it is in the brain, and if it is re-callable, it is stored.
We know from fMRI that "free will" does not exist and that "thoughts" are the brain's mechanism for justifying past actions whilst modifying the logic to reduce errors in future - a variant on back-propagation. Real-time intelligence (thinking before acting) doesn't exist in humans or any other known creature, so you won't build it by mimicking humans.
Huh? I'm not going to get into the agency (free will) debate... But if it did exist, I don't think our understanding of the brain is really up to snuff enough for some fMRIs to show it. If it does exist (again, I'm not getting into it), I doubt very much that it would be a little glowing ball located in the middle of your brain (again with a post-it saying "free will"); it would be like pretty much everything else, distributed across large areas of the brain and sharing functions with other processes of the brain (like memory, limbic functions, sensory processing, etc.).
This system creates the illusion of intelligence.
This sort of statement is why I generally laugh at the whole field of cogsci and AI. Look up p-zombies. At what point does an illusion stop being one? And if you can't actually tell the difference with any test, how can you ever say, meaningfully, that it IS actually a mere illusion? I make an AI, a very strong AI, and it acts exactly like a human. 100% indistinguishable from a human mind, to an outside observer. Is this an illusion? How do you find out? Given a Turing-test-like environment, where you can't judge on surface features, how could you ever tell? Ask it, and it will say it is intelligent (just like you or me); input a stimulus, and you get the same output you or I would give.
At this point illusion becomes a meaningless statement, since it is completely unprovable.
I'm not a fan of Strong AI, and doubt it is possible, but these arguments have been pretty much beaten into the ground by now. I hate to say it, but with intelligence all that matters is inputs and outputs; the rest is a black box. This also ignores the fact that intelligence is a dumb term, completely meaningless when applied to anything non-human. In this case, by using "intelligence" we only mean "human-like", which pretty much means it gives an expected output to a given input.