Why Google Hired Ray Kurzweil

An anonymous reader writes "Nataly Kelly writes in the Huffington Post about Google's strategy of hiring Ray Kurzweil and how the company likely intends to use language translation to revolutionize the way we share information. From the article: 'Google Translate is not just a tool that enables people on the web to translate information. It's a strategic tool for Google itself. The implications of this are vast and go beyond mere language translation. One implication might be a technology that can translate from one generation to another. Or how about one that slows down your speech or turns up the volume for an elderly person with hearing loss? That enables a stroke victim to use the clarity of speech he had previously? That can pronounce using your favorite accent? That can convert academic jargon to local slang? It's transformative. In this system, information can walk into one checkpoint as the raucous chant of a 22-year-old American football player and walk out as the quiet whisper of a 78-year-old Albanian grandmother.'"
Comments:
  • Awesome post (Score:3, Insightful)

    by iliketrash ( 624051 ) on Thursday December 20, 2012 @08:23PM (#42354285)

    OK--this is probably the stupidest and worst-informed /. post I have ever seen.

  • Ridiculous (Score:1, Insightful)

    by Anonymous Coward on Thursday December 20, 2012 @08:25PM (#42354313)

    Ridiculous visions and promises that are certain not to see the light of day. :-)

  • by Anonymous Coward on Thursday December 20, 2012 @09:12PM (#42354811)

    That's great, but I wish they'd somehow find it in their hearts to turn back on exact word matching, even if it's obscurely hidden.

  • Re:Ridiculous (Score:2, Insightful)

    by LordLucless ( 582312 ) on Thursday December 20, 2012 @09:25PM (#42354907)

    Sorta like men walking on the moon. Reach, grasp, exceeding, etc.

  • by raddan ( 519638 ) * on Thursday December 20, 2012 @10:10PM (#42355321)
    Yeah, but there's a reason why statistical models are hot now and why the old AI style of logical reasoning isn't: the AI stuff only works when the input is perfect, or at least planned for. As we all know, language doesn't really have rules, just conventions. This is why the ML approach to NLP is powerful: the machine works out what was probably meant. That's far more useful, because practically nobody writes well. When Abdur Chowdhury was still Twitter's main NLP guy, he visited our department, and guess what--people even write in more than one language in a single sentence! Not to mention that in the old AI-style approach, if you fill a big box full of rules, you have to search through them. Computational complexity is a major limiting factor in all AI problems. ML has the nice property that you can often simply trade accuracy for speed. See Monte Carlo methods [wikipedia.org] and the toy sketch below.
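    To make that accuracy-for-speed trade concrete, here's a toy sketch of my own (not from any real NLP system): a Monte Carlo estimate of pi, where the sample count is the knob that trades precision for runtime.

        # Toy Monte Carlo estimate of pi: fewer samples run faster
        # but give a noisier answer -- accuracy traded for speed.
        import random

        def mc_pi(n_samples):
            """Estimate pi by sampling points in the unit square."""
            hits = sum(1 for _ in range(n_samples)
                       if random.random() ** 2 + random.random() ** 2 <= 1.0)
            return 4.0 * hits / n_samples

        for n in (100, 10_000, 1_000_000):  # the accuracy/speed knob
            print(n, mc_pi(n))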

    As you point out, ML doesn't "understand" anything. I personally think "understanding" is a bit of a squishy term. Those old AI-style systems were essentially fancy search algorithms with a large set of states and transition rules. Is that "understanding"? ML is basically the same idea except that transitioning from one state to another involves the calculation of a probability distribution, and sometimes whether the machine should transition is probabilistic.
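    If it helps, here's a contrived sketch of that difference (the states and tokens are made up by me): the old-style rule either fires or fails, while the ML-style transition samples from a probability distribution and so degrades gracefully on imperfect input.

        # Deterministic rule lookup vs. probabilistic transition (toy example).
        import random

        rules = {("greeting", "hello"): "smalltalk"}  # old AI: exact match or nothing

        def rule_transition(state, token):
            return rules.get((state, token))  # None on any unplanned input

        # ML-style: even a misspelled token maps to a distribution over next states.
        probs = {("greeting", "helo"): {"smalltalk": 0.9, "confused": 0.1}}

        def ml_transition(state, token):
            dist = probs.get((state, token), {"confused": 1.0})
            states, weights = zip(*dist.items())
            return random.choices(states, weights=weights)[0]  # sample the next state

        print(rule_transition("greeting", "helo"))  # -> None
        print(ml_transition("greeting", "helo"))    # -> "smalltalk", 9 times out of 10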

    I think that hybrid ML/AI systems-- i.e., systems that combine both logical constraints and probabilistic reasoning-- will prove to be very powerful in the future. But does that mean these machines "understand"? If you mean something like what happens in the human brain, I'm not so sure. Do humans "understand"? Or are we also automata? In order to determine whether we've "cracked AI", we need to know the answers to those questions. See Kant [wikipedia.org] and good luck.
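    For what it's worth, the hybrid idea fits in a few lines (a toy of my own devising, not anyone's real system): a probabilistic model ranks the candidates, but hard logical constraints get to veto them first.

        # Toy hybrid: hard constraints filter candidates, probabilities rank the rest.
        def satisfies_constraints(parse):
            return parse["verbs"] == 1  # hard logical rule: exactly one main verb

        candidates = [  # made-up parses with made-up model scores
            {"text": "dog bites man", "verbs": 1, "prob": 0.6},
            {"text": "dog man bites bites", "verbs": 2, "prob": 0.7},  # scores higher but illegal
        ]

        legal = [c for c in candidates if satisfies_constraints(c)]
        best = max(legal, key=lambda c: c["prob"])
        print(best["text"])  # -> "dog bites man"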
  • Re:Awesome post (Score:4, Insightful)

    by TapeCutter ( 624760 ) on Thursday December 20, 2012 @11:40PM (#42355951) Journal
    I would say that Helen Keller already "knew she was"; she just didn't have the mental tools to describe it to herself or others. The "internal dialogue" that gives us an ever-present narrative in a modern human's mind is impossible without language. If you get into a highly emotional state (such as rage or terror), the narrative is silenced, the senses become more acute, reflexive responses take over, adrenaline pumps through you, and pain is suppressed. A champion boxer wins because he is in control of his emotions; if he loses that control for an instant, his opponent may very well lose an ear.

    What astounds me is the mild interest in IBM's Jeopardy-winning computer; to me it's comparable to the moon landing (which I witnessed). When you question the unimpressed, it's clear they don't understand the difficulty of the problem or the significance of the win. Sure, the game of Jeopardy is a restricted domain, but it's far broader than what's needed for a search engine that is "smart" enough to "understand" its user and ask pertinent follow-up questions. However, that's not where I see the biggest impact on society. The most significant impact will come from widely available and "cheap" expert systems that use this technology, an "academic in a box" that professionals can kick under their desk and consult at will (much like software developers use google as their default documentation, but with far fewer frustrations and dead ends). We already have machines that can organise and rummage through the world's knowledge far better than humans can with a manual system; for instance, software developers such as myself are constantly referring to google for advice on esoteric questions.

    What we are starting to see are machines that can make sense of that pile of factoids significantly better than humans can: machines that understand natural language (or at worst the subset that is human text), that can relate facts, discover new patterns, and create and test novel hypotheses to discover new facts within existing data. Sure, it takes 20 tons of air-conditioning alone for a "computer" to beat the speed and accuracy of the small blob of jelly inside the head of a Jeopardy champion, but the basic "AI"* problem has been well and truly cracked over the last decade; squeezing it into an iPhone or scaling it up to a totalitarian demigod is now an engineering problem.

    AI* - as opposed to what is known as the "hard problem of consciousness". I mean the kind of AI that would pass a basic Turing test for the majority of people. You can claim that such a machine is "intelligent" or argue against it; in a pragmatic sense it's irrelevant, since there is no agreed definition of "intelligence". Attributes such as intelligence and understanding are applied to computers because we don't have any other words that describe their behavior. Listen to any developer explaining a bug and you will hear expressions such as "it thinks X" and "it wants Y". These are universal metaphors for discussing computers, not a description of reality; it's how humans communicate about the behavior of ALL objects (particularly animated ones), and it is intimately related to mankind's highly evolved (and innate) "theory of mind".
  • by VortexCortex ( 1117377 ) <VortexCortex@pro ... m minus language> on Friday December 21, 2012 @02:52AM (#42356965)

    1) The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
    2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.

    Then you have AI (machine intelligence) researchers like myself who realize that the world isn't persistent, perfect, or consistent, and neither must intelligent systems be. It's plainly obvious that any sufficiently complex cybernetic (feedback loop) system is indistinguishable from sentience, because that's what sentience is (your mind is merely a sentient cybernetic system). I have but to look at the hierarchical neural networks and structures of the human mind to realize it's only a matter of time before artificial system complexity eclipses our own minds'. The true flaw is top-down thinking. That's not the way complex life was made, that's not the way we achieved sentience, and that's not the way to cause it to happen artificially either... It's the bottom-up approach that works. You can't design sentient intelligences outright, but you can create self-organizing systems that have the capacity to acquire more complexity and evolve more intelligence. Not all machine intelligence systems have an end to the training process -- these don't fit into your bullshit 1) and 2) classifications.
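    To illustrate the "no end to the training process" point (a toy of my own, not a model of any real system): an online learner sitting in a feedback loop just keeps adapting for as long as observations arrive; there is no separate "trained" state.

        # Minimal cybernetic loop: an online learner that never stops training.
        import random

        w, b, lr = 0.0, 0.0, 0.1

        def observe():  # stand-in sensor: noisy samples of y = 2x + 1
            x = random.uniform(-1.0, 1.0)
            return x, 2.0 * x + 1.0 + random.gauss(0.0, 0.1)

        for _ in range(10_000):        # runs for as long as input does
            x, y = observe()
            err = (w * x + b) - y      # feedback signal
            w -= lr * err * x          # adjust behavior from the feedback
            b -= lr * err

        print(round(w, 2), round(b, 2))  # drifts toward 2 and 1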

    Also: The level of intelligence that emerges from any complex system is not artificial, it is real intelligence; that the medium is artificial is not important in terms of intelligence. I think "Artificial Intelligence" is a racist term used by chauvinists who think human intellect is far more special than it really is.

    Want to see something funny? Ask an AI researcher if they believe in Intelligent Design. If they say "Yes", then say, "So you think yourself a god?" If they say "No", then say, "What do you call yourself doing then?" Those working in emergent intelligence will happily reply that they're modeling the same processes that we already know work in nature; the others will be in quite a state!
