Why Google Hired Ray Kurzweil

An anonymous reader writes "Nataly Kelly writes in the Huffington Post about Google's strategy of hiring Ray Kurzweil and how the company likely intends to use language translation to revolutionize the way we share information. From the article: 'Google Translate is not just a tool that enables people on the web to translate information. It's a strategic tool for Google itself. The implications of this are vast and go beyond mere language translation. One implication might be a technology that can translate from one generation to another. Or how about one that slows down your speech or turns up the volume for an elderly person with hearing loss? That enables a stroke victim to use the clarity of speech he had previously? That can pronounce using your favorite accent? That can convert academic jargon to local slang? It's transformative. In this system, information can walk into one checkpoint as the raucous chant of a 22-year-old American football player and walk out as the quiet whisper of a 78-year-old Albanian grandmother.'"
  • by MisterSquid ( 231834 ) on Thursday December 20, 2012 @08:27PM (#42354337)

    That can convert academic jargon to local slang? It's transformative.

    That right there is going to be one hell of a translation. The claim that all statements in one language can be translated into statements in another language assumes (or seems to assume) that languages are isomorphic.

    However, there are things that cannot be communicated in the limited vocabulary available to, say, a young adult compared to the expansive vocabulary of, say, a scholar of comparative literature. The same applies to concepts that can only be delivered in specialized medical terminology (disparagingly referred to as "jargon") and that cannot be communicated in layperson language.

    None of which is to say that some ideas (even very important ideas) cannot be translated across linguistic groups, but the idea that Google and Kurzweil are somehow going to produce the Internet equivalent of a Babel Fish is nothing more than a wish.

  • Re:Awesome post (Score:5, Interesting)

    by Genda ( 560240 ) on Thursday December 20, 2012 @09:04PM (#42354719) Journal

    You need to investigate the entire initiative Google is spearheading around its acquisition of Metaweb. They are building an ontology for human knowledge, and are ultimately building the semantic networks necessary for creating an inference system capable of human-level contextual communication. The old story about the sad state of computers' contextual capacity is the one about the machine that translated the phrase "The spirit is willing, but the flesh is weak." from English to Russian and back, and what came out was "The wine is good but the meat is rotten."
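
    A crude way to reproduce that old failure is a round-trip test: push a phrase through a context-free, word-by-word substitution and back again. Below is a toy sketch in Python; the bilingual dictionaries are invented for illustration and are not meant to reflect real lexicons or any actual translation system.

        # Toy word-by-word "translator": each word gets its single most
        # frequent sense, with no context.  Dictionaries are made up.
        EN_TO_RU = {
            "spirit":  "спирт",    # most frequent sense: distilled alcohol
            "willing": "готов",
            "flesh":   "мясо",     # most frequent sense: meat
            "weak":    "слабый",
        }
        RU_TO_EN = {
            "спирт":  "alcohol",
            "готов":  "ready",
            "мясо":   "meat",
            "слабый": "weak",
        }

        def word_for_word(text, table):
            return " ".join(table.get(w, w) for w in text.split())

        original = "spirit willing flesh weak"
        restored = word_for_word(word_for_word(original, EN_TO_RU), RU_TO_EN)
        print(restored)  # "alcohol ready meat weak" -- the proverb's sense is gone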

    The new system won't have this problem, because it will instantly know that the phrase is a reference to the Bible. It will also know all the literary links to the phrase, the importance of its use in critical historical conversations, the work of the saints, the despair of martyrs; in short, an entire universe of context will spill out around the phrase, and as it takes the conversational lead provided by the enquirer it will dance to deliver the most concise and cogent responses possible. In the same way, it will be able to apprehend the relationship between a core communication given in context 'A' and translate that conversation to context 'B' in a meaningful way.
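
    In knowledge-graph terms, the phrase itself becomes a node with typed links into everything the system knows about it, and a context-aware translator can consult those links before choosing its wording. Here is a toy sketch of what such an entry might look like; the schema and contents are invented for illustration, not Metaweb's or Freebase's actual data model.

        # Toy knowledge-graph entry: the phrase carries typed edges to its
        # source, register, and topics.  Schema and facts are invented.
        KNOWLEDGE_GRAPH = {
            "the spirit is willing, but the flesh is weak": {
                "source":   "Bible, Matthew 26:41",
                "register": "proverb",
                "topics":   ["temptation", "human frailty"],
            }
        }

        def context_for(phrase):
            return KNOWLEDGE_GRAPH.get(phrase.lower().rstrip("."), {})

        ctx = context_for("The spirit is willing, but the flesh is weak.")
        # Seeing register == "proverb", a context-aware translator would look up
        # the established proverb in the target language instead of going word by word.
        print(ctx.get("source"), ctx.get("register"))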

    Ray is a genius for boiling complex problems down into tractable solution sets. Combine Ray's genius with the semantic toy shop that Google has assembled, and the informational framework for an autonomous intellect starts to come into being. The real question is how you make something like that self-aware. There's another famous story, about Helen Keller: before she had language, before symbolic reference, she lived like an animal, literally a bundle of emotions and instincts. One moment, one utterly earth-shattering moment, there was nothing; then Annie Sullivan, her teacher, placed her hand in a stream of cold water and signed "water" in her palm. Helen understood... water. In the next moment Helen was born as a distinct and conscious being; she learned that she had a name, that she was. I don't know what that moment will look like for machines, I just know it's coming sooner than we think. I also can't be certain whether it will be humanity's greatest achievement or our worst mistake. That remains to be seen.

  • by slacka ( 713188 ) on Thursday December 20, 2012 @09:31PM (#42354985)

    This is a great move for Google's AI research, since their current Director of Research, Peter Norvig, comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis. [1] While these techniques have their uses in specific areas, they will never lead us to a general-purpose strong AI.

    Lately Kurzweil has come around to the view that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
    Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week I needed to translate "We need to meet up" into Chinese. Google translates it to (can't type Chinese in Slashdot?), meaning "We need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.

    Leaders in AI like Kurzweil and Hawkins are going to finally crack the AI problem. With Kurzweil's experience and Google's resources, it might happen a lot sooner than you all expect.

    [1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-fight-for-the-future-of-ai [tor.com]

  • by qbitslayer ( 2567421 ) on Thursday December 20, 2012 @10:05PM (#42355285)

    The problem with people like Kurzweil, Jeff Hawkins, the folks at the Singularity Institute and the rest of the AI community is that they have all jumped on the Bayesian bandwagon. This is not unlike the way they all jumped on the symbolic bandwagon in the last century, only to be proven wrong forty years later. Do we have another half a century to waste, waiting for these guys to realize the error of their ways? Essentially, there are two approaches to machine learning (the first is sketched numerically after the list):

    1) The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
    2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.
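
    For concreteness, here is the first approach in a few lines of arithmetic: the system holds a prior belief and updates it with Bayes' rule as evidence arrives. This is only a minimal numerical sketch; the numbers are arbitrary.

        # Minimal Bayesian update: P(H|E) = P(E|H) * P(H) / P(E).
        # All numbers are arbitrary, chosen only to show the mechanics.
        prior_h    = 0.5   # P(H): prior belief in hypothesis H
        like_h     = 0.8   # P(E|H): how likely the evidence is if H holds
        like_not_h = 0.2   # P(E|not H)

        p_e = like_h * prior_h + like_not_h * (1 - prior_h)  # total probability of E
        posterior_h = like_h * prior_h / p_e

        print(posterior_h)  # 0.8 -- the evidence shifts belief toward H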

    Luckily for the rest of humanity, a few people are beginning to realize the folly of the Bayesian mindset. When asked in a recent Cambridge Press interview [cambridge.org], "What was the greatest challenge you have encountered in your research?", Judea Pearl [wikipedia.org], an Israeli computer scientist and an early champion of the Bayesian approach to AI, replied: "In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own."

    Read The Myth of the Bayesian Brain [blogspot.com] for more, if you're interested.
