
Why Google Hired Ray Kurzweil (117 comments)

Posted by samzenpus
from the getting-the-scoop dept.
An anonymous reader writes "Nataly Kelly writes in the Huffington Post about Google's strategy of hiring Ray Kurzweil and how the company likely intends to use language translation to revolutionize the way we share information. From the article: 'Google Translate is not just a tool that enables people on the web to translate information. It's a strategic tool for Google itself. The implications of this are vast and go beyond mere language translation. One implication might be a technology that can translate from one generation to another. Or how about one that slows down your speech or turns up the volume for an elderly person with hearing loss? That enables a stroke victim to use the clarity of speech he had previously? That can pronounce using your favorite accent? That can convert academic jargon to local slang? It's transformative. In this system, information can walk into one checkpoint as the raucous chant of a 22-year-old American football player and walk out as the quiet whisper of a 78-year-old Albanian grandmother.'"
  • Awesome post (Score:3, Insightful)

    by iliketrash (624051) on Thursday December 20, 2012 @08:23PM (#42354285)

    OK--this is probably the stupidest and worst-informed /. post I have ever seen.

    • Re:Awesome post (Score:5, Informative)

      by spazdor (902907) on Thursday December 20, 2012 @08:31PM (#42354375)

      Many of the "language processing" problems the OP describes are actually "cognition" problems. If Google is serious about algorithmically translating from "academic jargon to local slang", then they're looking at writing an AI which can in some sense understand what is being said.

      I guess it's a good thing Kurzweil's on board.

      • Re:Awesome post (Score:5, Interesting)

        by Genda (560240) <mariet@ g o t . n et> on Thursday December 20, 2012 @09:04PM (#42354719) Journal

        You need to investigate the entire initiative Google is spearheading around its acquisition of Metaweb. They are building an ontology for human knowledge, and are ultimately building the semantic networks necessary for creating an inference system capable of human-level contextual communication. The old story about the sad state of computers' contextual capacity recounts the story of the computer that translates the phrase "The spirit is willing, but the flesh is weak." from English to Russian and back and what they got was "The wine is good but the meat is rotten."

        The new system won't have this problem, because it will instantly know that the reference comes from the Bible. It will also know all the literary links to the phrase, the importance of its use in critical historical conversations, the work of the saints, the despair of martyrs; in short, an entire universe of context will spill out around the phrase, and as it takes the conversational lead provided by the enquirer it will dance to deliver the most concise and cogent responses possible. In the same way, it will be able to apprehend the relationship between a core communication given in context 'A' and translate that conversation to context 'B' in a meaningful way.
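The semantic-network idea described above can be sketched as a toy triple store (a hypothetical miniature for illustration, not Google's actual Metaweb/Freebase system): facts about a phrase are stored as subject-predicate-object triples, and context "spills out" simply by collecting every triple attached to the phrase.

```python
# Toy triple store: each fact is a (subject, predicate, object) triple.
# The phrase, predicates, and objects here are illustrative assumptions.
triples = [
    ("the spirit is willing, but the flesh is weak", "originates_in", "Matthew 26:41"),
    ("Matthew 26:41", "part_of", "the Bible"),
    ("the spirit is willing, but the flesh is weak", "illustrates", "mistranslation anecdotes"),
]

def facts_about(subject):
    """Collect every predicate/object pair recorded for a subject."""
    return [(p, o) for s, p, o in triples if s == subject]

phrase = "the spirit is willing, but the flesh is weak"
for predicate, obj in facts_about(phrase):
    print(predicate, "->", obj)
```

A real ontology would chain such lookups transitively (phrase → verse → Bible), which is where the "universe of context" comes from.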

        Ray is a genius for boiling complex problems down into tractable solution sets. Combine Ray's genius with the semantic toy shop that Google has assembled, and the informational framework for an autonomous intellect will emerge. The real question is how you make something like that self-aware. There's another famous story about Helen Keller: before she had language or symbolic reference, she lived like an animal, literally a bundle of emotions and instincts. One moment, one utterly earth-shattering moment, there was nothing; then Annie Sullivan, her teacher, placed her hand in a stream of cold water and signed "water" in her palm. Helen understood... water. In the next moment Helen was born as a distinct and conscious being; she learned that she had a name, that she was. I don't know what that moment will look like for machines, I just know it's coming sooner than we think. I also can't be certain whether it will be humanity's greatest achievement or our worst mistake. That awaits seeing.

        • Dear Aunt, let’s set so double the killer delete select all.

        • Re:Awesome post (Score:4, Insightful)

          by TapeCutter (624760) on Thursday December 20, 2012 @11:40PM (#42355951) Journal
          I would say that Keller already "knew she was"; she just didn't have the mental tools to describe it to herself or others. The "internal dialogue" that gives us an ever-present narrative in a modern human's mind is impossible without language. If you get into a highly emotional state (such as rage or terror), the narrative is silenced and the senses are more acute; reflexive responses take over, adrenaline pumps through you, pain is suppressed. A champion boxer wins because he is in control of his emotions; if he loses that control for an instant, his opponent may very well lose an ear.

          What astounds me is the mild interest in IBM's Jeopardy-winning computer; to me it's comparable to the moon landing (which I witnessed). When you question the unimpressed, it's clear they don't understand the difficulty of the problem or the significance of the win. Sure, the game of Jeopardy is a restricted domain, but it's far broader than what's needed for a search engine that is "smart" enough to "understand" its user and ask pertinent follow-up questions. However, that's not where I see the biggest impact on society. The most significant impact will come from widely available and "cheap" expert systems that use this technology, an "academic in a box" that professionals can kick under their desk and consult at will (much like software developers use google as their default documentation, but with far fewer frustrations and dead ends). We already have machines that can organise and rummage through the world's knowledge far better than humans can with a manual system; for instance, software developers such as myself are constantly referring to google for advice on esoteric questions.

          What we are starting to see are machines that can make sense of that pile of factoids significantly better than humans can: machines that understand natural language (or at worst the subset that is human text), that can relate facts, discover new patterns, and create and test novel hypotheses to discover new facts within existing data. Sure, it takes 20 tons of air-conditioning alone for a "computer" to beat the speed and accuracy of the small blob of jelly inside the head of a Jeopardy champion, but the basic "AI"* problem has been well and truly cracked over the last decade; squeezing it into an iPhone or scaling it up to a totalitarian demigod is now an engineering problem.

          AI* - as opposed to what is known as the "hard problem of consciousness". The kind of AI that would pass the basic idea of a Turing test for the majority of people; you can claim that such a machine is "intelligent" or argue against it, but in a pragmatic sense it's irrelevant, since there is no agreed definition of "intelligence". Attributes such as intelligence and understanding are applied to computers because we don't have any other words that describe their behavior. Listen to any developer explaining a bug and you will hear expressions such as "it thinks X" or "it wants Y"; these are universal metaphors for discussing computers, not a description of reality. It's how humans communicate about the behavior of ALL objects (particularly animate ones) and is intimately related to mankind's highly evolved (and innate) "theory of mind".
        • by the gnat (153162)

          Combine Ray's genius with the semantic toy shop that Google has assembled, and the informational framework for an autonomous intellect will emerge. The real question is how you make something like that self-aware.

          Who says we have to make it self-aware to reach the Singularity? A sentient program is only one possible route; others include artificially and massively expanding human intelligence via brain-computer interfaces or bioengineering, uploading our consciousness into the computer (I find this less co

        • "That awaits seeing."

          Why just wait? Why just "[have commercial entities] build it [for commercial interests*] and see what happens"? We're smarter than that [singularity.org].

          * which is a pretty fucking crazy proposition for something of this magnitude. The only thing worse is military. "But there is no other way" -- then put it on hold, and find a way to actually get some discussion and responsibility going. You can crack down on heroin dealers, you can crack down on software pirates, you can regulate this as well. And com

        • by a_hanso (1891616)

          ...recounts the story of the computer that translates the phrase "The spirit is willing, but the flesh is weak." from English to Russian and back and what they got was "The wine is good but the meat is rotten."

          That's nothing. "Out of sight, out of mind" to Russian and back is "Invisible maniac".

        • by mcgrew (92797) *

          The real question is how you make something like that self-aware.

          That depends on what you mean by "self-aware". If you mean self-aware like higher order animals, it won't happen in an electronic device, although you'll be able to make it fool people into thinking it's self-aware. Computers are nothing like brains. Computers are nothing more than glorified abacuses.

          Now, when we start making Blade Runner replicants, then we'll build something self-aware. Sentience is a chemical reaction.

          • by mattack2 (1165421)

            What does it being a chemical reaction have to do with it? That's just an implementation detail.

            We're chemical-reaction based computers, as opposed to silicon semiconductor based.

            The fact that we're made of DNA, and are able to figure out and modify that DNA, is amazing. But how is that different from a far more advanced version of the check engine light going on in your car and the car figuring out what's wrong and replacing the broken piece?

            • by mcgrew (92797) *

              If your brain is a computer it's not only non-electric, it's analog. [wikipedia.org] Analog computers don't have rounding errors (electronic ones do suffer from noise).

              We're not "made of" DNA, DNA is just a blueprint. Your car's computer doesn't figure out what is wrong any more than your stereo figures out what station to tune to when you press the button.

              A digital computer is nothing more than a glorified abacus; an analog electronic computer is a glorified slide rule. How many beads do I have to put on an abacus before
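Whatever one makes of the abacus analogy, the rounding errors mentioned above are easy to demonstrate on a digital machine: binary floating point cannot represent 0.1 exactly, so repeated sums drift.

```python
# Summing 0.1 ten times in binary floating point does not give exactly 1.0,
# because 0.1 has no finite base-2 representation.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)          # 0.9999999999999999, not 1.0
print(total == 1.0)   # False

# The standard-library decimal module trades speed for exact decimal arithmetic:
from decimal import Decimal
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))  # True
```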

      • by Jezral (449476)

        We have something like that at VISL [visl.sdu.dk], but with zero statistical or machine learning or AI aspects.

        We instead write a few thousand rules by hand (largest language has 10000 rules) that look at the context - where context is the entire sentence, and possibly previous or next sentences - to figure out what meaning of a word is being used and what it attaches to.

        E.g.
        Input: "They're looking at writing an AI which can in some sense understand what is being said."
        Output: http://dl.dropbox.com/u/62647212/visl-eng.tx [dropbox.com]
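For readers unfamiliar with the approach, here is a toy sketch of the idea (the cue words and sense labels are hypothetical, and this is nothing like VISL's actual Constraint Grammar formalism): each hand-written rule inspects the surrounding sentence for contextual cues before committing to a word sense.

```python
# Hand-written disambiguation rules: for each ambiguous word, a list of
# (context cue words, sense label) pairs, tried in order.
RULES = {
    "bank": [
        (("river", "water", "shore"), "bank/riverside"),
        (("money", "loan", "deposit"), "bank/financial"),
    ],
}

def disambiguate(word, sentence):
    """Return the first sense whose cue words appear in the sentence."""
    words = set(sentence.lower().split())
    for cues, sense in RULES.get(word, []):
        if words & set(cues):
            return sense
    return word + "/unknown"

print(disambiguate("bank", "She rowed to the river bank"))   # bank/riverside
print(disambiguate("bank", "He took a loan from the bank"))  # bank/financial
```

Real Constraint Grammar rules also look at part-of-speech tags and neighbouring sentences, but the shape is the same: context in, sense out, no statistics.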

    • by qbitslayer (2567421) on Thursday December 20, 2012 @10:05PM (#42355285)

      The problem with people like Kurzweil, Jeff Hawkins, the folks at the Singularity Institute and the rest of the AI community is that they have all jumped on the Bayesian bandwagon. This is not unlike the way they all jumped on the symbolic bandwagon in the last century only to be proven wrong forty years later. Do we have another half a century to waste, waiting for these guys to realize the error of their ways? Essentially there are two approaches to machine learning.

      1) The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
      2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.
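A minimal sketch of approach (1), assuming the standard Beta-Bernoulli conjugate update: the learner treats an event as inherently uncertain and refines its probability estimate as 0/1 observations arrive.

```python
# Beta-Bernoulli update: start from a Beta(prior_a, prior_b) prior over
# P(event) and return the posterior mean after a run of 0/1 observations.
def bayesian_estimate(observations, prior_a=1, prior_b=1):
    """Posterior mean of P(event) given a sequence of 0/1 observations."""
    a = prior_a + sum(observations)
    b = prior_b + len(observations) - sum(observations)
    return a / (a + b)

# 7 successes out of 10 with a uniform Beta(1, 1) prior gives 8/12:
print(bayesian_estimate([1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))
```

Approach (2), by contrast, would search for an exact rule that explains the observations with no residual uncertainty at all.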

      Luckily for the rest of humanity, a few people are beginning to realize the folly of the Bayesian mindset. When asked in a recent Cambridge Press interview [cambridge.org], "What was the greatest challenge you have encountered in your research?", Judea Pearl [wikipedia.org], an Israeli computer scientist and an early champion of the Bayesian approach to AI, replied: "In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own."

      Read The Myth of the Bayesian Brain [blogspot.com] for more, if you're interested.

      • by VortexCortex (1117377) <(VortexCortex) ( ... -retrograde.com)> on Friday December 21, 2012 @02:52AM (#42356965)

        1) The Bayesian model assumes that events in the world are inherently uncertain and that the job of an intelligent system is to discover the probabilities.
        2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.

        Then you have AI (machine intelligence) researchers like myself who realize that the world isn't persistent, perfect or consistent, and neither must intelligent systems be. It's plainly obvious that any sufficiently complex cybernetic (feedback loop) system is indistinguishable from sentience because that's what sentience is (your mind is merely a sentient cybernetic system). I have but to look at the hierarchical neural networks and structures of the human mind to realize it's only a matter of time before the artificial system complexity eclipses our own minds'. The true flaw is top-down thinking. That's not the way complex life was made, that's not the way we achieved sentience, that's not the way to cause it to happen artificially either... It's the bottom up approach that works. You can't design sentient intelligences outright, but you can create self organizing systems that have the capacity to acquire more complexity, and evolve more intelligence. Not all machine intelligence systems have an end to the training process -- These don't fit into your bullshit 1) and 2) classifications.

        Also: The level of intelligence that emerges from any complex system is not artificial, it is real intelligence; That the medium is artificial is not important in terms of intelligence. I think "Artificial Intelligence" is a racist term used by chauvinists that think human intellect is far more special than it really is.

        Want to see something funny? Ask an AI researcher if they believe in Intelligent Design. If they say "Yes" then say, "So you think yourself a god?" If they say, "No" then say, "What do you call yourself doing then?". Those working in emergent intelligence will happily reply that they're modeling the same processes that we already know work in nature, the others will be in quite a state!

        • by gweihir (88907)

          Your work will fail, because your basic assumptions are flawed. Typical physicalist blindness. Competent AI researchers at least notice that they do not have a clue how to model what they want to build, and as such have a chance of success. You have it all figured out (wrongly) and have none.

            Competent AI researchers at least notice that they do not have a clue how to model what they want to build, and as such have a chance of success.

            Oh, they have a clue alright. They are convinced that the brain uses Bayesian statistics for perceptual learning. They are wrong.

            You have it all figured out (wrongly) and have none.

            Nope. Why do you put words in my mouth in such a dishonest way? You got a pony in this race? I've only figured out a small part of it but I've been at it for a long time and, lately, I'm

            • by gweihir (88907)

              Competent AI researchers at least notice that they do not have a clue how to model what they want to build, and as such have a chance of success.

              Oh, they have a clue alright. They are convinced that the brain uses Bayesian statistics for perceptual learning. They are wrong.

              I said "competent" ones. The only thing that so far has a theory that could deliver is automated theorem proving and its derivatives. It does completely fail in practice, though, due to exponential effort that cannot be bypassed. And no, Bayesian statistics is far too simple to model anything complex enough to require "understanding". Rather obvious, though. It cannot scale, as it is basically ye olde Perceptron in a new disguise. Pure desperation on the side of its proponents, because they have nothing at all to sh

        • by firecode (119868)
          While the bottom-up approach (like evolution) may work, it is just a black-box model (somewhat similar to neural networks), which is not going to be very scientific: we cannot understand how it works or how to improve it, unlike Bayesian models (which have flaws) or causal models (better).

          In other words, we need new causal+Bayesian probabilistic mathematics to process and form meaningful models from data - and it needs to be fitted to modern physics. The current limitations of handling causality with baye
        • by mcgrew (92797) *

          That the medium is artificial is not important in terms of intelligence. I think "Artificial Intelligence" is a racist term used by chauvinists that think human intellect is far more special than it really is.

          Racist? Chauvinist? Huh? Computers are neither a race nor a sex. And it isn't the intellect, it's the sense of BEING and it's not just humans, it's all animals. I sincerely doubt you guys will come up with that. Yes, you'll be able to fake it and make it look like the machine is self-aware (as you sho

      • by eulernet (1132389)

        "In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability;

        The Buddhist point of view (from the Advaita Vedanta) is that what we are is built upon all the causes/effects that we have encountered.
        And the first root is the sense of "I am".

        In other words, "I am" came first, then all the remaining derived from this.
        Buddhists call "karma" all the causes/effects.

        • by g0ath (2796355)
          Advaita Vedanta is Hindu, not Buddhist. Further, Buddhists hold that the "I am" (self or atman) is illusory, transient and not pre-existent.
          • by eulernet (1132389)

            Yes, you are right.

            But the "I" is also illusory in Advaita (which means nonduality).
            Maharshi explains that it's like the projection of a movie on a screen, it has no substance but it seems real.

      • by JDG1980 (2438906)

        Judea Pearl, an Israeli computer scientist and an early champion of the Bayesian approach to AI, replied: "In retrospect, my greatest challenge was to break away from probabilistic thinking and accept, first, that people are not probability thinkers but cause-effect thinkers and, second, that causal thinking cannot be captured in the language of probability; it requires a formal language of its own."

        Maybe so, but do we necessarily want to replicate this human trait in artificial intelligences? The human m

      • events in the world are inherently uncertain

        That's right. Hasn't the uncertainty principle [wikipedia.org] pretty much dictated that's how it works at a very small scale? Couple that with the butterfly effect [wikipedia.org], and it looks like we live in a non-deterministic universe.

        If you roll one die, you have a random chance of 1 through 6. Roll two dice, and the random factor is still there, but you'll probably get around 7. Roll enough dice and the variance of the random factor is minimized; the probability chart goes from a low arc to a sharp peak. It approaches a de
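The dice argument is easy to check with a quick simulation (a throwaway sketch; the trial count and seed are arbitrary choices): as the number of dice grows, the average stays near 3.5 while its variance collapses.

```python
import random

def average_roll(n_dice, trials=10000, seed=42):
    """Mean and variance of the per-roll average over many trials."""
    rng = random.Random(seed)
    totals = [sum(rng.randint(1, 6) for _ in range(n_dice)) / n_dice
              for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return mean, var

# Variance shrinks roughly as 1/n_dice (the law of large numbers):
for n in (1, 2, 10, 100):
    mean, var = average_roll(n)
    print(n, round(mean, 2), round(var, 3))
```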

      • by OdinOdin_ (266277)

        > 2) The competing model, by contrast, assumes that events in the world are perfectly consistent and that the job of an intelligent system is to discover this perfection.

        This is an interesting point, because isn't that how Darwinian evolution works? Except it wasn't an intelligent system making a decision; it was simply that bad choices and dead ends did not survive the process. During this process an organism's reward feedback loop developed. Why doesn't that simple model apply to describe what intelligence is

    • by gweihir (88907)

      Indeed. Nothing like that is possible without true AI, and that is not even on the distant horizon. But, as usual in the speech processing community, the visions are grand. They have been ripping off the public with this tune for about 30 years now. Of course, they will not deliver. They never have, because they cannot.

      As Kurzweil is actually a strong part of that fraudulent culture (his other scam is the "singularity"), he is exactly the wrong person to hire if you actually want results. Seems to me howeve

  • by Kenja (541830) on Thursday December 20, 2012 @08:24PM (#42354301)
    The main thing he says he will be working on is artificial intelligence that can understand "context". The goal is for Google search to be able to find pages etc. based on what you mean rather than on word counts of what you type.
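The gap between word-count matching and meaning can be illustrated with a toy sketch (the concept map below is hand-made and hypothetical, nothing like Google's actual ranking): a keyword search misses a page that uses a synonym, while matching at the concept level catches it.

```python
# Hypothetical mini concept map: surface words -> concept labels.
CONCEPTS = {"car": "vehicle", "automobile": "vehicle", "truck": "vehicle"}

def keyword_match(query, page):
    """Literal word matching: any query word appearing verbatim in the page."""
    return any(w in page.lower().split() for w in query.lower().split())

def concept_match(query, page):
    """Map words to concepts first, then intersect at the concept level."""
    to_concepts = lambda text: {CONCEPTS.get(w, w) for w in text.lower().split()}
    return bool(to_concepts(query) & to_concepts(page))

page = "affordable automobile insurance"
print(keyword_match("car", page))  # False: "car" never appears verbatim
print(concept_match("car", page))  # True: car ~ automobile at concept level
```

A real system would learn the concept map from data rather than hard-coding it, but the contrast is the point the parent comment is making.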
    • by Anonymous Coward on Thursday December 20, 2012 @08:48PM (#42354545)

      I've spent 10 years learning to think like a search engine. If this ruins everything and makes me spend another 10 years learning to think like a normal person again, just so the search engine can translate my thoughts correctly, I am gonna be pissed.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      That's great, but I wish they'd somehow find it in their hearts to turn back on exact word matching, even if it's obscurely hidden.

    • by Anonymous Coward

      He talked about how IBM's AI was able to win at Jeopardy -- because it had already read everything and remembered everything it read.

      His example was the pun that Watson identified when the humans couldn't:
      "a politician's rant and a frothy dessert."

      --> "meringue harangue."

      My guess is Google wants an AI capable of reading all those documents and email, so Google can ask "what's going to be the next hot development in this area, if certain missing pieces are found? what individual discoveries is the

      • by gweihir (88907)

        IBM does not have an AI. When they present to an expert audience, they represent Watson as a kind of expert system on steroids that does not have any insights or clues, but a lot of purely syntactic association capability. Such a tool is quite useful, but it is not AI.

        • There's a reason it's called *artificial* intelligence. You're right, it's just a lot of syntactic association capability. But it *looks* like intelligence when you observe it. Watson is not really "thinking" like a rational human being, though; that's why it's *artificial.* :)
          • by gweihir (88907)

            You do not get it. Faking intelligence is not the same as simulating intelligence. If Watson was (true/strong) AI, it would not be so lost on some questions.

    • by TubeSteak (669689)

      The goal is for Google search to be able to find pages etc. based on what you mean rather than on word counts of what you type.

      I'm still waiting for Google to stop filtering based on language and country.

      Google.com used to include results from all over the globe, but a while back, they started filtering so that google.com and google.[country] do not return the same results for the same search.

      I understand that they think American results are more relevant to American users, but in doing so, they've limited everyone's ability to see what the rest of the world has to say.

    • by gweihir (88907)

      As there is no working AI at this time, and not even any convincing theory how it could be done in practice (in theory, automated theorem proving solves everything, but at the price of exponential effort), he will fail. But he will burn a lot of resources on the way that could have been spent a lot better. I have no idea why Google hired this fraud.

    • by tlhIngan (30335)

      The main thing he says he will be working on is artificial intelligence that can understand "context". The goal is for Google search to be able to find pages etc. based on what you mean rather than on word counts of what you type.

      That's the user-facing aspect of it.

      The other thing is, Google is acquiring massive amounts of information on everyone, so they need an AI to help sort through it and figure out what ads you're supposed to see.

      After all, if Google can determine what you want from your searches, Goog

  • Ridiculous (Score:1, Insightful)

    by Anonymous Coward

    Ridiculous visions and promises that are certain not to see the light of day. :-)

    • Ridiculous visions and promises that are certain not to see the light of day. :-)

      Google translation: Amazing insight; I would like to subscribe to your newsletter!

    • Re: (Score:2, Insightful)

      by LordLucless (582312)

      Sorta like men walking on the moon. Reach, grasp, exceeding, etc.

      • by Sabathius (566108)
        Or the physicists that said heavier-than-air flight was impossible, right up until the Wright brothers achieved it.
  • by MisterSquid (231834) on Thursday December 20, 2012 @08:27PM (#42354337)

    That can convert academic jargon to local slang? It's transformative.

    That right there is going to be one hell of a translation. Presuming that all statements from one language can be translated into statements in a different language assumes (or seems to assume) that languages are isomorphic.

    However, there are things that cannot be communicated in the limited vocabulary available to, say, a young adult, compared to the expansive vocabulary of, say, a scholar of comparative literature. The same applies to concepts that can only be delivered in specialized medical terminology (disparagingly referred to as "jargon") and that cannot be communicated in layperson language.

    None of which is to say that some ideas (even very important ideas) cannot be translated across linguistic groups, but the idea that Google and Kurzweil are somehow going to produce the Internet equivalent of a Babel Fish is nothing more than a wish.

    What about converting academic theory to usable data and cutting out the fluff and filler?

      • Yeah; I'd be much more interested in a "summary" function. Most things that people say can be concisely summarized in under 2 minutes, no matter how long they talk for.

        • Slashdot's summary function:
          .

          You read it here first!

        • by gweihir (88907)

          Ok, try 2 minutes for these: Incompleteness, functional language, side-channel, entropy, Jordan normal form, ...

          Not possible without missing essential information. It takes years to understand some things.

      • by c0lo (1497653)

        What about converting academic theory to usable data and cutting out the fluff and filler?

        Challenge: convert "languages are isomorphic" into something that doesn't have "fluff and filler".

        I'll go one better and do the whole post. "Languages are isomorphic" is itself redundant in that sentence, so the whole phrase could be deleted if you want to delete "fluff and filler". The entire post without (arrogant) fluff and filler is: "Some languages can express ideas that others can't." That MAY be true. However, knowing that while modern computer languages LOOK different they are in fact generally Turing equivalent, it's reasonable to suspect human languages may be also. Consider x86 asse
          • by c0lo (1497653)

            I'll go one better and do the whole post. "languages are isomorphic" is itself redundant in that sentence, so the whole phrase could be deleted if you want to delete "fluff and filler".

            One's "fluff and filler" is another's treasure.
            For me, your post is absolute evidence in favour of the above statement: I see your post as a convoluted (i.e. with lots of "fluff") way to say
            I surmise all programming languages are Turing complete, and I suspect that natural languages are too. I'll "prove" my assumption by providing a single example based on programming languages and forcing the conclusion that it's the same for all natural languages

            Consider x86 assembly and Java. Totally different, right? They actually have EXACTLY the same expressive power, and here's proof.

            Below, an example of why specialized concepts an

          • by narcc (412956)

            They actually have EXACTLY the same expressive power

            What the hell does "expressive power" mean when applied to a programming language? Last time I checked, semantics were extrinsic, not intrinsic, and completely irrelevant to the computer! Computers lack intentionality.

            In a human language, translation needs to preserve semantics. This is a MUCH harder problem; as we all know, you can't get semantics from pure syntax.

            there's no reason to believe dialects of human languages can't be also.

            Except for the blindingly obvious reason above. I blame Kurzweil and his band of singularity nuts for all the recent confusion on issues li

    All concepts and statements are derived from the universe; you can break down concepts into simpler elements and reconceptualize them to bridge the gap. Most ideas that "don't translate" are poorly conceptualized: you can decompose poorly conceptualized ideas and meanings in other languages into more basic elements, then reconceptualize them more accurately so that you can communicate them. The same way we make up new words and concepts, you can do the reverse -- break down ideas into their simplest elements,

      • by CODiNE (27417)

        That's interpretation not translation.

        The most ironic thing about this whole thread: Bibles are translated, yet a scripture may be incomprehensible without certain cultural and historic knowledge. Interpretation goes much further than this.

        • "That's interpretation not translation."

          Interpretation _is required_ for translation; all translations are *acts of interpretation*. To translate one statement into another you have to be able to tell what it is first (an act of interpreting what you are seeing).

          Not only that, in this era we're dealing with interpreting languages that are living and have context. More importantly, I work in this area. You CAN reconstruct meanings, because all languages have a basic subset of functions that compose ALL concept

          • by CODiNE (27417)
            I do understand your point; I was just picking a post to argue semantics. :-)

            Dealing with awful interpreters is a common experience for me. Most have the habit of following the source language too closely and are practically transliterating. Strangely, even certified professionals have a hard time letting go of specific words or reformulating the sentence structure so that it makes sense in the target language. Yes, the language is the box and the idea is the substance inside... pull it out, throw away the box and pu
    • by Genda (560240)

      How do we teach people idiomatic content now? I know there are German phrases that translate into nonsense in English and vice versa, but you can translate the "meaning" of the idiom. The whole point of the new semantic engine being created by Google is that the relationships of words and groups of words will be preserved. When a doctor yells for Dabigatran in an ER because he thinks his patient is suffering from a nonlocalized DVT, his staff knows what's happening and how to respond. I (a person off the street) would not.

      • Yeah, lawyers need this sort of thing. If you yell for Dabigatran in the ER for treatment of an acute DVT, you're Doing It Wrong (it's for maintenance after initial treatment with heparin).

        But WTF - what do you need a 'semantic web' for when you can just type in "Dabigatran" in your search engine of choice and get the information that you desire?

        A semantic network would have all these things related and through the interaction of a human being would be able to provide the necessary information to explain what a sentence means.

        Maybe Kurzweil can explain this sentence to me. I sure can't figure out where you are going.

    • If you think medical jargon can't be translated into understandable language, I feel for you. Hopefully you'll get a doctor who does so. I normally do. Example: "manifesting acute folliculitis" means "has a pimple on their head". Some precision may be lost, certainly, but people who don't know medical jargon and want it in plain English probably don't need quite the level of precision the medical terminology allows. I'm a programmer; I can flummox someone with a bunch of jargon, or I can use technical vocabulary they can follow.
    • Medical or other specialized jargon definitely CAN be translated into 6th grade English. Here's the simple proof. Medical textbooks explain the terms. Every doctor/engineer etc. is taught those terms by having them translated into words they already know. For example, somewhere along the way someone tells the future doctor "tibia means shin bone". The fact that non-doctors can be taught the terms in medical school proves that for ALL such terms there must be a translation a la "tibia=shinbone". If there were no such translation, the terms could never have been taught in the first place.
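The "tibia=shinbone" argument above can be sketched as a toy glossary substitution. The glossary entries here are invented for illustration, not taken from any real medical dictionary:

```python
import re

# Hypothetical jargon-to-plain-English glossary (illustrative entries only)
GLOSSARY = {
    "tibia": "shin bone",
    "acute folliculitis": "an infected pimple",
    "myocardial infarction": "heart attack",
}

def de_jargon(text):
    """Replace each glossary term with its plain-English translation."""
    # Substitute longer phrases first so multi-word terms win over substrings
    for term in sorted(GLOSSARY, key=len, reverse=True):
        text = re.sub(r"\b%s\b" % re.escape(term), GLOSSARY[term],
                      text, flags=re.IGNORECASE)
    return text

print(de_jargon("The X-ray shows a fracture of the tibia."))
# -> The X-ray shows a fracture of the shin bone.
```

Of course, word-for-word substitution is exactly the shallow end of the problem; it says nothing about when the extra precision of the jargon actually matters.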
    • by gweihir (88907)

      That can convert academic jargon to local slang? It's transformative.

      That right there is going to be one hell of a translation.

      It is actually very easy. For example, to translate the language of calculus into standard language, just look at the countless volumes of books entitled some variant of "Calculus 1 + 2". Of course, reading and understanding them can take years and is well beyond the average person, but the translation is already there. No, sorry, it cannot be done more simply. You cannot understand academic jargon translated in any fashion unless you understand the concepts referred to.

      Executive summary: Another fraudulent AI promise.

  • To paraphrase Doug Lenat: machine translation is bogus.

    • by gweihir (88907)

      Not for very simple strongly structured things, like, say, a train timetable. For anything that requires understanding, it is not even clear whether the problem can be solved.

  • Did Kurzweil become some kind of expert in machine translation when I wasn't looking?

    • by gweihir (88907)

      Kurzweil is a fraudster. As such, he is clearly an expert at everything stupid but rich people are willing to give him money for!

  • by slacka (713188) on Thursday December 20, 2012 @09:31PM (#42354985)

    This is a great move for Google's AI research, since their current Director of Research, Peter Norvig, comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While these techniques have their uses in specific areas, they will never lead us to a general-purpose strong AI.

    Lately Kurzweil has come around to the view that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
    Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week I needed to translate "We need to meet up" into Chinese. Google translates it to (can't type Chinese in Slashdot?) a phrase meaning "We need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.
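The "meet up" failure mode described above is easy to reproduce with a toy phrase table: a purely statistical system picks the most frequent sense of "meet" and ignores context entirely. The probabilities below are made up for illustration:

```python
# Toy phrase table: sense -> corpus frequency (numbers invented for illustration)
PHRASE_TABLE = {
    "meet": {
        "satisfy (a requirement)": 0.7,   # "meet the deadline", "meet demand"
        "get together (in person)": 0.3,  # "meet up", "meet for lunch"
    }
}

def translate_word(word):
    """Pick the highest-probability sense, with no regard for context."""
    senses = PHRASE_TABLE[word]
    return max(senses, key=senses.get)

# Without context, "We need to meet up" gets the wrong, more frequent sense:
print(translate_word("meet"))  # -> satisfy (a requirement)
```

Real phrase-based systems condition on surrounding words, which helps, but the basic objection stands: frequency is a proxy for meaning, not meaning itself.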

    Leaders in AI like Kurzweil and Hawkins are going to finally crack the AI problem. With Kurzweil's experience and Google's resources, it might happen a lot sooner than you all expect.

    [1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-fight-for-the-future-of-ai [tor.com]

    • by raddan (519638) * on Thursday December 20, 2012 @10:10PM (#42355321)
      Yeah, but there's a reason why statistical models are hot now and why the old AI-style of logical reasoning isn't: the AI stuff only works when the input is perfect, or at least, planned for. As we all know, language doesn't really have rules, just conventions. This is why the ML approach to NLP is powerful: the machine works out what was probably meant. That's far more useful, because practically nobody writes well. When Abdur Chowdhury was still Twitter's main NLP guy, he visited our department, and guess what-- people even write in more than one language in a single sentence! Not to mention, in the old AI-style approach, if you fill a big box full of rules, you have to search through them. Computational complexity is a major limiting factor in all AI problems. ML has this nice property that you can often simply trade accuracy for speed. See Monte Carlo methods [wikipedia.org].
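The accuracy-for-speed trade the parent mentions is easy to see with the classic Monte Carlo estimate of pi: more samples cost more time but shrink the error. This is a sketch of the general principle, not an NLP system:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

# Fewer samples -> faster but noisier; more samples -> slower but tighter.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

The same dial exists in ML systems: sample fewer candidates, prune the beam, or quantize the model, and you pay in accuracy rather than in correctness of the algorithm.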

      As you point out, ML doesn't "understand" anything. I personally think "understanding" is a bit of a squishy term. Those old AI-style systems were essentially fancy search algorithms with a large set of states and transition rules. Is that "understanding"? ML is basically the same idea except that transitioning from one state to another involves the calculation of a probability distribution, and sometimes whether the machine should transition is probabilistic.

      I think that hybrid ML/AI systems-- i.e., systems that combine both logical constraints and probabilistic reasoning-- will prove to be very powerful in the future. But does that mean these machines "understand"? If you mean something like what happens in the human brain, I'm not so sure. Do humans "understand"? Or are we also automata? In order to determine whether we've "cracked AI", we need to know the answers to those questions. See Kant [wikipedia.org] and good luck.
      • Do humans "understand"?

        I guess this is anecdotal, but I'm human and I stopped understanding about halfway through your otherwise high-quality explanation.

    • by raftpeople (844215) on Thursday December 20, 2012 @10:18PM (#42355379)
      "Leaders in AI like Kurzweil and Hawkins"? Are you sure you're following who is making real progress in "AI" or at least machine learning? Go check out people like Hinton.
      • by slacka (713188)

        "Leaders in AI like Kurzweil and Hawkins"? Are you sure you're following who is making real progress in "AI" or at least machine learning? Go check out people like Hinton.

        Geoffrey Hinton’s work in back propagation and deep learning is an incremental improvement over the overly simplistic neural networks of the 90s, but "real progress"? Not even close. His focus on Bayesian networks has failed to deliver, just like the symbolic AI that preceded it. Until AI researchers like Hinton get over their obsession with mathematical constructs with no foundation in biology, we will never have true AI. To succeed, we will need to borrow from nature's engine of intelligence.

        • "He describes the brain as a massively parallel pattern recognition machine. At the core of the neocortex are millions of hierarchically arranged pattern recognition modules working together to model and predict our environment."

          Do you think there is a single person in this field that doesn't think that? Why does he need to "argue" that when it's pretty obvious to everyone?
          • They all think it. Thinking it isn't the issue. The issue is whether it's a useful model to try and replicate in software. I think it is, but I didn't see much of anything like that in the AI/NLP classes I took.

            I suspect part of the problem is, it's hard to come up with a test question that involves a neural net with more than three perceptrons.

        • Geoffrey Hinton’s work in back propagation and deep learning are an incremental improvement over the overly simplistic neural networks of the 90s, but "real progress", not even close. His focus on Bayesian networks has failed to deliver just like the symbolic AI that preceded it

          Your arguments are not based on facts. The truth is that Kurzweil likes Hawkins' approach, and Hawkins' Hierarchical Temporal Memory is a Bayesian network. It was designed from the ground up to use Bayesian statistics for perception.

    • by gweihir (88907)

      Dream on. Kurzweil is not a leader at anything; he is your basic fraudster, with a specialization in technology. What he can do well is burn through money. What he is fundamentally unable to do is deliver anything worthwhile, because he does not even understand the basic limitations of this universe. Of course he is also a stellar salesman, like any good fraudster.

    • Lately Kurzweil has come around to see that symbolic and bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.

      I don't get this. Hawkins' Hierarchical Temporal Memory is a Bayesian network. Dileep George, the mathematician and Bayes fanatic who co-founded Numenta, made sure of that. The last I heard, the Bayesian Brain hypothesis is something that both Hawkins and Kurzweil endorse.

  • by TheRealMindChild (743925) on Thursday December 20, 2012 @09:34PM (#42355001) Homepage Journal
    You ever wanted to know why Google wanted to look at your email and your instant messages, and transcribe your phone calls, all for free? This is why.
    • Yeah, not because of the commercial motive of selling AdWords, but because they want to give stroke victims their clarity of speech back.

      Are you serious? It would be nice if they could pull it off, but let's not pretend that Google is a philanthropic institution. These things are pure marketing, of the kind we see a lot but that never gets very concrete.
  • The "Dialectizer" [rinkworks.com].

    Here's an example. [rinkworks.com]

    I'm sure that he'll be very happy, there. Smart, eloquent guy, but not one I especially follow. There's a number of folks like that at Google. I doubt he's someone who would cause much damage, and he does bring a lot of funky intellectual PR to the joint.
  • How would Google Translate speak with an accent? As far as I'm aware, the accent is distinct from the actual language; it would need to learn by listening to phrases that have defined accents/dialects. Otherwise it wouldn't have that information using translation algorithms alone. If this is incorrect, please explain.
    • How would Google Translate speak with an accent? As far as I'm aware, the accent is distinct from the actual language; it would need to learn by listening to phrases that have defined accents/dialects. Otherwise it wouldn't have that information using translation algorithms alone. If this is incorrect, please explain.

      Probably the same way that speech recognition deals with accents. Words are made up of phonemes, which are far fewer in number than the words themselves. I imagine that most dialects have slightly different phonemes, or words that traditionally use different phonemes from other dialects. Once a known piece of speech is matched with the spoken phonemes, it can be compared to a list of dialects and accents and the correct one chosen for translation, whether converting to text or speech.
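The parent's guess can be sketched as a naive dialect classifier: score each dialect by how many of the observed phonemes appear in its inventory. The phoneme inventories below are invented for illustration, not real phonological data:

```python
# Hypothetical phoneme inventories per dialect (invented for illustration)
DIALECT_PHONEMES = {
    "rhotic_english": {"r", "ae", "t", "k", "aa"},
    "non_rhotic_english": {"ae", "t", "k", "aa", "schwa"},
}

def guess_dialect(observed_phonemes):
    """Return the dialect whose inventory overlaps most with what was heard."""
    def overlap(dialect):
        return len(DIALECT_PHONEMES[dialect] & set(observed_phonemes))
    return max(DIALECT_PHONEMES, key=overlap)

# A strongly pronounced final "r" points at the rhotic dialect:
print(guess_dialect(["k", "aa", "r"]))  # -> rhotic_english
```

Real systems model phoneme sequences probabilistically rather than as bags of symbols, but the basic idea of matching observed sounds against per-dialect profiles is the same.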

  • No thanks, will not get a click from this guy.

  • "[A technology that] turns up the volume for an elderly person with hearing loss."

    ...

    A hearing aid?

    • by gweihir (88907)

      "[A technology that] turns up the volume for an elderly person with hearing loss."

      ...

      A hearing aid?

      Not good enough. Hearing aids work well, are well understood and solve the problem in a satisfactory and cost-efficient way. How can you be satisfied with so little?

      #pragma sarcasm off

      I think your comment describes exactly what is going on here.

  • Just by voicing my words with a British accent!

    The true meaning of AI. ;)

  • What benefit is it to Google to hire a crackpot who is known for being high-profile and vocal about his crackpottishness, and who has made a career out of being a media personality promoting himself? Whatever benefit Google gets from his actual work is more than overshadowed by having their brand associated with a crackpot. Most businesses don't want to touch something toxic like that.

  • He got on the somewhat kooky Singularity and Immortality bandwagons, but he did a lot of the early work in optical recognition and voice interfaces pretty much on his own.
  • by PJ6 (1151747) on Saturday December 22, 2012 @10:05PM (#42372749)
    This is a late post and nobody will read it, but I will say it here anyway.

    Free translation between all languages is just a nice-to-have compared to the real thrust and purpose of their effort: a Human Intermediate Language, and the compilers/reflectors that go with it. It's a hard nut to crack, but this is a natural progression for Google. And applied at Google scale with Google resources... well, that could be scary powerful.

    I would guess they already have a proof of concept and some execs are shitting themselves over the possibilities. Strong A.I., or something that looks like it, is not too far behind. Ask the whole goddamn internet what all of human civilization thinks the meaning of life is, and actually get brilliant results back, with citations, in the language of your choice. This is why they brought Ray on.

What this country needs is a dime that will buy a good five-cent bagel.

Working...