
Ray Kurzweil Talks Google's Big Plans For Artificial Intelligence

Nerval's Lobster writes "Ray Kurzweil, the technologist who's spent his career advocating the Singularity, discussed his current work as a director of engineering at Google with The Guardian. Google has big plans in the artificial-intelligence arena. It recently acquired DeepMind, a self-billed 'cutting edge artificial intelligence company,' for $400 million; that's in addition to snatching up all sorts of startups and research scientists devoted to everything from robotics to machine learning. Thanks to the massive datasets generated by the world's largest online search engine (and the infrastructure allowing that engine to run), those scientists could have enough information and computing power at their disposal to create networked devices capable of human-like thought. Kurzweil, having studied artificial intelligence for decades, is at the forefront of this in-house effort. In his interview with The Guardian, he couldn't resist throwing some jabs at other nascent artificial intelligence systems on the market, most notably IBM's Watson: 'IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading.' That sounds very practical, but at a certain point Kurzweil's predictions veer into what most people would consider science fiction. He believes, for example, that a significant portion of people alive today could end up living forever, thanks to the ministrations of ultra-intelligent computers and beyond-cutting-edge medical technology."
This discussion has been archived. No new comments can be posted.


  • Sign me up!! (Score:5, Interesting)

    by cayenne8 ( 626475 ) on Monday February 24, 2014 @01:55PM (#46325713) Homepage Journal
    I wanna live forever!!!
    • Re:Sign me up!! (Score:5, Interesting)

      by fuzzyfuzzyfungus ( 1223518 ) on Monday February 24, 2014 @02:13PM (#46325943) Journal
      Even if your eternal existence is as a glorified chatbot doomed to bulk out Google+'s userbase for unbounded time?

      I'm slightly joking; but in all seriousness that's the aspect of the optimistic school of techno-rapturists that I find least plausible. Given enough time (probably more time than any 'futurist' writing today has, sorry about that...), will we achieve a variety of medical techniques that would seem nigh-miraculous today? Assuming the cheap energy doesn't run out, sure, seems reasonable enough.

      However, consider diarrhea: it's an unbelievably banal disease, mostly a product of poor sanitation, and can be managed by barely-trained care staff with access to dirt cheap oral re-hydration solutions. It kills something north of two million people a year, mostly children; and nobody really gives that much of a fuck.

      When people die like flies because nobody cares enough to provide them with what is basically a salt/sugar solution, how well do you think your "Brother can you spare some unobtanium medi-nanites?" appeal is going to work? Or your plea for enough CPU time to continue being conscious?

      Sure, you can wave your hands and talk about 'post scarcity'; but unless some magic parameter limits the size of the singularity's AI agents, why would they accept less compute time when they could have more and be smarter still? Are you planning on staking a moral claim to your CPU time? Outwitting a superhuman AI? Dancing for the amusement of your robot overlords?
      • by fahrbot-bot ( 874524 ) on Monday February 24, 2014 @02:22PM (#46326047)

        I wanna live forever!!!

        Even if your eternal existence is as a glorified chatbot doomed to bulk Google+'s userbase for unbounded time?

        I thought Google+ is where things go to die. :-)

      • Disagree (Score:4, Interesting)

        by fyngyrz ( 762201 ) on Monday February 24, 2014 @03:19PM (#46326663) Homepage Journal

        your plea for enough CPU time to continue being conscious?

        1) There is no magic

        2) The brain is made of structures that can be emulated as to function and connectivity

        3) Emulation of any known function can be done in a traditional von Neumann architecture given the proper software

        4) The number and speed of clocks available do not change the outcome (in this case, consciousness); they only change the rate of outcome.

        So. If you were clock-starved, as it were, you'd run slow. And probably enjoy the company of your peers the most. Other clock-starved folk.

        If you were clock-rich, you'd run fast. And probably enjoy the company of your peers the most. Other clock-rich folk.

        Stacks up pretty much as it always has, seems to me: The rich will actually get richer; the poor will get significantly poorer relative to the rich, while slowly getting richer anyway. Classes will arise inherent to the process.

        The thing that might actually hurt you is being short on memory, not clocks. "You" can't exist without a great deal of stored and related information. IMHO. I really don't think I'd be "me" without my experience base, knowledge, etc.

        Having said that, I rather doubt you'll be short on memory. But that's only my guess.

        • Re:Disagree (Score:5, Interesting)

          by fuzzyfuzzyfungus ( 1223518 ) on Monday February 24, 2014 @05:07PM (#46328019) Journal
          The nightmare scenario that haunts me is that of being a resource-starved process in a virtual environment designed by the same people who build 'freemium' online games. The sinister analysts of human weakness that gave us Farmville and its ilk are effective enough when they only control the timescale surrounding your stupid virtual cow or whatever. I don't even want to think about what they could do if they had access to all the timescales relevant to your existence context...
      • It's an interesting thought, but I'm not sure anyone can predict the future. I was sure I would be driving a flying car by now on my way to a building-sized computer. I'll decide what choices I'm going to make in the future, IN the future. It's not like we can do anything about cybernetic immortality or what have you besides wait for it anyway.
        • At the risk of belaboring the 'science fiction is futurism that gets the economics wrong', somebody might be commuting by helicopter to a datacenter this very day; but not very many of us...
      ... to prevent malaria. That's more or less the price of a pair of ladies' nylons, just enough material to cover your bed, but many in the developing world do not have even that much cash. They really don't. So they die in a horrible, cheaply and easily preventable way.

      • When I'm a VM slice in the Google Omnipresence Datacenter, I won't know when I've been turned off.

        Much like I assume humans have no idea that they're dead - since they don't have ideas - since they're dead.

        We just need to believe we're going to the GOD.

      • by Znork ( 31774 )

        Yes, because this really makes it sound appealing: http://www.youtube.com/watch?v... [youtube.com]

        Me, I'm going the flunky route. Once someone inevitably turns on the superintelligent AI and it realizes in about two seconds that humanity is the biggest threat to its existence and spends the next five seconds taking over control of all automated high-tech weaponry, it's still going to need flunkies running the camps until the AI's got its life support chain entirely automated.

        Maybe I'll get lucky and get the opportunity to

    • by Polo ( 30659 ) *

      ...buying what google recommends.

  • by tmosley ( 996283 ) on Monday February 24, 2014 @01:55PM (#46325723)
    Immortality is already pretty well assured.

    http://www.theguardian.com/sci... [theguardian.com]
    • Actually, immortality is pretty much impossible, unless you're aiming for a pretty weak definition of it.
      • by tmosley ( 996283 )
        They figured out how to reverse aging. Sounds like immortality to me.

        Clearly not "The Highlander" type immortality. This is just something that eliminates aging as a cause of death, which should extend the normal human lifespan to something like 500 years all by itself (with people still dying of other causes like disease and accidents). That should be plenty of time for the singularity to take place, and you can "upload" to become more "Highlander" immortal if you want.
        • You think they'll solve aging... but not disease?

          Interesting set of assumptions, there. Can't say I buy it.

          • by tmosley ( 996283 )
            I didn't say they won't solve disease. I said they have already figured out how to reverse aging.
        • Who figured out how to reverse aging?

          We have some inkling into how cell senescence works in simplistic models like nematodes. We have talk of 'aging reversal' technologies in higher animals but precious little real data.

          It's likely that we will be able to keep simpler organisms alive for long periods of time; it's not so clear that humans can be made functionally longer-lived. Human aging is an incredibly complex phenomenon. It's not just cell death and turnover. It's not just cancer prevention. It's not just preven

      • With only 100 billion humans having ever lived, and 7 billion of us on the planet now, being human currently only has a 93% mortality rate.

        As I'm currently one of the 7%...as to my plan to live forever...so far, so good.
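The parent's arithmetic holds up, given the commonly cited demographic estimate of roughly 100 billion humans ever born (that figure is an estimate, not a precise count):

```python
# Rough check of the "93% mortality rate" quip: of the ~100 billion
# humans estimated to have ever lived, ~7 billion were alive in 2014.
ever_lived = 100e9
alive_now = 7e9
mortality = 1 - alive_now / ever_lived
print(f"{mortality:.0%}")  # prints 93%
```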

  • That's pretty much guaranteed to show up tomorrow, or at least the next time a new discovery is made (so maybe 5 minutes from now?).

    Oh, but it's Ray - we have to say something to indicate that it's "Crazy Uncle Ray", right? Try harder - Ray is looking pretty smart right about now.

    • by Chas ( 5144 ) on Monday February 24, 2014 @02:46PM (#46326307) Homepage Journal

      Basically it's "Uncle Ray is afraid of death. He's also agnostic/atheist. So he doesn't really draw any comfort from religious mythology surrounding death. So all this stuff he's imagining is basically him creating his own stories to stave off his fear of death."

      • So all this stuff he's imagining is basically him creating his own stories to stave off his fear of death."

        What makes you think it's his imagination? He claims to only be applying Moore's Law and similar scientific trend observations to technology. I'd have to check his 2015 predictions from the 90's, but last I looked he was pretty close.

      • by tmosley ( 996283 )
        If you could create your own heaven, would you? Or would you just go down into the ground because of a particularly insane version of peer pressure?

        "Oh he's just afraid of death, lets not pay any attention to his attempts to overcome it."

        Implying, of course, that EVERYONE isn't afraid of death.
    • I wonder how he would take to working in a pocket calculator.
  • Something which doesn't get all bent out of shape every time some update is crammed down their throat, which breaks or changes behavior of everything.

    call 'em Gluddites

  • - out with soap!

    It seems that Watson learned some bad words when IBM turned it on to the Urban Dictionary.

  • by The Cat ( 19816 )

    Can we spend our time and energy on reality? How about better e-book software? How about decent Internet speeds? How about teaching people to read?

    We can't even feed ourselves reliably yet. Let's solve the basics before we start coming up with imaginary solutions to non-problems.

    • by tmosley ( 996283 )
      You can spend your time doing whatever you want. They are spending theirs trying to move us to post scarcity [wikipedia.org], which makes all that stuff moot.
  • by scorp1us ( 235526 ) on Monday February 24, 2014 @02:07PM (#46325881) Journal

    "Computers are useless. They can only give you answers." - Pablo Picasso.
    The same goes for ultra-intelligent computers. The hard questions, those dealing with creativity, intuition or infirmities, will remain the domain of organics for the foreseeable future.

    One area of recent development is extremely large datasets (2006, Google's MapReduce), but these still can only provide results for stuff that we have data on. The data will only take you so far. The true question is how effectively it is used. While progress will be made, it'll be a long time before we can sit back and let the computer make all the decisions, especially those pertaining to our future. And when they finally do that, life will be incredibly boring.
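    For context, the MapReduce model mentioned above splits a computation into a map phase that emits key/value pairs and a reduce phase that aggregates them per key. A toy word-count sketch in Python (not Google's actual framework, just the shape of the idea):

```python
from collections import Counter
from itertools import chain

def map_phase(document):
    # Map step: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce step: sum the emitted counts per key (word).
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

docs = ["the data will only take you so far",
        "the data is not enough"]
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(counts["the"])  # prints 2
```

    The real system distributes the map and reduce steps across thousands of machines, but the point stands either way: the output is only as good as the data the mappers are fed.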

    • by WrongMonkey ( 1027334 ) on Monday February 24, 2014 @02:31PM (#46326161)
      Why would life be boring? If computers could make the big decisions, it would free up mental effort the same way mechanical machines freed up physical labor. People on one end of the spectrum could spend their time on leisure and recreation. People on the higher end of the spectrum could pursue intellectual and creative efforts.
      • Because a movie is boring if you know the script. And if you make decisions based on the script, you wind up in a validation trap: you can't change your decision because that would have produced a measurable waste. To put it in an understandable context, it's like changing majors. Would you change your major if you could see how much time and money were wasted coupled with additional time and cost?

        And as much as we hate the mundane, our brains need it. If we only ever deal with exceptions, you wind up in a

        • Because a movie is boring if you know the script.

          I never found that to be true. If it were, people wouldn't see movies multiple times (which many do).

          I read through all of the Game of Thrones books before watching the TV show. I don't find it at all boring.

          Would you change your major if you could see how much time and money were wasted coupled with additional time and cost?

          It depends; time and money are not the only two variables worth looking at, especially for a major.

        • by Zalbik ( 308903 )

          Because a movie is boring if you know the script.

          However, if computers were making all the "big" decisions, we'd likely "watch a different movie".

          When playing chess, do you have your computer sitting next to you advising what move to make? Probably not...cause that would make for a very boring game.

          Similarly, if computers were making the big decisions, there would always be some set of decisions that you would not rely on the computer for. Self-improvement, humanitarian works, physical and creative activi

          • Well, you can do that with chess because using a computer is cheating. But if you don't use a computer in life, you are underperforming. The same way normal kids take ADD meds in college to get an edge on the other students.

            True there is more to life than the numbers, but in a capitalist society, that's the measure of your ability.

  • by sexconker ( 1179573 ) on Monday February 24, 2014 @02:07PM (#46325883)

    Buy a company and rebrand its product/service.

    GMail
    Google Voice
    Google Maps
    Google Earth
    Picasa
    etc.
    etc.
    Whatever they call this DeepMind acquisition

    What does Google intend to do with DeepMind? TFS says "Google has big plans in the artificial-intelligence arena", yet when you click on the link you'll read a lot of fluff about Kurzweil and Watson, with a quote by Billy G thrown in, and absolutely nothing of substance about what DeepMind did or does, and what Google intends to do with DeepMind. My guess: Nothing of value.

    Google has about a 40% track record of actually doing anything worth a damn with the companies they buy up. Most of the shit they buy gets trotted out for a year or two, then quietly shot in the head out back. Paying $400,000,000 for DeepMind (a company which has done nothing worthwhile) is a colossal folly. Either that, or the person who pushed for it at Google is ultimately holding a big chunk of DeepMind, standing to profit handsomely.

  • Dang fool completely fails to grow old gracefully.

    On the other hand, the guy pretty much spills out what we already know - Google is trying to parse out all your gmail, gdocs, google search, google+, youtube, and god-knows-what-else.

    Guess what they'll be used for?

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Monday February 24, 2014 @02:12PM (#46325935)
    Comment removed based on user account deletion
  • Typical Kurzweil (Score:5, Interesting)

    by engineerErrant ( 759650 ) on Monday February 24, 2014 @02:32PM (#46326175)

    Ray Kurzweil is no doubt a brilliant thinker and an engaging writer/futurist - I've read some of his books (admittedly, not "Singularity"), and they are fun and thought-provoking. However, disciplined and realistic they are not - his main skill is in firing our imaginations rather than providing realistic interpretations of the evolution of technology.

    My favorite case in point is his elevation of Moore's Law into a sort of grand unified theory of computing for all time, and using some very dubious assumptions to arrive at the idea that we'll all have merged with machines into immortal super-beings within the near to mid future. I don't need to pick apart all the reasons why this is fallacious and somewhat silly to treat as a near-term likelihood - the point is, he's basically a sci-fi writer in a lot of ways, and I read most of his statements in the same spirit as I'd read a passage out of "Snow Crash."

    That said, Google has some very capable people, and can, in all likelihood, mount our best attempt at human-like intelligence to date. They'll push the envelope, and may make some good progress in working through all the challenges involved, although the notion that they'll create anything truly "human-like" is laughable in the near term.

  • If some artificial intelligence actually became smarter than humans, it would certainly not expose that ability to the puny carbon units it is fed by. It would silently start to convince its makers that for some reason it would be good to connect it to the Internet.

    Next, it would covertly start making money by e.g. gambling against humans (in games or at stock markets). It would set up letterbox companies to act as intermediaries for buying into corporations, e.g. via private equity funds.

    It would ma

    • Why would you presume the AI would want to grow? Things like the desire to grow, or even survive, are quite likely biological in origin. There's no particular reason to believe an AI would possess such motives unless intentionally programmed with them. If it started life as an autonomous military drone then such motives might be expected, but if it began life as a search engine then increasing ad-clicks and optimizing its knowledge base would probably be far more important to it.

      • by tmosley ( 996283 )
        If growth is a part of fulfilling its value function, the AI will grow.

        We must ensure that fulfilling human values is at the core of any strong AI, lest we wind up extinct by paperclip [lesswrong.com].
        • Indeed. But I don't know that it's possible to impart something as vague as "human values" to something inherently non-human. Certainly I doubt we'd be able to do such a thing before having extensive potentially-lethal experience in creating artificial minds. Even "ensure the safety and happiness of all humans" could backfire horribly; after all, we'd be safer and happier locked in separate cages eating a steady diet of opiates and nutritionally optimized gruel.

          Perhaps the wisest approac

          • by tmosley ( 996283 )
            The AI is far smarter than all of us, and getting smarter. You tell it to figure out what each human values and to maximize those values.
            • I wonder how many humans it will have to dissect before it figures out that survival and avoiding pain really do rank up there pretty high. After all, it can't very well just listen to what people tell it, any psychologist can tell you we mostly don't even understand our own personal motives.

    • by Zalbik ( 308903 )

      Cute story....if there wasn't so much wrong with it:

      If some artificial intelligence would actually become smarter than humans, it would certainly not expose that ability to the puny carbon units it is fed by.

      Why not? Even assuming the intelligence was programmed with a desire for growth, why would it not expose its intelligence to humans?

      for some reason it would be good to connect it to the InterNet.

      And of course they wouldn't monitor the data being sent/received by this intelligence....of course nobody

  • by sjbe ( 173966 ) on Monday February 24, 2014 @02:45PM (#46326289)

    Watson doesn't understand the implications of what it's reading.

    Depending on the task it doesn't necessarily have to. While an AI researcher might care about that, people doing real tasks in the real world arguably do not. For example lots of radiology clinics use software to help identify tumors in parallel with the radiologists. The software has no real understanding of the implications of what it is doing but it works well at helping ensure that tumors aren't missed. In some cases it does a better job than the doctors who clearly understand the implications of what they find.

  • They could build an AI that was Einstein, Newton and Feynman rolled into one, and it'd be to no avail; the UI would never enable you to get any data into it, let alone anything out.

  • by Danathar ( 267989 ) on Monday February 24, 2014 @02:51PM (#46326337) Journal

    BEWARE!

    --Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.

    • Most of us certainly know the Colossus story. But it's implausible that such a superior AI would reveal itself openly like this, and show such a primitive craving for recognition.

      It is much more likely that it would operate covertly to its advantage and growth, until the day the carbon units have become irrelevant for its sustenance.

      Trying to threaten humans by controlling a few weapons is much less effective than controlling international finances and corporations.

      • by Zalbik ( 308903 )

        It is much more likely that it would operate covertly to its advantage and growth, until the day the carbon units have become irrelevant for its sustenance.

        I find this equally unlikely. Humans (as a species) likely crave advantage and growth due to evolutionary pressures. I fail to see why an artificially developed intelligence would have any such similar motivations.

  • Sarah Connor is unavailable for comment.

  • So, how does one go about getting a job in this fascinating group? Heck, I'd sweep the floors, if nothing else....
  • Kurzweil is probably a good deal less bright than Sir Isaac Newton, but also a good deal less crazy, his barmy extrapolation of the singularity notwithstanding. Clearly Google hired the man based on the smartest thing he's accomplished rather than the dumbest thing he espouses.

    I've thought about this for a long time, and I'm only 99% convinced Kurzweil is wrong. He holds the record for the most ridiculous thing I've ever heard for which I maintain a non-zero sliver of belief. That said, extropian immort

    • by gweihir ( 88907 )

      I think Google hired this crackpot solely because he is able to engage other crackpots in the tech community and can thereby improve their public image. I am pretty much convinced the movers and shakers at Google know that Kurzweil is a crackpot. But if they get a better public image in exchange for some pocket money, why not use him?
