AI Facebook Privacy Social Networks Technology

Facebook AI Director Discusses Deep Learning, Hype, and the Singularity

An anonymous reader writes: In a wide-ranging interview with IEEE Spectrum, Yann LeCun talks about his work at the Facebook AI Research group and about the applications and limitations of deep learning and other AI techniques. He also talks about hype, 'cargo cult science', and what he dislikes about the Singularity movement. The discussion also covers brain-inspired processors, supervised vs. unsupervised learning, humanism, morality, and strange airplanes.

  • Quite a sensible guy.

    • by Anonymous Coward

      Yeah. This "Facebook AI Director" seems almost human...

      • by bouldin ( 828821 )

        I enjoyed what this guy had to say, too, but I was curious about what he is going to do for facebook. For that matter, what AI can do for facebook. The closest I could find was this:

        Facebook can potentially show each person on Facebook about 2,000 items per day: posts, pictures, videos, etc. But no one has time for this. Hence Facebook has to automatically select 100 to 150 items that users want to see -- or need to see.

        I thought the whole point of facebook was to keep up with your friends. *shrug*

        • by tgv ( 254536 )

          No, the whole point of facebook is to sell ads. Anything they can do to improve that, either by selling more ads or by making the end user more involved contributes to fb's selling power. So if people like automatic face recognition or link suggestions or whatever, that will support fb's business.

        • I enjoyed what this guy had to say, too, but I was curious about what he is going to do for facebook. For that matter, what AI can do for facebook. The closest I could find was this:

          Facebook can potentially show each person on Facebook about 2,000 items per day: posts, pictures, videos, etc. But no one has time for this. Hence Facebook has to automatically select 100 to 150 items that users want to see -- or need to see.

          I thought the whole point of facebook was to keep up with your friends. *shrug*

          This is a "yes, but..." kind of situation. Yes, the point is to keep up with your friends (and to pay for this by interjecting ads inbetween), but the problem is once you cross a certain threshold, trying to read a strictly chronological timeline on your screen can become quite impractical. To make matters worse, people who use Facebook can have dramatically different levels of output; while some folks will only ever post text or a picture when it's truly important and/or generally interesting, others post

  • by nospam007 ( 722110 ) * on Tuesday February 24, 2015 @02:13PM (#49120425)

    It would be really cruel if Skynet awakes and wants us to 'LIKE' it.

  • by gurps_npc ( 621217 ) on Tuesday February 24, 2015 @02:29PM (#49120527) Homepage
    Glad to hear from an intelligent person, rather than an obsessed 'futurist' that has mistaken wishful/paranoid thinking for scientific projections.

    I would have added that the concept of the 'singularity' assumes multiple 'facts' that are extremely unlikely. In part because if they were true, science would already be much farther along. Also in part because its proponents conflate different definitions of words, most often 'intelligence'. When AI people talk about intelligence, they are generally not using the word in the same way that a biologist, or worse, a priest, would.

    • Priest? (Score:2, Flamebait)

      by ArcadeMan ( 2766669 )

      You think believing in a magical omnipotent being living in the sky is a sign of intelligence?

      • by Anonymous Coward

        Buddhist priests don't believe in any gods at all. But they are still priests.

      • by itzly ( 3699663 )

        A priest is somebody that tells other people to believe. It's not required that the priest holds these beliefs himself.

        • by Anonymous Coward

          In practice, the primary offices of a priest are to:

          1) Provide consolation services, including grief counseling, visitations to the sick, and emotional support for people struggling with tough times.
          2) Provide moral guidance, in particular to people who find themselves embroiled in confusing and/or emotionally-charged situations.
          3) Provide family activities and family counseling services.
          4) Care for the financial, legal, and mundane needs of elderly people who don't have families to do this for them.
          5) Lead

          • by itzly ( 3699663 )

            Let me rephrase my comment as such: A priest is somebody that does items 1-7 on your list. It's not required that the priest holds any beliefs himself.

      • You think believing in a magical omnipotent being living in the sky denotes a sign of intelligence?

        I'm an atheist, but I think you're over-reacting. I think OP was just pointing out that human intelligence is indistinguishable from things like free will, morality and purpose. Maybe he should have said philosopher instead.

    • When most AI people are talking about artificial intelligence, they are talking about narrow "intelligence". This is why in Russell & Norvig's book they quickly move away from the term "intelligence" and instead speak of "agents" working in a particular "task environment", and whether the agents behave rationally or not. For example, a chess program may be able to win chess games against a grandmaster chess player, so we say this agent is performing rationally within this specific task environment. The
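      (As a minimal sketch of that agent/task-environment framing, not code from Russell & Norvig's book: an agent is just a mapping from percepts to actions, and "rationality" is judged only relative to its narrow task environment. The percept and action names below are illustrative.)

      from typing import Callable

      Percept = str
      Action = str

      def reflex_agent(rules: dict[Percept, Action],
                       default: Action = "noop") -> Callable[[Percept], Action]:
          """An agent is a function from percepts to actions; whether it acts
          'rationally' can only be judged against its specific task environment."""
          def act(percept: Percept) -> Action:
              return rules.get(percept, default)
          return act

      # A narrow task environment: a toy vacuum world.
      vacuum = reflex_agent({"dirty": "suck", "clean": "move"})
      print(vacuum("dirty"))  # -> "suck"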

      • The reason you don't hear a ton of interesting stuff coming from strong (general) AI research and interest in the field is limited is simple: strong AI is pretty damn useless until you reach the critical point where it matches (or really exceeds) human intelligence. An AI program with the effective intelligence of a worm/mouse/rat/monkey or whatever isn't interesting outside of academia.

        I suspect that when strong AI comes around it will be rather sudden for most people, who simply won't see it coming. I dou

        • An AI program with the effective intelligence of a worm/mouse/rat/monkey or whatever isn't interesting outside of academia.

          Of course it would be fucking interesting.

          This is just another excuse by "strong AI" supporters for not producing anything that a sane human being would consider proof of machine intelligence.

          and honestly human intelligence is really rather unremarkable no matter what some people like to believe

          If it's all so completely trivial and uninteresting, just show us all an artificial intelligence and stop wasting our time.

  • Cargo Cult Science (Score:5, Insightful)

    by Kunedog ( 1033226 ) on Tuesday February 24, 2015 @02:56PM (#49120713)
    If you've never read it before, Feynman's original essay is more worth your time (especially the part about the lab rats).
    http://neurotheory.columbia.ed... [columbia.edu]
    • by gweihir ( 88907 )

      Thanks for linking that article; it is very, very insightful. I found the term so descriptive of what is mostly going on in publicly visible AI research that it is staggering. The problem seems to be that many people cannot recognize more than the shape of a thing and are completely unaware that the shape in no way describes what the thing is. That scientists fall prey to the same delusion is rather tragic.

      As a scientist myself (now only a very small part of my time), I found that Feynman is e

  • There is a point where the first marginal AI (barely even an AI, one that wouldn't win any Turing contest, largely useless) will be created. But if the algorithm is evolutionary in nature, that could be the point where it improves itself, then improves itself again, and so on, until pretty much out of nowhere you have an indisputable AI.

    I regularly employ genetic algorithms and can say without hesitation that I have little idea how they got to where they got and the results are often fantastic. But my code is usually a
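    (For readers who haven't used one, a minimal generic genetic-algorithm loop looks roughly like the sketch below: fitness-based selection, crossover, and mutation on bit strings against a toy objective. It is illustrative only, not the parent poster's code.)

    import random

    def fitness(bits: list[int]) -> int:
        return sum(bits)  # toy objective: maximize the number of 1s

    def evolve(length: int = 32, pop_size: int = 50, generations: int = 200,
               mutation_rate: float = 0.01) -> list[int]:
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]            # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)     # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (random.random() < mutation_rate) for bit in child]
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best), best)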
    • by itzly ( 3699663 )

      You won't get AI by messing with some genetic algorithm for a day, trying to do something completely different. The search space is just too big to stumble upon AI accidentally.

    • by Anonymous Coward

      The problem with the idea of a recursive AI-designing AI is that there is no reason to believe that it can continue very far. The idea that it won't stabilize at some point is taken for granted. Why, after a few iterations, wouldn't the AI look at its current design and be unable to improve it, or only make diminishingly small incremental improvements that each take longer than the previous one? The idea of a singularity assumes there aren't any limits, like extrapolating the population growth of bacteria
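      (A toy numeric illustration of that extrapolation point, with made-up parameters: early on, unbounded exponential growth and logistic growth that saturates at a limit K look almost identical, and only later does the limit show up.)

      def exponential(n0: float, r: float, t: int) -> float:
          return n0 * (1 + r) ** t

      def logistic(n0: float, r: float, K: float, t: int) -> float:
          n = n0
          for _ in range(t):
              n += r * n * (1 - n / K)   # growth slows as n approaches K
          return n

      for t in (5, 10, 50, 100):
          print(t, round(exponential(1.0, 0.5, t), 1),
                round(logistic(1.0, 0.5, K=1000.0, t=t), 1))
      # The two curves agree early on; only the logistic one levels off
      # instead of exploding without bound.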

    • by gweihir ( 88907 )

      The problem with genetic algorithms is that they never produce good results. They usually produce about the worst possible solutions that still solve the problem. As AI is not needed for solving any limited real-world problem, genetic algorithms are unable to produce anything like AI. At the same time, genetic algorithms are completely unsuitable for solving any complex problems, because you cannot actually simulate them in practice. You are falling for the "cargo cult science" problem here.

    • Your post boils down to one of the standard "AI" arguments that intelligence is an essentially simple phenomenon that magically emerges when sufficient computer processing power is thrown at it.

      It is an unfalsifiable hypothesis, and therefore outside the realm of science.

    • Interesting... so if your genetic algorithm were written in some "simple" high-level language, then the second-level 'noodling' would be easier, as it would have fewer potential options to choose from. Thus, it could arrive at the destination in fewer generations, and the destination would be (hopefully) easier for us puny humans to understand.

      This approach means you need higher raw power to run the first and second level algorithms, and as such will need a higher minimum processing power to achieve it. Howev

    • So it all comes down to faith, does it?
      You need to read more Feynman. You are a classic cargo-cultist.
  • “machines that learn to represent the world.” That’s eight words.

    methinks seven :-)

  • Like listening to the preferences users have selected about silly things like what order they want items in their feed listed? I know you love these whiz-bang prediction algorithms, but they suck at predicting what I want. I'm really good at asking for what I want, and changing those settings to what you want will never ever do a better job than letting me pick. I promise.
