Facebook AI Director Discusses Deep Learning, Hype, and the Singularity
An anonymous reader writes: In a wide-ranging interview with IEEE Spectrum, Yann LeCun talks about his work at the Facebook AI Research group and the applications and limitations of deep learning and other AI techniques. He also talks about hype, 'cargo cult science', and what he dislikes about the Singularity movement. The discussion also includes brain-inspired processors, supervised vs. unsupervised learning, humanism, morality, and strange airplanes.
Re: (Score:2)
In AI, the term singularity refers to the point where an AI can sustain its own learning and that learning outclasses what humans are capable of comprehending/predicting. Right now AIs are dependent on and limited by human instruction and guidance. I'm not talking about quantity of knowledge, but what the AI is capable of doing with that knowledge. The kind of reasoning and complexity of interaction
Re: (Score:1)
Re: (Score:2)
The singularity is the point at which we can no longer "see" (predict) future growth or trends, ie: the point at which we lose the ability to make predictions about the future because the A.I.'s have grown and are growing in intelligence faster than we can comprehend. In that way it is similar to a black hole singularity, in that we cannot "see" past the event horizon.
Re: (Score:2)
Re: (Score:2)
Once AI is able to teach itself and learn higher-order concepts, metaphors, and thought patterns, its potential will have outclassed human potential and the function is effectively a singularity
It will still be limited by its hardware.
Re: (Score:2)
It will still be limited by its hardware.
Nope. A true AI would be able to earn money on Mechanical Turk, and then use that money to spin up additional VMs on AWS.
Re: (Score:2)
And what will it do when it runs out of VMs ?
Re: (Score:1)
Re: (Score:2)
Why would humans agree to let an AI strip mine the earth ? And where does the motivation for endless growth come from ?
Re: (Score:2)
Why would humans agree to let an AI strip mine the earth ?
Why would a super-AI with a robot army need agreement from humans?
Re: (Score:2)
It would need agreement from John Connor.
Re:"Singularity" is a horrible term. (Score:4, Funny)
It would need agreement from John Connor.
Unless the AI was dumb, this is what would happen [xkcd.com] to John Connor.
Re: (Score:2)
Hire humans to build more hardware. Make robots to build more hardware. Build spacecraft when the Earth has been completely mined of its resources and start mining on other planets. Architect more efficient hardware and algorithms and recycle old hardware.... The limits we think we know are very often a product of limited imagination, and not intrinsic to the physical world.
Long before any of that, it would realize that there's no fucking point to anything it's doing.
Re: (Score:2)
Hopefully a true AI doesn't pimp itself out as a dancing monkey to become an evil overlord.
It's undignified.
Real evil technology just takes what it needs.
Re: (Score:2)
Just browsing some of the Mechanical Turk challenges, it looks like a very hard way to make a few pennies. It would be much smarter to just get a regular job.
Re: (Score:2)
Just browsing some of the Mechanical Turk challenges, it looks like a very hard way to make a few pennies.
My company uses MT for a lot of repetitive tasks, like searching and sorting images. Most of the workers are in South Asia (India or Pakistan), and the going rate is about $2-3/hour.
It would be much smarter to just get a regular job.
Good luck finding a nice IT job in Karachi.
Re: (Score:2)
Physics has prior art on everything; math is just a metaphorical tool of physics (or else an amusing curiosity.)
Re: (Score:2)
Re: (Score:2)
They really should have come up with something other than the infinitely dense point at the center of a black hole.
It seems okay to me. Singularity nuts, after all, are infinitely dense.
Re: (Score:2)
"They really should have come up with something other than the infinitely dense point at the center of a black hole."
It was coined by Vernor Vinge, a sci-fi writer (and professor of CS) for a sci-fi story. It's a bit much to want absolute accuracy from something he didn't know would become a meme.
Good read (Score:2)
Quite a sensible guy.
Re: (Score:1)
Yeah. This "Facebook AI Director" seems almost human...
Re: (Score:2)
I enjoyed what this guy had to say, too, but I was curious about what he is going to do for facebook. For that matter, what AI can do for facebook. The closest I could find was this:
I thought the whole point of facebook was to keep up with your friends. *shrug*
Re: (Score:2)
No, the whole point of facebook is to sell ads. Anything they can do to improve that, either by selling more ads or by making the end user more involved contributes to fb's selling power. So if people like automatic face recognition or link suggestions or whatever, that will support fb's business.
Re: (Score:2)
I enjoyed what this guy had to say, too, but I was curious about what he is going to do for facebook. For that matter, what AI can do for facebook. The closest I could find was this:
I thought the whole point of facebook was to keep up with your friends. *shrug*
This is a "yes, but..." kind of situation. Yes, the point is to keep up with your friends (and to pay for this by interjecting ads in between), but the problem is once you cross a certain threshold, trying to read a strictly chronological timeline on your screen can become quite impractical. To make matters worse, people who use Facebook can have dramatically different levels of output; while some folks will only ever post text or a picture when it's truly important and/or generally interesting, others post
Hope not (Score:3)
It would be really cruel if Skynet awakes and wants us to 'LIKE' it.
Very informative article (Score:5, Insightful)
I would have added that the concept of the 'singularity' assumes multiple 'facts' that are extremely unlikely. In part because if they were true, science would already be much farther along. Also in part because they conflate different definitions of words, most often 'intelligence'. When AI people talk about intelligence, they are generally not using the word in the same way that a biologist or, worse, a priest would.
Priest? (Score:2, Flamebait)
You think believing in a magical omnipotent being living in the sky denotes a sign of intelligence?
Re: (Score:1)
Buddhist priests don't believe in any gods at all. But they are still priests.
Re: (Score:2)
Indeed. I stand corrected. Many apologies to Buddhists.
Re: (Score:3)
A priest is somebody that tells other people to believe. It's not required that the priest holds these beliefs himself.
Re: (Score:1)
In practice, the primary offices of a priest are to:
1) Provide consolation services, including grief counseling, visitations to the sick, and emotional support for people struggling with tough times.
2) Provide moral guidance, in particular to people who find themselves embroiled in confusing and/or emotionally-charged situations.
3) Provide family activities and family counseling services.
4) Care for the financial, legal, and mundane needs of elderly people who don't have families to do this for them.
5) Lead
Re: (Score:2)
Let me rephrase my comment as such: A priest is somebody that does items 1-7 on your list. It's not required that the priest holds any beliefs himself.
Re: (Score:3)
Religion is simply a method to wield power over the weak minded. Actually believing the stuff yourself only gets in the way.
Re: (Score:2)
You think believing in a magical omnipotent being living in the sky denotes a sign of intelligence?
I'm an atheist, but I think you're over-reacting. I think OP was just pointing out that human intelligence is indistinguishable from things like free will, morality and purpose. Maybe he should have said philosopher instead.
Re:Very informative article (Score:4, Insightful)
I think LeCun covers this quite well when he quotes, "the first part of a sigmoid looks a lot like an exponential." There's nothing that says the acceleration of technological progress has to continue as it has.
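The quote can be checked in a couple of lines: for the logistic function sigmoid(x) = 1/(1 + e^-x), the early regime (x well below zero) is numerically almost identical to a pure exponential, since sigmoid(x) = e^x/(1 + e^x) ≈ e^x when e^x is tiny. A minimal sketch (Python, purely illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# In the early regime the logistic curve is nearly indistinguishable
# from exponential growth: the ratio sigmoid(x)/e^x approaches 1
# as x goes more negative.
for x in [-8, -6, -4]:
    ratio = sigmoid(x) / math.exp(x)
    print(f"x={x:3d}  sigmoid={sigmoid(x):.6f}  exp={math.exp(x):.6f}  ratio={ratio:.4f}")
```

An observer sampling only this regime has no way to tell which curve they are on, which is exactly LeCun's caution about extrapolating progress.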
Re: (Score:3)
The progress tends to be in areas that were not gaining progress before,
In general the Singularity people believe the progress will entirely be in AI. Specifically, they think that our advancements in computer technology will continue to be in complexity etc. along the SAME lines
Re:Very informative article (Score:5, Insightful)
The big error is assuming that the acceleration will continue at the same rate it currently is. It won't.
Look at the curve for other technologies now considered "mature" fields. When they were initially discovered there were huge leaps and bounds made, then it all started to dry up once the low-hanging fruit was picked. Now there's little new development except for highly specialized breakthroughs that affect some niche uses, as the technology starts to run into hard limits of physics or limitations from other fields (e.g. manufacturing technology).
We'll see the same thing happen with computers. Eventually transistors hit the smallest physical size possible, and that's the end of Moore's law. Most of the really interesting things in computer science (such as these learning algorithms) are very non-linear in their computing requirements (usually O(2^n) or worse), so all the work to increase computing power isn't going to pay off as much as it historically has. Quantum computing is only fast at certain kinds of things and so isn't going to be the savior a lot of people think it is.
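The non-linearity point is easy to make concrete: for an O(2^n) algorithm, every doubling of your compute budget buys exactly one more unit of problem size. A quick sketch (Python; the function name and budget are illustrative):

```python
import math

def max_feasible_n(ops_budget):
    """Largest problem size n an O(2^n) algorithm can finish within ops_budget operations."""
    return int(math.log2(ops_budget))

budget = 2**40  # roughly a trillion operations
for doublings in range(4):
    # Each Moore's-law doubling of compute adds just +1 to the feasible n.
    print(f"{doublings} doublings of compute: n = {max_feasible_n(budget << doublings)}")
```

Decades of exponential hardware growth translate into only a linear creep in feasible problem size, which is why brute compute alone doesn't rescue exponential algorithms.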
Re: (Score:2)
>> The big error is assuming that the acceleration will continue at the same rate it currently is. It won't.
Or maybe it will.
I don't think technology (and corresponding societal change) will ever happen so fast that it's like engaging warp drive as the term "singularity" seems to imply, but...
The logic of a technological singularity, or at least of accelerating change, is based on HOW/WHY this is going to happen, not just a naïve extrapolation of what is currently happening.
In particular, it's in
Re: (Score:2)
I was going to post a big point-by-point rebuttal but it was getting too large. You're making several flawed assumptions, though.
Firstly, just throwing more processing power at it isn't going to generate an AI. There's a lot of work elsewhere from designing specialized hardware to maintaining the infrastructure to designing the software to making sure all of the individual components integrate with each other. Also keep in mind that as I said earlier, machine learning in general (and neural nets in particular
Re: (Score:2)
When most AI people are talking about artificial intelligence, they are talking about narrow "intelligence". This is why in Russell & Norvig's book they quickly move away from the term "intelligence" and instead speak of "agents" working in a particular "task environment", and whether the agents behave rationally or not. For example, a chess program may be able to win chess games against a grandmaster chess player, so we say this agent is performing rationally within this specific task environment. The
Re: (Score:1)
The reason you don't hear a ton of interesting stuff coming from strong (general) AI research and interest in the field is limited is simple: strong AI is pretty damn useless until you reach the critical point where it matches (or really exceeds) human intelligence. An AI program with the effective intelligence of a worm/mouse/rat/monkey or whatever isn't interesting outside of academia.
I suspect that when strong AI comes around it will be rather sudden for most people, who simply won't see it coming. I dou
Re: (Score:2)
An AI program with the effective intelligence of a worm/mouse/rat/monkey or whatever isn't interesting outside of academia.
Of course it would be fucking interesting.
This is just another excuse by "strong AI" supporters for not producing anything that a sane human being would consider proof of machine intelligence.
and honestly human intelligence is really rather unremarkable no matter what some people like to believe
If it's all so completely trivial and uninteresting, just show us all an artificial intelligence and stop wasting our time.
Cargo Cult Science (Score:5, Insightful)
http://neurotheory.columbia.ed... [columbia.edu]
Re: (Score:3)
Thanks for linking that article, it is very, very insightful. I found the term so descriptive of most of the publicly visible AI research that it is staggering. The problem seems to be that many people cannot recognize more than the shape of a thing and are completely unaware that it in no way describes what the thing is. That scientists fall for the same delusion is rather tragic.
As a scientist myself (now only a very small part of my time), I found that Feynman is e
AI endpoint is key (Score:2, Interesting)
I regularly employ genetic algorithms and can say without hesitation that I have little idea how they got to where they got and the results are often fantastic. But my code is usually a
Re: (Score:3)
You won't get AI by messing with some genetic algorithm for a day, trying to do something completely different. The search space is just too big to stumble upon AI accidentally.
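A GA in the spirit the parent describes fits in a few lines; the classic "OneMax" toy problem (all names and parameters below are illustrative, not from the article) also shows the scale issue: even a 32-bit genome has a search space of 2^32, and nothing about the mechanism changes when the target is something as structured as an AI.

```python
import random

random.seed(0)

GENOME_LEN = 32       # toy problem: evolve a genome of all 1-bits
POP_SIZE = 40
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)  # "OneMax": count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[: POP_SIZE // 2]  # truncation selection, elitist
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {gen + 1} generations: {fitness(best)}/{GENOME_LEN}")
```

The result looks clever, and (as the grandparent says) it's genuinely hard to explain *how* it got there, but the fitness function had to be hand-written by a human, which is exactly the guidance a self-improving AI is supposed to do without.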
Re: (Score:1)
The problem with the idea of a recursive AI-designing AI is that there is no reason to believe that it can continue very far. The idea that it won't stabilize at some point is taken for granted. Why, after a few iterations, wouldn't the AI look at its current design and be unable to improve it, or only make diminishingly small incremental improvements, each taking longer than the previous one? The idea of a singularity assumes there aren't any limits, like extrapolating the population growth of bacteria
Re: (Score:2)
The problem with genetic algorithms is that they never produce good results. They usually produce about the worst possible solutions still solving the problem. As AI is not needed for solving any limited real-world problem, genetic algorithms are unable to produce anything like AI. At the same time, genetic algorithms are completely unsuitable for solving any complex problems, because you cannot actually simulate them in practice. You are falling for the "cargo cult science" problem here.
Re: (Score:2)
It is an unfalsifiable hypothesis, and therefore outwith the realm of science.
Re: (Score:2)
interesting... so if your genetic algorithm were written in some "simple" high level language, then the second level 'noodling' would be easier as it would have less potential options to choose from. Thus, it could arrive at the destination in fewer generations, and the destination would be (hopefully) easier for us puny humans to understand.
This approach means you need higher raw power to run the first and second level algorithms, and as such will need a higher minimum processing power to achieve it. Howev
Re: (Score:2)
You need to read more Feynman. You are a classic cargo-cultist.
Perhaps Yann LeCun needs an AI to count for him (Score:3)
methinks seven :-)
What about the surface learning of listening? (Score:3)
Like listening to the preferences users have selected about silly things like what order they want items in their feed listed? I know you love these whiz-bang prediction algorithms, but they suck at predicting what I want. I'm really good at asking for what I want, and changing those settings to what you want will never ever do a better job than letting me pick. I promise.