AI Threats 'Complete BS' Says Meta Senior Researcher, Who Thinks AI is Dumber Than a Cat (msn.com) 111
Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are "complete B.S."
When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," he replied on X. He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta itself.
LeCun shared a Turing Award with Geoffrey Hinton and Yoshua Bengio (who hopes LeCun is right, but adds "I don't think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy. That is why I think we need governments involved.")
But LeCun still believes AI is a very powerful tool — even as Meta joins the quest for artificial general intelligence: Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it's now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models. "The impact on Meta has been really enormous," he says.
At the same time, he is convinced that today's AIs aren't, in any meaningful sense, intelligent — and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous... OpenAI's Sam Altman last month said we could have Artificial General Intelligence within "a few thousand days...." But creating an AI this capable could easily take decades, [LeCun] says — and today's dominant approach won't get us there.... His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.
In contrast, today's AI models "are really just predicting the next word in a text," he says... "And because of their enormous memory capacity, they can seem to be reasoning, when in fact they're merely regurgitating information they've already been trained on."
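The "just predicting the next word" claim can be made concrete with a toy sketch. This is an illustrative bigram counter, not how production LLMs actually work (they learn neural representations rather than raw counts), but the training objective is the same next-token idea:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Greedily return the word most often seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat": the commonest continuation seen in training
print(predict_next("on"))   # -> "the"
```

Scale the corpus and the context window up by many orders of magnitude and you get fluent-sounding text, with none of the world model, persistent memory, or planning LeCun says a cat has.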
And yet its destructiveness is already on display. (Score:5, Insightful)
Re:And yet its destructiveness is already on displ (Score:4, Insightful)
Re: And yet its destructiveness is already on disp (Score:4, Insightful)
Yes.
And they have already given us an accelerated blue/red division of politics and gigantic echo chambers for antivax, flatearth, and other nonsense.
Re: And yet its destructiveness is already on disp (Score:5, Interesting)
Honestly, I think the (major) search engines and algorithms had tried* to quash antivax and flat earth quite purposefully but can't override the power of social media to spread brain rot. They even managed to amplify the nutters' persecution complexes along the way.
In pre-communication-tech history, people were very used to just adapting to whatever dumb ideas dominated their local community. In early comm-tech (radio, television, newspapers) we had a brief "golden age" where, for good and evil, there were gatekeepers who decided what to spread. And then social media allowed people to create communities that spread every dumb idea to every person susceptible to it.
*Admittedly not very hard
Re: And yet its destructiveness is already on disp (Score:5, Insightful)
Also, as I heard it crudely described: if you were, say, into having sex with toasters, that would be something you kept to yourself, lest the people in your community say "hey man, that's not cool, you shouldn't do that." But today, while those people stay the same, you can just hop online and find a forum of people discussing Emerson vs. Sunbeam and posting updates on which one they decided to copulate with this morning.
The internet normalizes behavior that would in the past be discouraged. Like all things internet, this is good (folks who need connection and couldn't find it before now can) but can also be quite bad (it reinforces behavior and opinions that should not be reinforced).
Re: And yet its destructiveness is already on disp (Score:5, Funny)
What can I say? Toasters are hot.
Re: (Score:2)
I mean, this one kinda is:
The Antique Toaster that's Better than Yours [youtube.com]
Re: (Score:2)
Frakkin' Cylons esp. #3, #6, & #8. ;)
Re: (Score:2, Insightful)
FTFY. The internet, without the people using it, would be nothing but wires. The perverted behavior and the normalizing are just people. We do this; the internet just makes it faster, wider, and easier than it was before.
Re: (Score:1)
And people did all kinds of modern-day "perverted" things all the way back to the 1800s. It was more restricted to the upper classes (though plenty of middle- and lower-class people stumbled onto it sooner or later). But recorded information about such things was frequently kept secret (much more successfully), banned, or suppressed.
And if I had to guess, I'd say the middle class started learning about these things at younger and younger ages in the 1920s to the 1960s.
And recorded information about kinky sex
Re: (Score:2)
It's not just brain rot (Score:1)
So you've got reams of professionally run botnets pushing this sort of thing and then you have some of the major news agencies like Fox more than happy to subtly encourage it.
That said th
Re: (Score:2)
Re: (Score:2)
Re:And yet its destructiveness is already on displ (Score:4, Insightful)
"The single overwhelming application of AI has been, and will be, sabotage of communication. And since communication isn't optional, AI is certainly a threat to humanity. "
No, the "single overwhelming application" is a threat to humanity, not AI.
"The only bullshit part is that it's something new: It's just a scaling of the same garden variety con games used in business since forever."
Right, because the "single overwhelming application" is not new, all that is new is its use of AI.
Modern AI is merely a new, easy, efficient way to access massive amounts of information that, until recently, required skill and access to massive databases. It is not intelligent. It provides easy access to unlimited trivia, and it is a revolutionary enabler of the worst types of humans in society.
Re: (Score:2)
No, search engines are entirely driven by user queries; they are passive. "Algorithms," in this context, are active query generators that push narratives.
Re: And yet its destructiveness is already on disp (Score:2)
Search engines include spiders, which are active.
Re: (Score:1)
No, increasingly the search engines have optimized for serving us advertising. Google has become nearly worthless, with suspect paid-for results interleaved with more obvious advertisements. Google-fu has been overpowered by advertising too.
Re: (Score:2)
Yes, to answer your question: Search engines WERE the first form of AI.
And now to answer the question that you've yet to ask: Yes, AI is being used to convert freedom into an opinion. To convert Liberty into an act. AI is the Big Brother that we all feared and forgot was growing all along.
Re: (Score:3)
And what titans of industry do we have to thank for these life-changing, world-bettering, productivity-amplifying innovations? Nothing will ever change until there is some mechanism for holding shareholders accountable for the societal harms caused by decisions of companies they invest in.
Re:And yet its destructiveness is already on displ (Score:4, Insightful)
Re: (Score:2)
And, worst of all, people spewing out long, rambling diatribes of word salad to make it look like they're making important points when all they're doing is trying to scare the rabble into doing something they don't understand.
Re: (Score:2)
Problem is, most humans are dumber than a cat. And 100% of those, plus 99.9% of the ones that are smarter than a cat, take their brains off the hook and are determined never to use them again after high school. (Assuming they even used the grey matter then.)
Serve them up results that say "Yes, but" or "No, but" and they never get past the first word.
So, AI or no AI, "whale oil beef hooked".
Re: (Score:2)
It's not that people are dumber. Many are just suckered by lies and don't ask questions. But many are also scared of the truth because they cower from the bullies of the community. They choose instilled "beliefs" over reason even while knowing it's bullshit.
Re: (Score:2)
That's where it crosses the line into dangerous. These tools are no longer just being misused by fools, they're being weaponized by malevolent minds against everyone who can remember more than 5 seconds into the past.
Re: (Score:3)
People are not dumb... the problem is education and discernment. Education has taken a back seat in many countries, precisely because educated people with good BS detectors are something most governments don't want, especially a majority with critical thinking skills and knowledge of how their government works and how to work within it, whether by voting in a democracy or petitions for redress in a Communist state.
It would be nice if we can go back to focusing and putting actual
Re: And yet its destructiveness is already on disp (Score:2)
Being dumbed down makes people dumber.
That it was intentional doesn't mean they aren't dumb. It means it's potentially curable with the right effort.
Re: (Score:2)
I didn't say this so well, but I think it is that people are lazy. Even if they've been told 1000 times "Check the results" ... they'll take the easy way out.
"Computer says no".
Please dont burst the bubble (Score:3)
Re: (Score:1)
power (Score:4, Interesting)
Re: (Score:2)
Re: (Score:3, Insightful)
Found the programmer easily replaced by AI.
The Superconducting Super Collider problem (Score:5, Insightful)
The late Freeman Dyson remarked on the diminishing returns from throwing resources at scaling up a known type of accelerator and hoping meaningful scientific discovery comes out of it. Maybe he was expressing a contrarian opinion about devoting a big chunk of the NSF budget to the eventually cancelled Superconducting Super Collider?
AI has always been about solving problems of exponential complexity with hardware that grows only at a polynomial rate when you build a bigger machine at a given level of technology. Yes, Moore's Law and all that: hardware has grown exponentially in capability, this is behind the current AI renaissance, and current hardware is on the cusp of finally doing something useful with AI. But this idea of restarting Three Mile Island and dedicating its electric output to a server farm: can we think a little bit more critically about this?
Think of Eric Schmidt's "We need to destroy the Earth's climate in order to save it" about going Hell-for-leather consuming hydrocarbon fuel to meet an exponential growth curve in computing power consumption so the AI will come up with the solution for Climate Change.
For the faction of Slashdotters who regard Climate Change as being as overhyped as AI: current levels of CO2 emissions may not be the problem many say they are, but we certainly don't want to greatly increase the rate of CO2 emissions. And do we want to write a blank check to build out AI? Even the CEO of TSMC was rolling his eyes at the AI people wanting to spend trillions with a capital T on more fabs to keep up with projected AI demand.
Re: (Score:2)
Re: The Superconducting Super Collider problem (Score:4, Insightful)
I think the investments are not large enough to massively damage world economy, but since none of the big investors will recover their investments, some of them will suffer and some will die. Wouldn't it be funny if AI is what finally kills Microsoft?
Re: (Score:2)
Re: (Score:2)
Sure. But as these idiotic investments are strongly localized to just a few players, I still expect impact on the world economy will be small. For anybody hit, they will probably be devastating, but greed and stupidity (both strong factors here) come at a price if you let them drive your decisions. So zero pity from me for these people.
Incidentally, I recently tried to use ChatGPT as "better search" and even for that it is not very good. I asked for sources on several things and what I got was pathetic. The
Re: The Superconducting Super Collider problem (Score:2)
In your scenario a lot of people die. That also makes real estate cheaper but it's usually considered inhuman to propose it as a solution, which presumably is the reason for your cowardice.
Re: (Score:2)
"...certainly we don't want to greatly increase the rate of CO2 emissions and do we want to write a blank check to build out AI? "
We don't want to do either of these things. Frankly, LLMs are a proof of concept; I don't want them "developed" further at the current time. Efforts need to shift to simulating intelligence, not increasing the amount of knowledge even larger LLMs can integrate. But Altman and Musk are more interested in money.
"...AI people wanting to spend trillions with a capital T on more fa
Re: The Superconducting Super Collider problem (Score:2)
Technomyopia (Score:1)
The idea that we need to destroy the planet to build an AI that'll tell us how to not destroy the planet comes from the mind of someone who can only see solutions in terms of computers.
Scientists will tell you that the actual solution is already there: it's just lots and lots of wind and solar, plus a smattering of batteries. Also maybe moving away from suburbs and cars and back to walkable cities and public trans
This troll follows the same structure (Score:2)
AI, again (Score:1)
Re: (Score:2)
Is there any way to filter out those annoying AI news?
Simply scroll past AI stories.
We all know it's the hype until that bubble finally bursts, but it's getting very old very quickly.
It's not all hype. LLMs are extremely useful for a lot of use cases.
Since when are cats.. (Score:1)
Re: (Score:2)
ELIZA was said to impersonate humans too.
Re: Since when are cats.. (Score:1)
Re: (Score:1)
It's also kind of strange that you refer to cats and mention "fellow human being."
Are you a mouse? You do a good job impersonating a human, nice job typing.
Re: (Score:2)
One threat is those that use it (Score:5, Interesting)
I think a significant threat is going to be people that use LLMs to generate code without understanding it well enough to check for errors. We have enough trouble with people that don't understand memory management, proper input validation, error handling, etc. I'm fairly certain that AI generated code will lead to a whole new wave of insecure code.
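As a hypothetical illustration of the kind of bug that sails past someone who can't review the output, here is a minimal sketch (table and function names are made up for the example) of the classic unvalidated-input pattern, SQL injection, next to the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Typical of naively generated code: user input pasted straight into SQL.
    # name = "x' OR '1'='1" makes the WHERE clause always true -- injection.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

evil = "x' OR '1'='1"
print(len(find_user_unsafe(conn, evil)))  # 2 -- leaks every row in the table
print(len(find_user_safe(conn, evil)))    # 0 -- nobody is literally named that
```

Both versions "work" on a happy-path demo, which is exactly why a reviewer who doesn't understand input validation won't catch the first one.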
Re: One threat is those that use it (Score:2)
I completely agree.
Re: (Score:2)
I think a significant threat is going to be people that use LLMs to generate code without understanding it well enough to check for errors. We have enough trouble with people that don't understand memory management, proper input validation, error handling, etc. I'm fairly certain that AI generated code will lead to a whole new wave of insecure code.
This here is the real threat, yes. We desperately need to stop with the idea that "everyone should learn to code," or sending people to "code camps" and related nonsense. Software engineering and programming are very hard disciplines. Any idiot can string together enough scripting-language nubbins to be dangerous, but writing reliable, deterministic software is difficult and complex. Software already faces problems in that it is not taken as seriously as a discipline as, say, electrical engineering or mechanical engi
Re: (Score:2)
I won't disagree. That crappy AI generated code is mostly the result of GIGO IMHO.
The singularity (Score:1)
"Only if we meet them half-way." - Dave Snowden
Sounds like Meta, et al., are desperate to become the dumbest guys in the room.
He is not wrong (Score:2)
Even for a really dumb cat.
The problem is people thinking it's intelligent (Score:5, Insightful)
So they start using it to make reports and reply to emails. Then they make it participate in business strategy and take risky financial decisions. And that's the real threat.
Re: (Score:3)
They think that because it's obviously more intelligent than them.
They just don't have or can't keep up with a good role model.
Current threat from AI (Score:2)
I think AI is already a tool that is being used by threat actors
Imagine a phone call from your child. Phone number is spoofed and gets right through because it's the correct number, the voice sounds enough like your child that the distress in the voice bypasses all reasoning in your mind. Actual photos of your child are altered and sent showing them in a distressing situation. I think this type of threat would fool a lot of people if it was timed right.
Cats aren't that dumb (Score:2)
And with the intelligence they've got, in the right environment they can be lethal. If you don't believe it, feel free to wander the African savanna unarmed.
LeCun has been saying this for a while. (Score:3)
Re: (Score:3)
It is common courtesy to politely ignore old professors when they go senile. Too bad the news isn't that polite.
Theoretical AI is deadly; ChatGPT is autocomplete (Score:4, Insightful)
I am personally glad LeCun has been saying what I've been saying since I first played with ChatGPT and copilot...I can't say they're "useless" but I am confident that they're too error-riddled to replace a human worker for any job you're willing to pay to have done today.
Cats are evil! (Score:3)
AI as a human-replacement intelligence (Score:2)
AI, currently, is dumb as a box of hammers. It is amazing in its ability to mimic some of the output of a talented human being while completely lacking any kind of intelligence.
However, human brains aren't made of magic, and there's no reason to believe that our intelligence is anything other than the results of an incredibly complex web of patterns resulting from fairly basic stimulus and response chains.
Eventually, we're going to make a true AGI. I suspect there will be some hardware development require
The true danger are evil people with AI... (Score:3)
Someone faking your election...
Someone using it to scam you...
Someone putting it on flying bombs to bomb cities and hospitals...
AI isn't smart but people are stupid (Score:2)
The problem at the moment isn't so much that AI is "smart."
The problem is all the people who think it is. It's being moved from a decision-support role into a decision-making one. Whoops.
AI is bad for Palestinians (Score:2)
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza [972mag.com]
This person has never watched a cat video (Score:2)
It doesn't take intelligence to be destructive.
And given that AI is taught on media full of cats and people being destructive for "lulz", it's almost certain LLMs will do horrible things.
AI is not the problem (Score:2)
The problem is people who use AI
Today's AI is mostly harmless and useless. Tomorrow's AI will be a powerful weapon in the hands of bad people
We need effective defenses
Rogue AI isn't the issue (Score:4, Interesting)
Folks are focused on chatbots and call centers, maybe programmers, but we're seeing lots and lots of other applications, like advanced manufacturing robots, self driving cars, etc.
Part of that is we got so excited with LLMs that we forgot about general purpose ML.
We're nowhere near ready for what's coming. It's another Industrial Revolution.
Everyone remembers the Luddites as just loons who impeded progress.
Fact is, during both industrial revolutions we had *decades* of technological unemployment before other new tech caught up and got us back to where we are now.
Historically an Industrial Revolution is like any other revolution: bloody as hell. WWI & WWII weren't accidents. There were high tensions due to high unemployment rates and lots of politicians looking for something to do with the excess populations they didn't need or want.
So what he's saying (Score:2)
Is that a cat is smarter than something created by humans. Good to know.
I wholly accept our fuzzy feline overlords.
He's wrong; a cat is smarter by far (Score:3, Insightful)
Also, some countries (like China, as an example) will not hesitate to use AI in military applications where it has control of weapons systems, leading to what will amount to war crimes.
It is dumber (Score:2)
It got a 'memory' recently, apparently, although it never uses it.
I told it about 250 times (not exaggerating) not to use numbering, bullets, or subtitles in the answers it gives me; it can't do that for more than 3 replies before completely forgetting and using all 3.
It doesn't matter how "intelligent" AI is. (Score:3)
What matters is what we allow it to do without requiring human intervention.
slightly right, so very wrong (Score:2)
I think he's probably right that we don't need to yet worry about "ultra-intelligent" AI.
But I'd also be worried about cats if Silicon Valley idiots were pouring billions of dollars into making them smarter and giving them access to basically everything they can think of.
Being "dumb as a cat" isn't a virtue, nor is it a reason to ignore something. A coronavirus is way dumber than a cat, and yet...
Intelligence is smart (Score:2)
None of this scare-quoted "dumbness" about how many r's are in strawberry degrades its smartness on the actual complex tasks it gives accurate answers for.
And a lot of the attempts to describe AI as dumb are
Humans (Score:2)
Humans are also dumber than cats. So we still do not know whether LLMs may be smarter than humans. Looking at some of them (humans), I suspect the LLMs are.
What nonsense. (Score:2)
AI is *already* taking jobs. Because while it may have the cognition of a flatworm, it has the intelligence and skills of a college graduate.
We're losing about 4,000 jobs in the U.S. this year. But it could easily be 10,000 jobs per month next year.
And ... "AI could impact 40% of working hours and create or destroy millions of jobs by 2027, according to reports from Accenture".
It's not just the A.I. "replacing" human workers, but the A.I. letting one human worker produce the output of three or four huma
Re: (Score:2)
If your job can be replaced by an AI today then your job really wasn't that important to begin with.
Re: (Score:1)
If you could be replaced by another human being, then A.I. will probably be able to replace you within a decade.
That's not all jobs. But it is most jobs.
Dumber than cats? (Score:1)
Content Moderation as a "good" example?? (Score:2)
Meta's content moderation is so ridiculously terrible. It relentlessly blocks benign content and consistently fails to block actually nefarious content. The fact that he offers it as a positive example of what their AI tech can do might be the best indication of why his opinion on AI isn't worth listening to.
Disclaimer: I am absolutely not against AI. I say let it come and enjoy the ride. I am however *adamantly opposed* to the torrent of shit software being released and blowhards like this guy wasting bandw
We live in a high budget 2000s Disaster Movie (Score:1)
Only because people believe it (Score:1)
That's pretty .. (Score:2)
The Cats are unforgiving (Score:2)
Cats aren't dumb (Score:2)
Not only are cats not "dumb", they arguably *domesticated humans* and used us to conquer the world.
Ask yourself: when the day arrives that humans have successfully colonized Mars, which animal do you think will become the first to be deliberately taken to Mars to become the mother (via IVF) of countless future companions for colonists?
Yep. Cats.
AI can't replace humans for one reason... (Score:2)
AI can't replace humans so long as humans still have legal liability for what they produce. Consider the AI lawyer, for example. Sure, an AI can produce an output that looks like a legal brief, but a human still has to spend basically just as much time reviewing and rewriting the output as it would have taken the lawyer to just write it themselves. Sure, AI will get better and do a better job with accuracy, but the human will never be able to fully trust it. That means that every line will always have