
AI Threats 'Complete BS' Says Meta Senior Researcher, Who Thinks AI is Dumber Than a Cat (msn.com) 87

Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are "complete B.S." When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," he replied on X. He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta itself.
LeCun shared a Turing Award with Geoffrey Hinton and Yoshua Bengio (who hopes LeCun is right, but adds "I don't think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy. That is why I think we need governments involved.")

But LeCun still believes AI is a very powerful tool — even as Meta joins the quest for artificial general intelligence: Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it's now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models. "The impact on Meta has been really enormous," he says.

At the same time, he is convinced that today's AIs aren't, in any meaningful sense, intelligent — and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous... OpenAI's Sam Altman last month said we could have Artificial General Intelligence within "a few thousand days...." But creating an AI this capable could easily take decades, [LeCun] says — and today's dominant approach won't get us there.... His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.

In contrast, today's AI models "are really just predicting the next word in a text," he says... And because of their enormous memory capacity, they can seem to be reasoning, when in fact "they're merely regurgitating information they've already been trained on."
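The "predicting the next word" idea LeCun describes can be illustrated with a toy sketch — a bigram lookup table, nothing remotely like Meta's actual models, but it shows how pure training statistics can produce fluent-looking continuations with no reasoning involved:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then always emit the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Pure lookup: regurgitating training statistics, not reasoning.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the most common continuation
```

Real LLMs replace the lookup table with a learned distribution over tokens, but the training objective — predict the next token — is the same.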


Comments Filter:
  • by Eunomion ( 8640039 ) on Monday October 14, 2024 @07:53AM (#64862937)
    The algorithms that now dictate how most people get information already behave like Orwell's Ministry of Truth, conjuring pure fiction out of the ether while banning and erasing history (even recent history) as taboo subjects. This kind of reality-neutral and consequence-neutral "amplification" product is exactly what AI systems seek to perfect, and it's basically just a form of bomb.
    • by paralumina01 ( 6276944 ) on Monday October 14, 2024 @07:57AM (#64862945)
      Isn't that basically what search engines have been doing for the past 15 or more years?
      • by MrNaz ( 730548 ) on Monday October 14, 2024 @08:07AM (#64862957) Homepage

        Yes.
        And they have already given us an accelerated blue/red division of politics and gigantic echo chambers for antivax, flatearth, and other nonsense.

        • by i kan reed ( 749298 ) on Monday October 14, 2024 @08:54AM (#64863055) Homepage Journal

          Honestly, I think the (major) search engines and algorithms had tried* to quash antivax and flat earth quite purposefully but can't override the power of social media to spread brain rot. They even managed to amplify the nutters' persecution complexes along the way.

          In pre-communication-tech history, people were very used to just adapting to whatever dumb ideas dominated their local community, in early comm-tech(radio, television, newspapers) we had a brief "golden age" where for good and evil, there were gatekeepers who decided what to spread, and then social media allowed people to create communities that spread every dumb idea to every person susceptible to it.

          *Admittedly not very hard

      • Not unless there was a search engine that dynamically generated counterfeit content in top rankings to deceive people, the way news feeds have become. The single overwhelming application of AI has been, and will be, sabotage of communication. And since communication isn't optional, AI is certainly a threat to humanity. The only bullshit part is that it's something new: It's just a scaling of the same garden variety con games used in business since forever.
        • by dfghjk ( 711126 ) on Monday October 14, 2024 @08:37AM (#64862993)

          "The single overwhelming application of AI has been, and will be, sabotage of communication. And since communication isn't optional, AI is certainly a threat to humanity. "

          No, the "single overwhelming application" is a threat to humanity, not AI.

          "The only bullshit part is that it's something new: It's just a scaling of the same garden variety con games used in business since forever."

          Right, because the "single overwhelming application" is not new, all that is new is its use of AI.

          Modern AI is merely a new, easy, efficient way to access massive amounts of information that, until recently, required skill and access to massive databases. It is not intelligent. It provides easy access to unlimited trivia; it is a revolutionary enabler of the worst types of humans in society.

      • by dfghjk ( 711126 )

        No, search engines are entirely driven by user queries; they are passive. "Algorithms", in this context, are active query generators that push narratives.

      • No, increasingly the search engines have optimized for serving us advertising. Google has become nearly worthless, with suspect paid-for results interleaved with more obvious advertisements. Google-fu has been overpowered by advertising too.

    • And what titans of industry do we have to thank for these life-changing, world-bettering, productivity-amplifying innovations? Nothing will ever change until there is some mechanism for holding shareholders accountable for the societal harms caused by decisions of companies they invest in.

      • by Eunomion ( 8640039 ) on Monday October 14, 2024 @09:31AM (#64863137)
        We arrive at the very simple solution to pretty much all the great harms society endures today: Ending the corporation as a legal fiction (or, really, fictional law) and operating strictly by personal accountability. No more shell games, dodging fines and lawsuit damages by shuffling paper. No more double rights for the corporate owner class.
    • Problem is, most humans are dumber than a cat. And 100% of those, and 99.9% of those that are smarter than a cat, take their brains off the hook and are determined never to use them again after high school. (Assuming they even used the grey matter then.)

      Serve them up results that say "Yes, but" or "No, but" and they never get past the first word.

      So, AI or no AI, "whale oil beef hooked".

      • by evanh ( 627108 )

        It's not that people are dumber. Many are just suckered by lies and don't ask questions. But many are also scared of the truth because they cower from the bullies of the community. They choose instilled "beliefs" over reason even while knowing it's bullshit.

      • Public stupidity is exploited, but it isn't The Problem. When I block false and malicious content from feeds, the algorithm just "retaliates" by amplifying it even further: Like it was designed to actively punish critical awareness, not merely reward stupidity.

        That's where it crosses the line into dangerous. These tools are no longer just being misused by fools, they're being weaponized by malevolent minds against everyone who can remember more than 5 seconds into the past.
      • People are not dumb... the problem is education and discernment. Education has been put in the back seat in many countries, because educated people with good BS detection are something most governments don't want, especially when the majority of the people have critical thinking skills, knowledge of how the government works, and knowledge of how they can work within it — like voting in a democracy, or petitions of redress in a Communist state.

        It would be nice if we can go back to focusing and putting actual

    • I have not sold my AI and Nvidia stock yet
  • power (Score:4, Interesting)

    by phantomfive ( 622387 ) on Monday October 14, 2024 @08:07AM (#64862955) Journal
    A for loop is an extremely powerful computational tool, it has made $trillions$ for humanity.
  • by Latent Heat ( 558884 ) on Monday October 14, 2024 @08:09AM (#64862959)

    The late Freeman Dyson remarked on the diminishing returns from throwing resources at scaling up a known type of accelerator and hoping meaningful scientific discovery comes out of it. Maybe he was expressing a contrarian opinion about devoting a big chunk of the NSF budget to the eventually cancelled Superconducting Super Collider?

    AI has always been about solving problems of exponential complexity with hardware that only grows at a polynomial rate when you build a bigger machine at a current level of technology. Yes, Moore's Law and all that about how hardware has grown exponentially in capability, and this is behind the current AI renaissance: the current hardware is on the cusp of finally doing something useful with AI. But this idea of restarting Three Mile Island and dedicating its electric output to a server farm — can we think a little bit more critically about this?
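    The exponential-versus-polynomial point above can be sketched numerically: if a search space grows as 2^n, doubling your compute budget buys you exactly one more level of depth, not twice the reach.

    ```python
    import math

    # Sketch of the scaling argument: exhaustive search of a binary
    # decision tree costs 2**depth operations, so the affordable depth
    # grows only logarithmically with available compute.
    def max_depth(ops_available):
        return int(math.log2(ops_available))

    print(max_depth(1_000_000))  # 19 levels
    print(max_depth(2_000_000))  # 20 levels: double the hardware, +1 depth
    ```

    The numbers are illustrative only, but they show why "just build a bigger machine" hits diminishing returns on exponentially hard problems.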

    Think of Eric Schmidt's "We need to destroy the Earth's climate in order to save it" about going Hell-for-leather consuming hydrocarbon fuel to meet an exponential growth curve in computing power consumption so the AI will come up with the solution for Climate Change.

    For the faction of Slashdotters who regard Climate Change as overhyped as AI, current levels of CO2 emissions may not be the problem many say they are, but certainly we don't want to greatly increase the rate of CO2 emissions and do we want to write a blank check to build out AI? Even the CEO of TSMC was rolling his eyes at the AI people wanting to spend trillions with a capital T on more fabs to keep up with projected AI demand.

    • I think you are on the right line of thought; the AI bubble can only go so far before investors demand some tangible results from the billions/trillions they have been throwing at it. I’m really hoping the AI craze does not tank the world economy when it pops like the .com bubble.
    • by dfghjk ( 711126 )

      "...certainly we don't want to greatly increase the rate of CO2 emissions and do we want to write a blank check to build out AI? "

      We don't want to do either of these things. Frankly, LLMs are a proof of concept, I don't want them "developed" further at the current time. Efforts need to shift to simulating intelligence, not increasing the amount of knowledge even larger LLMs can integrate. But Altman and Musk are more interested in money.

      "...AI people wanting to spend trillions with a capital T on more fa

    • Plants are still CO2 restricted, but I have a few ferns in my house, so I am ready to make new coal once it ticks up high enough.
  • Is there any way to filter out these annoying AI stories? We all know it's hype until that bubble finally bursts, but it's getting very old very quickly.
    • by Teckla ( 630646 )

      Is there any way to filter out these annoying AI stories?

      Simply scroll past AI stories.

      We all know it's the hype until that bubble finally bursts, but it's getting very old very quickly.

      It's not all hype. LLMs are extremely useful for a lot of use cases.

  • ..able to impersonate a fellow human being of average intelligence? He/she might be a professor, but logically this reasoning does not seem to make much sense.
  • by Drethon ( 1445051 ) on Monday October 14, 2024 @08:44AM (#64863023)

    I think a significant threat is going to be people that use LLMs to generate code without understanding it well enough to check for errors. We have enough trouble with people that don't understand memory management, proper input validation, error handling, etc. I'm fairly certain that AI generated code will lead to a whole new wave of insecure code.

    • I think a significant threat is going to be people that use LLMs to generate code without understanding it well enough to check for errors. We have enough trouble with people that don't understand memory management, proper input validation, error handling, etc. I'm fairly certain that AI generated code will lead to a whole new wave of insecure code.

      This here is the real threat, yes. We desperately need to stop with the idea that "everyone should learn to code" or sending people to "code camps" and related nonsense. Software engineering and programming are very hard disciplines. Any idiot can string together enough scripting language nubbins to be dangerous, but writing reliable, deterministic software is difficult and complex. Software already faces problems in that it is not as serious a discipline as, say, electrical engineering or mechanical engi
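      The worry in this thread — plausible-looking generated code that skips input validation — can be made concrete with a hypothetical example: a query built by string interpolation (the kind of bug that slips past someone who can't review the output) versus the parameterized form.

      ```python
      import sqlite3

      # Hypothetical illustration of generated code that skips input
      # validation: string interpolation into SQL is injectable.
      def find_user_unsafe(conn, name):
          return conn.execute(
              f"SELECT id FROM users WHERE name = '{name}'").fetchall()

      # Parameterized query: the driver treats the input as data only.
      def find_user_safe(conn, name):
          return conn.execute(
              "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
      conn.execute("INSERT INTO users VALUES (1, 'alice')")

      # Crafted input rewrites the unsafe query and dumps every row:
      print(find_user_unsafe(conn, "' OR '1'='1"))  # [(1,)]
      print(find_user_safe(conn, "' OR '1'='1"))    # []
      ```

      Both versions "work" on friendly input, which is exactly why this class of bug survives review by someone who doesn't understand what the generated code does.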

  • "One day, machines will exceed human intelligence." - Ray Kurzweil

    "Only if we meet them half-way." - Dave Snowden

    Sounds like Meta, et al., are desperate to become the dumbest guys in the room.
  • Even for a really dumb cat.

  • by doragasu ( 2717547 ) on Monday October 14, 2024 @09:02AM (#64863065)

    So they start using it to make reports and reply to emails. Then they make it participate in business strategy and take risky financial decisions. And that's the real threat.

    • They think that because it's obviously more intelligent than them.
      They just don't have or can't keep up with a good role model.

  • I think AI is already a tool that is being used by threat actors

    Imagine a phone call from your child. Phone number is spoofed and gets right through because it's the correct number, the voice sounds enough like your child that the distress in the voice bypasses all reasoning in your mind. Actual photos of your child are altered and sent showing them in a distressing situation. I think this type of threat would fool a lot of people if it was timed right.

  • And with the intelligence they've got, in the right environment they can be lethal. If you don't believe it, feel free to wander the African savanna unarmed.

  • by JoshuaZ ( 1134087 ) on Monday October 14, 2024 @09:38AM (#64863155) Homepage
    LeCun has been saying things like this for a while, and it is striking that he has not grappled much with the people who disagree with him, and has mostly responded with a mix of condescension and incivility, even when they are often other highly accomplished people. Meanwhile, Hinton who shared the prize with LeCun has become deeply concerned about AI risk. See https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/ [cbsnews.com] and https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai [mit.edu]. And it is worth noting that Hinton became convinced of these concerns, by actually going and looking at the arguments people have made. At this point, I'm not sure what would possibly convince LeCun that AI is a threat that wasn't immediately fatal to most of humanity.
    • It is common courtesy to politely ignore old professors when they go senile. Too bad the news isn't that polite.

    • LeCun is on the money in regards to today's AI. Generative AI is fancy autocomplete. It's not the Matrix or Terminator. There are dangers with Generative AI, but not in the ability to outsmart us, but the ability to scam us. It helps scammers write better spam....it's too inaccurate to replace real workers, but if it gets a little better it can generate content "close enough" to reality to fool your grandma into investing into fake companies, for example. It will enable a new era of grift. It will gen
  • by ZenDragon ( 1205104 ) on Monday October 14, 2024 @09:54AM (#64863187)
    If AI ever gets as smart as a cat we are all doomed. They've been trying to kill us for centuries!
  • AI, currently, is dumb as a box of hammers. It is amazing in its ability to mimic some of the output of a talented human being while completely lacking any kind of intelligence.

    However, human brains aren't made of magic, and there's no reason to believe that our intelligence is anything other than the results of an incredibly complex web of patterns resulting from fairly basic stimulus and response chains.

    Eventually, we're going to make a true AGI. I suspect there will be some hardware development require

  • by Lavandera ( 7308312 ) on Monday October 14, 2024 @10:24AM (#64863259)

    Someone faking your election...

    Someone using it to scam you...

    Someone putting it on flying bombs to bomb cities and hospitals...

  • The problem at the moment isn't so much that AI is "smart."

    The problem is all the people who think it is. It's being moved from a decision-support role into a decision-making one. Whoops.

  • It doesn't take intelligence to be destructive.

    And given that AI is taught on media full of cats and people being destructive for the "lulz", it's almost certain LLMs will do horrible things.

  • The problem is people who use AI
    Today's AI is mostly harmless and useless. Tomorrow's AI will be a powerful weapon in the hands of bad people
    We need effective defenses

  • by rsilvergun ( 571051 ) on Monday October 14, 2024 @11:39AM (#64863403)
    it's AI that works. Specifically, AI that works and suddenly replaces 20-30% of all workers.

    Folks are focused on chatbots and call centers, maybe programmers, but we're seeing lots and lots of other applications, like advanced manufacturing robots, self driving cars, etc.

    Part of that is we got so excited with LLMs that we forgot about general purpose ML.

    We're nowhere near ready for what's coming. It's another Industrial Revolution.

    Everyone remembers the luddites as just loons that impeded progress.

    Fact is, during both industrial revolutions we had *decades* of technological unemployment before other new tech caught up and got us back to where we are now.

    Historically an Industrial Revolution is like any other: bloody as hell. WWI & WWII weren't accidents. There were high tensions due to high unemployment rates and lots of politicians looking for something to do with their excess populations they didn't need or want.
  • So a cat is smarter than something created by humans. Good to know.

    I wholly accept our fuzzy feline overlords.

  • by Rick Schumann ( 4662797 ) on Monday October 14, 2024 @11:58AM (#64863441) Journal
    The real danger from so-called 'AI' is that people will be fooled into believing it is 'smart', rely on it too much, believe it too much, then disasters -- up to and including people getting killed -- will happen.
    Also, some countries (like China, as an example) will not hesitate to use AI in military applications where it has control of weapons systems, leading to what will amount to war crimes.
  • It got a 'memory' recently, apparently, although it never uses it.

    I told it about 250 times (not exaggerating) not to use numbering, bullets, or subtitles in its answers; it can't comply for more than 3 replies before completely forgetting and using all 3.

  • What matters is what we allow it to do without requiring human intervention.

  • I think he's probably right that we don't need to yet worry about "ultra-intelligent" AI.

    But I'd also be worried about cats if Silicon Valley idiots were pouring billions of dollars into making them smarter and giving them access to basically everything they can think of.

    Being "dumb as a cat" isn't a virtue, nor is it a reason to ignore something. A coronavirus is way dumber than a cat, and yet...

  • AI can't be dumber than a cat, because I recently pasted two translations of a Classical Japanese text, one of my own and one by another translator, and the AI accurately described some of the methods I was using and what differed about them, or what it liked about them compared to the other translation.

    None of this scare-quote "dumbness" about how many r's are in "strawberry" detracts from its performance on actual complex tasks, where it gives accurate answers.

    And a lot of the attempts to describe AI as dumb are
  • Humans are also dumber than cats. So we still do not know if LLMs may be smarter than humans. Looking at some of them (humans), I suspect the LLMs are.

  • AI is *already* taking jobs. Because while it may have the cognition of a flatworm, it has the intelligence and skills of a college graduate.

    We're losing about 4,000 jobs in the U.S. this year. But it could easily be 10,000 jobs per month next year.

    And ... "AI could impact 40% of working hours and create or destroy millions of jobs by 2027, according to reports from Accenture".

    It's not just the A.I. "replacing" human workers, but the A.I. letting one human worker produce the output of three or four huma

  • That's a tall order, I don't think there's a single brain cell between our orange and black cats. They manage to get both of them trapped in an upside down garbage can on a weekly basis. Which takes talent because Angry orange is almost as big as the can...
