AI Threats 'Complete BS' Says Meta Senior Researcher, Who Thinks AI Is Dumber Than a Cat (msn.com)

Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are "complete B.S." When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," he replied on X. He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta itself.
LeCun shared a Turing Award with Geoffrey Hinton and Yoshua Bengio (who hopes LeCun is right, but adds "I don't think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy. That is why I think we need governments involved.")

But LeCun still believes AI is a very powerful tool — even as Meta joins the quest for artificial general intelligence: Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it's now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models. "The impact on Meta has been really enormous," he says.

At the same time, he is convinced that today's AIs aren't, in any meaningful sense, intelligent — and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous... OpenAI's Sam Altman last month said we could have Artificial General Intelligence within "a few thousand days...." But creating an AI this capable could easily take decades, [LeCun] says — and today's dominant approach won't get us there.... His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.

In contrast, today's AI models "are really just predicting the next word in a text," he says... And because of their enormous memory capacity, "they can seem to be reasoning, when in fact they're merely regurgitating information they've already been trained on."
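LeCun's "predicting the next word" description can be made concrete with a deliberately tiny sketch: a bigram model that, given one word of context, predicts the most frequent next word seen in training. This is an illustrative toy, not how a real LLM is built (those use neural networks trained on vast corpora), but the training objective has the same shape.

```python
from collections import Counter, defaultdict

def train(text):
    # Count, for each word, which words follow it and how often.
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict(model, word):
    # Return the most frequently observed next word.
    # (A word never seen in training would raise an IndexError here.)
    return model[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

Scaled up by many orders of magnitude, with neural networks in place of count tables, this "guess the next token" objective is what LeCun argues won't by itself produce the world model, memory, reasoning, and planning he attributes to a cat.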
  • by Eunomion ( 8640039 ) on Monday October 14, 2024 @06:53AM (#64862937)
    The algorithms that now dictate how most people get information already behave like Orwell's Ministry of Truth, conjuring pure fiction out of the ether while banning and erasing history (even recent history) as taboo subjects. This kind of reality-neutral and consequence-neutral "amplification" product is exactly what AI systems seek to perfect, and it's basically just a form of bomb.
    • by paralumina01 ( 6276944 ) on Monday October 14, 2024 @06:57AM (#64862945)
      Isn't that basically what search engines have been doing for the past 15 or more years?
      • by MrNaz ( 730548 ) on Monday October 14, 2024 @07:07AM (#64862957) Homepage

        Yes.
        And they have already given us an accelerated blue/red division of politics and gigantic echo chambers for antivax, flatearth, and other nonsense.

        • by i kan reed ( 749298 ) on Monday October 14, 2024 @07:54AM (#64863055) Homepage Journal

          Honestly, I think the (major) search engines and algorithms had tried* to quash antivax and flat earth quite purposefully but can't override the power of social media to spread brain rot. They even managed to amplify the nutters' persecution complexes along the way.

          In pre-communication-tech history, people were very used to just adapting to whatever dumb ideas dominated their local community; in early comm-tech (radio, television, newspapers) we had a brief "golden age" where, for good and evil, there were gatekeepers who decided what to spread; and then social media allowed people to create communities that spread every dumb idea to every person susceptible to it.

          *Admittedly not very hard

          • by jacks smirking reven ( 909048 ) on Monday October 14, 2024 @08:08AM (#64863069)

            Also, as I heard it crudely described: if you were, say, into having sex with toasters, that would be something you kept to yourself, lest the people in your community say "hey man, that's not cool, you shouldn't do that". But today, while those people stay the same, you can just hop online and find a forum of people discussing Emerson vs Sunbeam and updating which one they decided to copulate with this morning.

            The internet normalizes behavior that would in the past be discouraged. Like all things internet, this is good (folks who need connection and couldn't find it before now can) but can also be quite bad (it reinforces behavior and opinions that should not be).

          • The more the search engines try to suppress nonsense, the more determined the nonsense-spreaders become.
          • We know perfectly well that our enemies are purposefully pushing anti-vax and other conspiracies because they destabilize and hurt our country. Even relatively innocuous conspiracy theories like flat earth do that because they act as gateways into the communities with the more destructive conspiracies like anti-vax.

            So you've got reams of professionally run botnets pushing this sort of thing and then you have some of the major news agencies like Fox more than happy to subtly encourage it.

            That said th
      • Not unless there was a search engine that dynamically generated counterfeit content in top rankings to deceive people, the way news feeds have become. The single overwhelming application of AI has been, and will be, sabotage of communication. And since communication isn't optional, AI is certainly a threat to humanity. The only bullshit part is that it's something new: It's just a scaling of the same garden variety con games used in business since forever.
        • by dfghjk ( 711126 ) on Monday October 14, 2024 @07:37AM (#64862993)

          "The single overwhelming application of AI has been, and will be, sabotage of communication. And since communication isn't optional, AI is certainly a threat to humanity. "

          No, the "single overwhelming application" is a threat to humanity, not AI.

          "The only bullshit part is that it's something new: It's just a scaling of the same garden variety con games used in business since forever."

          Right, because the "single overwhelming application" is not new, all that is new is its use of AI.

          Modern AI is merely a new, easy, efficient way to access massive amounts of information that, until recently, required skill and access to massive databases. It is not intelligent. By providing easy access to unlimited trivia, it is a revolutionary enabler of the worst types of humans in society.

      • by dfghjk ( 711126 )

        No, search engines are entirely driven by user queries; they are passive. "Algorithms", in this context, are active query generators that push narratives.

      • No, increasingly the search engines have optimized for serving us advertising. Google has become nearly worthless, with suspect paid-for results interleaved with more obvious advertisements. Google-fu has been overpowered by advertising too.

      • Yes, to answer your question: Search engines WERE the first form of AI.

        And now to answer the question that you've yet to ask: Yes, AI is being used to convert freedom into an opinion. To convert Liberty into an act. AI is the Big Brother that we all feared and forgot was growing all along.

    • And what titans of industry do we have to thank for these life-changing, world-bettering, productivity-amplifying innovations? Nothing will ever change until there is some mechanism for holding shareholders accountable for the societal harms caused by decisions of companies they invest in.

      • by Eunomion ( 8640039 ) on Monday October 14, 2024 @08:31AM (#64863137)
        We arrive at the very simple solution to pretty much all the great harms society endures today: Ending the corporation as a legal fiction (or, really, fictional law) and operating strictly by personal accountability. No more shell games, dodging fines and lawsuit damages by shuffling paper. No more double rights for the corporate owner class.
    • Problem is, most humans are dumber than a cat. And 100% of those, and 99.9% of those that are smarter than a cat, take their brains off the hook and are determined never to use them again after high school. (Assuming they even used the grey matter then.)

      Serve them up results that say "Yes, but" or "No, but" and they never get past the first word.

      So, AI or no AI, "whale oil beef hooked".

      • by evanh ( 627108 )

        It's not that people are dumber. Many are just suckered by lies and don't ask questions. But many are also scared of the truth because they cower before the bullies of the community. They choose instilled "beliefs" over reason even while knowing it's bullshit.

      • Public stupidity is exploited, but it isn't The Problem. When I block false and malicious content from feeds, the algorithm just "retaliates" by amplifying it even further: Like it was designed to actively punish critical awareness, not merely reward stupidity.

        That's where it crosses the line into dangerous. These tools are no longer just being misused by fools, they're being weaponized by malevolent minds against everyone who can remember more than 5 seconds into the past.
      • People are not dumb... the problem is education and discernment. Education has been put in the back seat in many countries, because educated people with good BS detectors are something most governments don't want, especially when the majority of the people have critical thinking skills, knowledge of how the government works, and how they can work within it, like voting in a democracy, or petitions for redress in a Communist state.

        It would be nice if we can go back to focusing and putting actual

      • I didn't say this so well, but I think it is that people are lazy. Even if they've been told 1000 times "Check the results" ... they'll take the easy way out.

        "Computer says no".

    • I have not sold my AI and Nvidia stock yet
    • We have algorithms less intelligent than a rock setting the world on fire around us. I'm much less worried about a fully formed and realized artificial intelligence, than I am about what happens before that... when they are as smart as a raptor (bird, not dinosaur), cat, toddler, and teenager.
  • power (Score:4, Interesting)

    by phantomfive ( 622387 ) on Monday October 14, 2024 @07:07AM (#64862955) Journal
    A for loop is an extremely powerful computational tool, it has made $trillions$ for humanity.
  • by Latent Heat ( 558884 ) on Monday October 14, 2024 @07:09AM (#64862959)

    The late Freeman Dyson remarked on the diminishing returns from throwing resources at scaling up a known type of accelerator and hoping meaningful scientific discovery comes out of it. Maybe he was expressing a contrarian opinion about devoting a big chunk of the NSF budget to the eventually cancelled Superconducting Super Collider?

    AI has always been about solving problems of exponential complexity with hardware that only grows at a polynomial rate when you build a bigger machine at a given level of technology. Yes, Moore's Law and all that: hardware has grown exponentially in capability, and this is behind the current AI renaissance; current hardware is on the cusp of finally doing something useful with AI. But this idea of restarting Three Mile Island and dedicating its electric output to a server farm... can we think a little more critically about this?

    Think of Eric Schmidt's "We need to destroy the Earth's climate in order to save it" about going Hell-for-leather consuming hydrocarbon fuel to meet an exponential growth curve in computing power consumption so the AI will come up with the solution for Climate Change.

    For the faction of Slashdotters who regard Climate Change as overhyped as AI, current levels of CO2 emissions may not be the problem many say they are, but certainly we don't want to greatly increase the rate of CO2 emissions and do we want to write a blank check to build out AI? Even the CEO of TSMC was rolling his eyes at the AI people wanting to spend trillions with a capital T on more fabs to keep up with projected AI demand.

    • I think you are on the right line of thought; the AI bubble can only go so far before investors demand some tangible results from the billions/trillions they have been throwing at it. I’m really hoping the AI craze does not tank the world economy when it pops like the .com bubble.
      • by gweihir ( 88907 ) on Monday October 14, 2024 @08:12AM (#64863085)

        I think the investments are not large enough to massively damage the world economy, but since none of the big investors will recover their investments, some of them will suffer and some will die. Wouldn't it be funny if AI is what finally kills Microsoft?

        • I think the investment is way larger than we might guess. Entire old power plants have been bought and brought back online to fuel the ai craze. There is talk of re-starting 3-mile island, just to power server farms. These are *huge* investments that reach further into the ‘normal’ economy than the .com era ever did. The Contractors, engineers, and workers involved in these massive projects will be left holding the bag when the ai craze flops, and that will have far-reaching impacts.
          • by gweihir ( 88907 )

            Sure. But as these idiotic investments are strongly localized to just a few players, I still expect impact on the world economy will be small. For anybody hit, they will probably be devastating, but greed and stupidity (both strong factors here) come at a price if you let them drive your decisions. So zero pity from me for these people.

            Incidentally, I recently tried to use ChatGPT as "better search" and even for that it is not very good. I asked for sources on several things and what I got was pathetic. The

    • by dfghjk ( 711126 )

      "...certainly we don't want to greatly increase the rate of CO2 emissions and do we want to write a blank check to build out AI? "

      We don't want to do either of these things. Frankly, LLMs are a proof of concept, I don't want them "developed" further at the current time. Efforts need to shift to simulating intelligence, not increasing the amount of knowledge even larger LLMs can integrate. But Altman and Musk are more interested in money.

      "...AI people wanting to spend trillions with a capital T on more fa

    • Plants are still CO2-restricted, but I have a few ferns in my house, so I am ready to make new coal once it ticks up high enough
    • Or to put it in simpler terms: when your only tool is a hammer, every problem is a nail.

      The idea that we need to destroy the planet to build an AI that'll tell us how to not destroy the planet comes from the mind of someone who can only see solutions in terms of computers.

      Scientists will tell you that the actual solution is already there it's just lots and lots of wind and solar Plus a smattering of batteries. Also maybe moving away from suburbs and cars and back to walkable cities and public trans
  • Is there any way to filter out those annoying AI news? We all know it's the hype until that bubble finally bursts, but it's getting very old very quickly.
    • by Teckla ( 630646 )

      Is there any way to filter out those annoying AI news?

      Simply scroll past AI stories.

      We all know it's the hype until that bubble finally bursts, but it's getting very old very quickly.

      It's not all hype. LLMs are extremely useful for a lot of use cases.

  • ..able to impersonate a fellow human being of average intelligence? He might be a professor, but logically this reasoning does not seem to make much sense.
  • by Drethon ( 1445051 ) on Monday October 14, 2024 @07:44AM (#64863023)

    I think a significant threat is going to be people that use LLMs to generate code without understanding it well enough to check for errors. We have enough trouble with people that don't understand memory management, proper input validation, error handling, etc. I'm fairly certain that AI generated code will lead to a whole new wave of insecure code.
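    A concrete, if hypothetical, illustration of the failure mode described above (Python; all names are illustrative, not from any real codebase): generated code that builds SQL queries by string formatting passes a quick demo but is injectable, while the parameterized version is safe.

    ```python
    import sqlite3

    def find_user_unsafe(conn, name):
        # Typical of naive generated code: SQL built by string
        # interpolation, so crafted input rewrites the query.
        return conn.execute(
            f"SELECT id FROM users WHERE name = '{name}'"
        ).fetchall()

    def find_user_safe(conn, name):
        # Parameterized query: the driver treats the input as data,
        # never as SQL.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)
        ).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

    payload = "x' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # returns every row in the table
    print(find_user_safe(conn, payload))    # returns no rows
    ```

    Both functions behave identically for honest input, which is exactly why a reviewer who only runs the happy path never notices the difference.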

    • I think a significant threat is going to be people that use LLMs to generate code without understanding it well enough to check for errors. We have enough trouble with people that don't understand memory management, proper input validation, error handling, etc. I'm fairly certain that AI generated code will lead to a whole new wave of insecure code.

      This here is the real threat, yes. We desperately need to stop with the idea that "everyone should learn to code" or sending people to "code camps" and related nonsense. Software engineering and programming are very hard disciplines. Any idiot can string together enough scripting-language nubbins to be dangerous, but writing reliable, deterministic software is difficult and complex. Software already faces problems in that it is not taken as seriously a discipline as, say, electrical engineering or mechanical engi

  • "One day, machines will exceed human intelligence." - Ray Kurzweil

    "Only if we meet them half-way." - Dave Snowden

    Sounds like Meta, et al., are desperate to become the dumbest guys in the room.
  • Even for a really dumb cat.

  • by doragasu ( 2717547 ) on Monday October 14, 2024 @08:02AM (#64863065)

    So they start using it to make reports and reply to emails. Then they make it participate in business strategy and take risky financial decisions. And that's the real threat.

    • They think that because it's obviously more intelligent than them.
      They just don't have or can't keep up with a good role model.

  • I think AI is already a tool that is being used by threat actors

    Imagine a phone call from your child. Phone number is spoofed and gets right through because it's the correct number, the voice sounds enough like your child that the distress in the voice bypasses all reasoning in your mind. Actual photos of your child are altered and sent showing them in a distressing situation. I think this type of threat would fool a lot of people if it was timed right.

  • And with the intelligence they've got, in the right environment they can be lethal. If you don't believe it, feel free to wander the African savanna unarmed.

  • by JoshuaZ ( 1134087 ) on Monday October 14, 2024 @08:38AM (#64863155) Homepage
    LeCun has been saying things like this for a while, and it is striking that he has not grappled much with the people who disagree with him, and has mostly responded with a mix of condescension and incivility, even when they are often other highly accomplished people. Meanwhile, Hinton who shared the prize with LeCun has become deeply concerned about AI risk. See https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/ [cbsnews.com] and https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai [mit.edu]. And it is worth noting that Hinton became convinced of these concerns, by actually going and looking at the arguments people have made. At this point, I'm not sure what would possibly convince LeCun that AI is a threat that wasn't immediately fatal to most of humanity.
    • It is common courtesy to politely ignore old professors when they go senile. Too bad the news isn't that polite.

    • by Somervillain ( 4719341 ) on Monday October 14, 2024 @09:15AM (#64863237)
      LeCun is on the money in regards to today's AI. Generative AI is fancy autocomplete. It's not the Matrix or Terminator. There are dangers with Generative AI, but not in the ability to outsmart us, but the ability to scam us. It helps scammers write better spam....it's too inaccurate to replace real workers, but if it gets a little better it can generate content "close enough" to reality to fool your grandma into investing into fake companies, for example. It will enable a new era of grift. It will generate floods of fake content our human brains won't be able to verify...especially around election cycles. The notion of Haitian immigrants eating your cats and dogs is ridiculous, for example. However, what happens when ChatGPT is able to flood the internet with false stories backing up the claim?...spoof sites that look enough like reputable newspapers that people fall for it? If it did it now, most of us would see enough errors in the text to realize it was ChatGPT, but I believe that will improve and it'll get harder to identify scams. It will also know which scams have been most successful in the past and emulate them.

      I am personally glad LeCun has been saying what I've been saying since I first played with ChatGPT and copilot...I can't say they're "useless" but I am confident that they're too error-riddled to replace a human worker for any job you're willing to pay to have done today.
  • by ZenDragon ( 1205104 ) on Monday October 14, 2024 @08:54AM (#64863187)
    If AI ever gets as smart as a cat we are all doomed. They've been trying to kill us for centuries!
  • AI, currently, is dumb as a box of hammers. It is amazing in its ability to mimic some of the output of a talented human being while completely lacking any kind of intelligence.

    However, human brains aren't made of magic, and there's no reason to believe that our intelligence is anything other than the results of an incredibly complex web of patterns resulting from fairly basic stimulus and response chains.

    Eventually, we're going to make a true AGI. I suspect there will be some hardware development require

  • by Lavandera ( 7308312 ) on Monday October 14, 2024 @09:24AM (#64863259)

    Someone faking your election...

    Someone using it to scam you...

    Someone putting it on flying bombs to bomb cities and hospitals...

  • The problem at the moment isn't so much that AI is "smart."

    The problem is all the people who think it is. It's being moved from a decision-support role into a decision-making one. Whoops.

  • It doesn't take intelligence to be destructive.

    And given that AI is taught on media full of cats and people being destructive for "lulz", it's almost certain LLMs will do horrible things.

  • The problem is people who use AI
    Today's AI is mostly harmless and useless. Tomorrow's AI will be a powerful weapon in the hands of bad people
    We need effective defenses

  • by rsilvergun ( 571051 ) on Monday October 14, 2024 @10:39AM (#64863403)
    it's AI that works. Specifically, AI that works and suddenly replaces 20-30% of all workers.

    Folks are focused on chatbots and call centers, maybe programmers, but we're seeing lots and lots of other applications, like advanced manufacturing robots, self driving cars, etc.

    Part of that is we got so excited with LLMs that we forgot about general purpose ML.

    We're nowhere near ready for what's coming. It's another Industrial Revolution.

    Everyone remembers the luddites as just loons that impeded progress.

    Fact is, during both industrial revolutions we had *decades* of technological unemployment before other new tech caught up and got us back to where we are now.

    Historically an Industrial Revolution is like any other revolution: bloody as hell. WWI and WWII weren't accidents. There were high tensions due to high unemployment rates and lots of politicians looking for something to do with their excess populations they didn't need or want.
  • So a cat is smarter than something created by humans. Good to know.

    I wholly accept our fuzzy feline overlords.

  • by Rick Schumann ( 4662797 ) on Monday October 14, 2024 @10:58AM (#64863441) Journal
    The real danger from so-called 'AI' is that people will be fooled into believing it is 'smart', rely on it too much, believe it too much, then disasters -- up to and including people getting killed -- will happen.
    Also, some countries (like China, as an example) will not hesitate to use AI in military applications where it has control of weapons systems, leading to what will amount to war crimes.
  • It got a 'memory' recently, apparently, although it never uses it.

    I told it about 250 times (not exaggerating) not to use numbering, bullets or subtitles in the answers it gives me; it can't manage that for more than 3 replies before completely forgetting and using all 3.

  • by hey! ( 33014 ) on Monday October 14, 2024 @11:56AM (#64863603) Homepage Journal

    What matters is what we allow it to do without requiring human intervention.

  • I think he's probably right that we don't need to yet worry about "ultra-intelligent" AI.

    But I'd also be worried about cats if Silicon Valley idiots were pouring billions of dollars into making them smarter and giving them access to basically everything they can think of.

    Being "dumb as a cat" isn't a virtue, nor is it a reason to ignore something. A coronavirus is way dumber than a cat, and yet...

  • AI can't be dumber than a cat, because I recently pasted in two translations of a Classical Japanese text, one my own and one by another translator, and the AI accurately described some of the methods I was using, what differed about them, and what it liked about them compared to the other translator's.

    None of this scarequote "dumbness" about how many r's are in strawberry degrades its smartness on actual complex tasks it gives accurate answers for.

    And a lot of the attempts to describe AI as dumb are
  • Some humans are also dumber than cats. So we still do not know whether LLMs may be smarter than humans. Looking at some of them (humans), I suspect the LLMs are.

  • AI is *already* taking jobs. Because while it may have the cognition of a flatworm, it has the intelligence and skills of a college graduate.

    We're losing about 4,000 jobs in the U.S. this year. But it could easily be 10,000 jobs per month next year.

    And ... "AI could impact 40% of working hours and create or destroy millions of jobs by 2027, according to reports from Accenture".

    It's not just the A.I. "replacing" human workers, but the A.I. letting one human worker produce the output of three or four huma

    • by dmay34 ( 6770232 )

      If your job can be replaced by an AI today then your job really wasn't that important to begin with.

      • If you could be replaced by another human being, then A.I. will probably be able to replace you within a decade.

        That's not all jobs. But it is most jobs.

  • That's a tall order, I don't think there's a single brain cell between our orange and black cats. They manage to get both of them trapped in an upside down garbage can on a weekly basis. Which takes talent because Angry orange is almost as big as the can...
  • META's Content Moderation is so ridiculously terrible.. It relentlessly blocks benign content and fails to block actual nefarious content consistently. The fact he purports that as a positive example of what their AI tech can do might be the best indication of why his opinion of AI isn't worth listening to..

    Disclaimer: I am absolutely not against AI.. I say let it come and enjoy the ride. I am however *adamantly opposed to the torrent of shit software being released and blowhards like this guy wasting bandw

  • It seems like the cliche senior scientist in the end-of-the-world movie who doesn't believe to the very end what is about to happen
  • I wasted enough time trying to explain a diagnosis and treatment to somebody when they retort, "but I read it on the Internet". Now that science fiction movies have hyped the ability of AI to be (perceived as) so special, just imagine people saying "but the AI report says you're wrong, doctor".
  • .. dumb [pinimg.com].

  • As a cat advocate, I find this title abhorrent. The cat representation expresses their disdain.
  • Not only are cats not "dumb", they arguably *domesticated humans* and used us to conquer the world.

    Ask yourself: when the day arrives that humans have successfully colonized Mars, which animal do you think will become the first to be deliberately taken to Mars to become the mother (via IVF) of countless future companions for colonists?

    Yep. Cats.

  • AI can't replace humans so long as humans still have legal liability for what they produce. Consider the AI lawyer, for example. Sure, an AI can produce an output that looks like a legal brief, but a human still has to spend basically just as much time reviewing and re-writing the output as it would have taken the lawyer to just write it themselves. Sure, AI will get better, and do a better job with accuracy, but the human will never be able to fully trust it. That means that every line will always have
