AI Ethics Pioneer Calls Artificial General Intelligence 'Just Vibes and Snake Oil' (ft.com) 34

Margaret Mitchell, chief ethics scientist at Hugging Face and founder of Google's responsible AI team, has dismissed artificial general intelligence as "just vibes and snake oil." Mitchell, who was ousted from Google in 2021, has co-written a paper arguing that AGI should not serve as a guiding principle for the AI industry.

Mitchell contends that both "intelligence" and "general" lack clear definitions in AI contexts, creating what she calls an "illusion of consensus" that allows technologists to pursue any development path under the guise of progress toward AGI. "But as for now, it's just like vibes, vibes and snake oil, which can get you so far. The placebo effect works relatively well," she told FT in an interview. She warns that current AI advancement is creating a "massive rift" between those profiting from the technology and workers losing income as their creative output gets incorporated into AI training data.


Comments Filter:
  • ...the hype vastly exceeds reality
    When billions are at stake, hypemongers use overly optimistic predictions as bait to attract investment
    I agree with LeCun, AGI will not be achieved using current LLM tech
    New approaches will be needed

    • Sounds like cope.

      The next 18 months will bring wonders and horrors, the timeline is open but the conclusion is the same.

      Intelligence will be commoditized.

      • by gweihir ( 88907 )

        Hahahaha, no. Also note that the hype has now been running for about 2 years and nothing even remotely like your deranged "prediction" has come true.

        Should you actually have attempted satire, my apologies.

    • by Cassini2 ( 956052 ) on Thursday June 19, 2025 @12:30PM (#65461191)

      It used to be that a photographer could make a living taking a picture in a war zone and sending it back to HQ for printing in the evening or morning newspaper. Now, there is no newspaper to pay the photographer. Worse, if he sells the picture to any random site on the internet, the rest of the internet copies that picture. Thus, he gets paid once ($200) for taking that picture, when he used to be able to sell the same picture to hundreds of different newspapers for $200 each.

      The same thing happened to many other creative disciplines. It used to be that a journalist would go write a story because it sold newspapers, and the newspaper paid him. Now we don't have the journalist because we don't have the newspaper. AI can copy the story.

      The music industry no longer pays its creators. There are now YouTube videos that look like a bot created them.

      AI is coming for computer programmers.

      The danger is that we don't really have a model to pay the people at the start of the food chain: the original photographers, writers, etc. We have a model where we pay for the AI-generated copies, but not the original creators. In a world drowning in "information", we have a hard time finding valid, new, useful information.

      We are looking at a societal shift to enshittification. A dumber world for more challenging times.

      • by narcc ( 412956 )

        AI is coming for computer programmers.

        LOL!

      • Your logical conclusion doesn't follow. Your concrete examples are all artistic in nature. Photography, journalism, story authors, musicians. You conclude that because it's harder today to make a living as an artist, it will therefore be harder in the future to make a living as a programmer.

        It's really an aberration of history that in the last 100 years, it's been possible to make a living as an artist. In centuries past, only artists in the king's court could make a living singing, playing instruments, wri

      • The same thing happened to many other creative disciplines. It used to be we had a journalist that wanted to go write a story because it would sell newspapers that would get him paid. Now we don’t have the journalist because we don’t have the newspaper. AI can copy the story.

        The Music industry no longer pays its creators. There are now Youtube videos that seem like a bot created them.

        AI is coming for computer programmers.

        AI didn't do that, the Internet did. Everything that's wrong with the Internet did that. Every damned day someone on this site will say buying the news is dead because you can get it for free somewhere else. It's free news all the way down.

        The music industry... bro, where have you been? Again, welcome to the Internet, free music and streaming and algorithmic recommendations. If you liked this song you'll love this other one that sounds the same from a band you've never heard of that is paid in beans.

        Program

    • by narcc ( 412956 )

      I agree with LeCun, AGI will not be achieved using current LLM tech

      Was this even in question? By anyone other than unimaginative pop sci writers and crackpots, that is.

    • Not a bad FP, but I think we have to consider kinds of progress.

      In terms of theory, I am skeptical how much of the "progress" in AI qualifies as "genuine". We only weakly understand how the DLMs work and our understanding goes down from there. We do not know what constitutes intelligence or consciousness or the human soul or any of that fuzzy stuff. (My current theory is that a lot of it is a compression artifact created by our adoption of language, amplified by a second level of compression from written la

      • by thoper ( 838719 )
        Is intelligence an emerging property of language, or is language an emerging property of intelligence? Interesting take.
        • Is intelligence an emerging property of language, or is language an emerging property of intelligence? Interesting take.

          Intelligence is resource-expensive, so it needs to be a net benefit to the animal. If an animal can manipulate and improve its local environment, intelligence helps it make better changes. Thus, intelligence is an emerging property of manipulating the local environment. Language allows an animal to share complex ideas with others, and complex ideas require intelligence to create and understand.
          Language, then, is an emerging property of intelligence.

    • by gweihir ( 88907 )

      There is genuine progress, but it is not world-changing at all. Somewhat better search, better bullshit, and better bullshit removal, essentially. The latter two will only excite bureaucrats. Also note that this AI hype is one in a series, and all the ones before followed the same script: make bombastic claims, some people get rich, and what actually results is something, but not much, and nowhere near the claims made.

  • AI Ethics Pioneer Calls Artificial General Intelligence 'Just Vibes and Snake Oil'

    This is exactly what ChatGPT told me when evaluating the mission statements of most companies listed on NASDAQ.

  • Definitions (Score:3, Insightful)

    by Artem S. Tashkinov ( 764309 ) on Thursday June 19, 2025 @12:23PM (#65461173) Homepage

    Is this a case of self-promotion, or an attempt to appear relevant when you're not?

    Current LLMs are extremely powerful, automating tasks that seemed impossible three years ago and rendering many jobs obsolete. They just work. Yes, they haven't discovered new science yet, but that doesn't make them useless.

    As for AGI, look no further than this research [arxiv.org]; those tasks are hellishly difficult. One could argue that the average person lacks "general" intelligence, because only the brightest minds can solve these tasks. We are not yet close to something truly "general", but even at this stage it's staggering what LLMs have achieved.

    • Re: (Score:3, Funny)

      by bjoast ( 1310293 )
      Stop! According to Slashdot dogma, LLMs are no more impressive than chatbots from the 1990s. They only "predict the next token" and thus cannot possibly constitute a major technological leap. If you're impressed by this technology, you must be wrong.
      • by narcc ( 412956 )

        They only "predict the next token" and thus cannot possibly constitute a major technological leap.

        You think they do something else?
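
        For what it's worth, "predict the next token" is literal: generation is an autoregressive loop that repeatedly asks the model for a continuation and appends it. A toy sketch of that loop (the bigram table here is invented for illustration; a real LLM replaces the lookup with a learned neural network over probabilities, but the loop has the same shape):

        ```python
        # Toy autoregressive generation: predict the next token, append it, repeat.
        # A hand-made bigram table stands in for the model.
        BIGRAMS = {
            "the": "cat",
            "cat": "sat",
            "sat": "down",
        }

        def generate(prompt, max_tokens=3):
            tokens = prompt.split()
            for _ in range(max_tokens):
                nxt = BIGRAMS.get(tokens[-1])  # "predict" the next token
                if nxt is None:                # no continuation known: stop
                    break
                tokens.append(nxt)
            return " ".join(tokens)

        print(generate("the"))  # prints: the cat sat down
        ```

        Everything an LLM emits comes out of that one-token-at-a-time loop; the argument in this thread is over whether that mechanism can or cannot amount to more than it looks like.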

    • Based on documented history and observation over hundreds of years, humans are fallible. Imperfect. Capable of building societies armed with an average intelligence that makes even a semi-intelligent human cringe thinking about the sheer number of fucking morons on this planet.

      THAT, is what an LLM has to compete against. That is ALL an LLM has to compete against. Because even toddler-grade AI is capable of replacing a LOT of fucking morons being paid to do jobs today.

      The problem with discussing or even

  • by dvice ( 6309704 ) on Thursday June 19, 2025 @12:36PM (#65461207)

    Unlike the "AI ethics" claims, Google DeepMind has a definition for AGI, with progressive steps and a method to test whether you are moving toward it or not:
    https://aibusiness.com/ml/what... [aibusiness.com]

    • > Google Deepmind has a definition for AGI,

      That's not a definition, it's just a set of subjective heuristics for measurement. And it's not even as useful as the basic Turing test, which is a much more concise yardstick.

      Definitions of AGI all seem to come down to "we'll know it when we see it", which is exactly the same as saying "we have no idea what it is."

    • by narcc ( 412956 )

      Silly nonsense. You can't possibly take that seriously.

    • Unlike the "AI ethic" claims, Google Deepmind has a definition for AGI, with progressive steps and a method to test whether you are going towards it or not: https://aibusiness.com/ml/what... [aibusiness.com]

      Gotta love the idiotic irony of the last step, which defines AGI as outperforming 100% of all humans.

      As if we meatsacks will be smart enough to tell AGI what it is, and what it isn’t. I suppose next we’ll assume we can turn it off.

    • by gweihir ( 88907 )

      No. What Google has is a cleverly crafted lie, because they pretend that these steps are within reach. And you fell for it.
