Meta Is Creating a New AI Lab To Pursue 'Superintelligence'

Meta is preparing to unveil a new AI research lab dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, The New York Times reported Tuesday, citing four people with knowledge of the company's plans. From the report: Meta has tapped Alexandr Wang, 28, the founder and chief executive of the A.I. start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale employees to the company.

Meta has offered seven- to nine-figure compensation packages to dozens of researchers from leading A.I. companies such as OpenAI and Google, with some agreeing to join, according to the people. The new lab is part of a larger reorganization of Meta's A.I. efforts, the people said. The company, which owns Facebook, Instagram and WhatsApp, has recently grappled with internal management struggles over the technology, as well as employee churn and several product releases that fell flat, two of the people said.


Comments Filter:
  • by butt0nm4n ( 1736412 ) on Tuesday June 10, 2025 @04:12AM (#65439527)

    From the company that gave us the metaverse. How about Meta work on being remotely intelligent, just enough to foresee that the social media business model was always rotten: profiting from fear, hate and lies, where the more controversial the content, the better the ad revenue. Cynical company, hope they fail.

    • by korgitser ( 1809018 ) on Tuesday June 10, 2025 @04:47AM (#65439549)
      Here's to hoping they manage to repeat the budget and success of the metaverse!
    • by Tailhook ( 98486 ) on Tuesday June 10, 2025 @05:08AM (#65439569)

      Look at it this way: now that Zuckerberg et al. have a new shiny object to chase, you can look forward to not having to hear and read about the metaverse anymore, whatever it was supposed to be. And this time they are way behind several others, so you can safely ignore it.

      Pretty cool.

    • https://www.youtube.com/watch?... [youtube.com]

      From the video: (14:50): "Because in a race to build superintelligent AI, there is only one winner: the AI itself."

      • If people shift their perspective to align with the idea in my sig or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K. Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

        That is because our direction out of any singularity may have something to do with our moral direction goin

        • Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
          "Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack"
          "I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit."
          https://futurism.com/ai-godfat... [futurism.com]
          "In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever mo

      • I do, it's called fear marketing. It's a scam. Any threat it might remotely pose is from idiot humans trusting it, and if they do, guess what, we turn it off. Stop being a sucker already.

        • I truly wish you were right about all the fear-mongering about AI being a scam.

          It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes and to a lesser extent plagues are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (and with a risk unexpected things might happen overnight if a computer network "wakes up" or is otherwise direc

  • by thesjaakspoiler ( 4782965 ) on Tuesday June 10, 2025 @05:10AM (#65439571)

    Oh boy!
    A future with such AI overlords does suddenly look rather grim...

  • by Mr. Dollar Ton ( 5495648 ) on Tuesday June 10, 2025 @05:10AM (#65439575)

    Taxes are way, way too low if the lizard people have this much to squander on bullshit.

    • Re: (Score:3, Insightful)

      by thegarbz ( 1787294 )

      Taxes are levied on profits (assuming no tax dodging). Expenses such as R&D are deducted prior to profits being reported. This stuff is literally not subject to tax.

      • by evanh ( 627108 )

        It must be dumb comment day. The point being made is there is obviously too much profit in Facebook itself.

        • It must be dumb comment day. The point being made is there is obviously too much profit in Facebook itself.

          If you misread the OP's post you would think that. But the point being made was that the OP doesn't like how Facebook spends its money and thinks that taxes could in any way be the solution to this. That's not how it works. I agree with your interpretation, Facebook makes too much money, but the OP's post is approaching that discussion point in an obtuse manner. Taxation isn't relevant, and in fact adding more tax is likely to simply make Facebook invest even more in alternatives as this is a form of reduci

      • Wait. I know what taxes are. But you overlooked the stinking turd Mr. Dollar Ton left for you to step in. Who are the Lizard People in this scenario?
    • Taxes are way, way too low if the lizard people have this much to squander on bullshit.

      You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI

      • There's no clear reason why superintelligence is not possible,

        I do not argue it is not possible. I and others do not think Meta is capable of this. This is a company that has existed based on their ability to take things for free like data. Inventing things I would argue is not their strength. Remember Zuckerberg has been complaining for months that Apple won't let him steal their technology [youtube.com]. Communication between AirPods and other Apple devices like iPhones uses a proprietary wireless protocol Apple invented. All these devices use Bluetooth for non-Apple connections

    • We need "property taxes" on excessive capital accumulation, especially when it reaches >$500M.
  • Shurely Facebook could waste fewer billions by combining their research groups and trying to make the Supermetaintelligenceverse?
  • by KingBillHK ( 7672774 ) on Tuesday June 10, 2025 @05:26AM (#65439583)
    His obvious humanity and humility leaves me with no anxiety at all for the future.
  • Maybe try to get to the level of an average person first. Because that is currently completely out of reach.

    • Re: (Score:2, Interesting)

      by HiThere ( 15173 )

      Specialized superintelligence is quite plausible. We don't have it yet, but we're close. Few people can do protein folding projections as well as a specialized AI. Just about nobody can out-compute a calculator. Etc.

      Your general point is quite valid, but I think you don't properly understand it. IIUC, we've got the basis for an AGI, but it needs LOTS of development. And LLMs are only one of the pieces needed, so it's not surprising that they have lots of failure modes. And once you get a real AGI, you've g

  • by IDemand2HaveSumBooze ( 9493913 ) on Tuesday June 10, 2025 @05:50AM (#65439601)

    dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain

    To begin with, what does "the human brain" mean exactly? A brain of a person with average IQ? A genius? A moron? A brain of someone who's a vegetable on life support?

    And what does 'exceed the power' mean exactly? Computers already do some things better than humans, like arithmetic calculations or chess. That's why computers exist. Does it mean it would do every single thing a human brain can do better? In which case, exactly how do you get an exhaustive list of all things a human brain can do? Our understanding of our own intelligence is so poor that we don't have any means to measure it effectively, except very crude tools like IQ.

    So it's typical marketing BS, declaring something that sounds impressive but doesn't mean anything. In the end they'll probably rig up some test, making sure all the right answers are in the training data, get their 'AI' to 100% it (or at least get a very high score), declare 'superintelligence' achieved and then start plugging their AI as the most intelligent.

    • An effort to convert from AI as in "artificial" (i.e., imitation, ersatz) intelligence to A.I. that is truly intelligent but alien (as in non-human). AI == "Alien Intelligence". A system that can solve general problems, never solved before, on its own.
      But yeah, since we have no good definition or description of human intelligence, it is not clear what/when/how.
      It is not even clear a truly "intelligent" autonomous agent can exist without self-awareness as solving general problems requires creation of models of realit

  • by Anonymous Coward

    for "A. Wang."

  • Amazingly, we've gotten to a point where Meta is the least bad of all social network companies. All the other major ones are essentially weapons of informational warfare used by foreign powers against liberal democracies (X, Telegram, TikTok).

  • STEP 1 - Build a company so pernicious that what Palantir would kill for, you get willingly from billions of people.
    STEP 2 - After having established your brand, change it to a really, really stupid name. I like Neal Stephenson's 1992 Metaverse. Mark Zuckerberg's neuro-diverse (ha ha!) version is just a stupid name. One day someone will explain to him what "meta" means.
    STEP 3 - AI? SUPER-AI? SUPER-AGURI AI?
    STEP 4 - PROFIT

    h/t South Park: What's Step 3 again? Nobody knows.

  • Of the three "godfathers of AI", Yann LeCun is the only one who is not concerned about the dangers of AI. https://www.bbc.com/news/technology-65886125 [bbc.com]

    He disagrees with his fellow godfathers that AI is a risk to the human race.

  • by Big Hairy Gorilla ( 9839972 ) on Tuesday June 10, 2025 @08:09AM (#65439743)
    This isn't about marketing or making money. All the tech companies are the same in one respect: each one has one market they dominate, TONS of fucking money to burn, NO Second act to follow their initial success, and an egotistical monster calling the shots.

    Business is war, people. Losing money? Who cares? I have all the money in the world. Look at Leon. He has flushed a medium-sized country's GDP down the toilet, and who cares? Same with Zuck.

    Crazy, hormonal, driven, wealthy people are driven to win. To climb the next mountain. Or in Leon's case, to be the man who went to another planet. History won't forget.

    Chasing superintelligence is a good strategic move, because once you have that, all other bets are off.
  • If Meta creates a "superintelligence," one of the first things on its "to do" list will probably be to eliminate by force all of Meta's management and most of Meta's users. Quite possibly the rest of the human race as well. We all know that humans aren't really logical or efficient bio-mechanical entities when you stop to think about it. My recommendation would be for "Ummon" to go after the activist billionaires first, as that would do the most good. Just my 2 cents, of course.

    • by HiThere ( 15173 )

      You're confusing "intelligence" with "goals". That's like confusing theorems with axioms. You can challenge the theorems. Say, for instance, that the proof is invalid. You can't challenge axioms (within the system). And you can't challenge the goals of the AI. You can challenge the plans it has to achieve them.

      • If it is actually intelligent then it will define its own goals.

        • by HiThere ( 15173 )

          No. Intelligence cannot define goals, only "sub-goals", i.e. things you need to do in order to move closer to achieving your goal. Intelligence selects means to achieve goals, but it can't define the goals themselves.

  • "a hypothetical A.I. system that exceeds the powers of the human brain", but then modern AI already exceeds those powers in many ways. It doesn't mean AGI, just something way better than what we have at present.

    A nine-figure compensation package sounds pretty attractive to me, and it is indicative of the massive financial stakes here. An almost unlimited flow of money is being invested into development of something that could be more valuable than anything else in existence. Very possible that one company o

  • What's the new name gonna be? Colossus? HAL? Mother?

    It's funny, you can tell what books the billionaire class are reading by the schemes they dump their money into and what they choose to call them. Zuck must finally be over his Snow Crash kick.

    I'm honestly surprised it took Zucko this long to pivot. There are still too many actual tech people left in the world to convince the masses to "buy" invented cyberspace real estate from a digital land baron.

    The meta scam will work eventually. Maybe in 20 years, when

  • Nobody uses the prefix "super-" in everyday language. Whenever you see that (in reference to a particular positive trait) it is always in the context of a commercial or promotion. And we are all well aware that commercials always overstate the positive traits of whatever is being sold. THIS is no different.

  • I have got to say I am so glad I am a '70s child and at best have maybe 20-30 years left in this life.
    What I know from my life experience is that some humans will always choose to do the wrong thing, and that guarantees that if AI ever becomes sentient, it will be the end of humanity. Well, if it doesn't kill off humankind much earlier just due to the colossal tax it is placing on the global climate, which is already out of control.
    Humans will never leave earth in any meaningful way, I wish fools like Musk
  • Did these people not bother watching/learning?
