

Meta Is Creating a New AI Lab To Pursue 'Superintelligence'
Meta is preparing to unveil a new AI research lab dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, the New York Times reported Tuesday, citing four people with knowledge of the company's plans. From the report: Meta has tapped Alexandr Wang, 28, the founder and chief executive of the A.I. start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale employees to the company.
Meta has offered seven- to nine-figure compensation packages to dozens of researchers from leading A.I. companies such as OpenAI and Google, with some agreeing to join, according to the people. The new lab is part of a larger reorganization of Meta's A.I. efforts, the people said. The company, which owns Facebook, Instagram and WhatsApp, has recently grappled with internal management struggles over the technology, as well as employee churn and several product releases that fell flat, two of the people said.
Of course they are ... (Score:5, Insightful)
From the company that gave us the metaverse. How about Meta work on being remotely intelligent, just enough to foresee that the social media business model was always rotten, profiting from fear, hate and lies, the more controversial the better the ad revenue. Cynical company, hope they fail.
Re:Of course they are ... (Score:5, Insightful)
Look at it this way: now that Zuckerberg et al. have a new shiny object to chase, you can look forward to not having to hear and read about the metaverse anymore, whatever it was supposed to be. And this time they are way behind several others, so you can safely ignore it.
Pretty cool.
"You Have No Idea How Terrified AI Scientists Are" (Score:2)
https://www.youtube.com/watch?... [youtube.com]
From the video: (14:50): "Because in a race to build superintelligent AI, there is only one winner: the AI itself."
Re: (Score:2)
If people shift their perspective to align with the idea in my sig or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
That is because our direction out of any singularity may have something to do with our moral direction goin
Re: (Score:2)
Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack"
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit."
https://futurism.com/ai-godfat... [futurism.com]
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever mo
Re: (Score:1)
I do, it's called fear marketing. It's a scam. Any threat it might remotely pose comes from idiot humans trusting it, and if they do, guess what, we turn it off. Stop being a sucker already.
Could we "pull the plug" on networked computers? (Score:2)
I truly wish you are right about all the fear mongering about AI being a scam.
It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes and to a lesser extent plagues are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (and with a risk unexpected things might happen overnight if a computer network "wakes up" or is otherwise direc
Training the AGI with Facebook data... (Score:5, Insightful)
Oh boy!
A future with such AI overlords does suddenly look rather grim...
Re: (Score:2)
If they are using Facebook data to train it, we have nothing to worry about. There's no intelligence to be found there.
One thing is obvious... (Score:5, Insightful)
Taxes are way, way too low if the lizard people have this much to squander on bullshit.
Re: (Score:3, Insightful)
Taxes are levied on profits (assuming no tax dodging). Expenses such as R&D are deducted prior to profits being reported. This stuff is literally not subject to tax.
Re: (Score:3)
It must be dumb comment day. The point being made is there is obviously too much profit in Facebook itself.
Re: (Score:2)
It must be dumb comment day. The point being made is there is obviously too much profit in Facebook itself.
If you misread the OP's post you would think that. But the point being made was that the OP doesn't like how Facebook spends its money and thinks that taxes are somehow the solution to this. That's not how it works. I agree with your interpretation, Facebook makes too much money, but the OP's post is approaching that discussion point in an obtuse manner. Taxation isn't relevant, and in fact adding more tax is likely to simply make Facebook invest even more in alternatives as this is a form of reduci
Re: One thing is obvious... (Score:2)
Dollar Ton left for you to step in. Who are the Lizard People in this scenario?
Re: (Score:2)
Taxes are way, way too low if the lizard people have this much to squander on bullshit.
You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI
Re: (Score:2)
There's no clear reason why superintelligence is not possible,
I do not argue it is not possible. I and others do not think Meta is capable of this. This is a company that has existed based on their ability to take things for free like data. Inventing things I would argue is not their strength. Remember Zuckerberg has been complaining for months that Apple won't let him steal their technology [youtube.com]. Communication between AirPods and other Apple devices like iPhones uses a proprietary wireless protocol Apple invented. All these devices use Bluetooth for non-Apple connections
Re: (Score:2)
Heck, even income tax! "has offered seven- to nine-figure compensation packages"
Re: (Score:2)
Good luck.
Re: (Score:2)
It's not lies, it's hallucinations.
Re: An army of Indians will do as well. (Score:2)
If it didn't lie just like the human lied, that would be a bug, and it would get fixed quickly.
Merge the groups (Score:2)
Zuckerborg is the right person to lead on this.. (Score:4, Funny)
More like "Superstupidity" (Score:2)
Maybe try to get to the level of an average person first. Because that is currently completely out of reach.
Re: More like "Superstupidity" (Score:2)
It's simulation, not emulation.
Emulation implies you're mimicking function.
We don't know enough about the function of neurons to emulate them.
Re: (Score:2)
Yep. Pretty worthless. Basically a computer without software.
Re: (Score:2, Interesting)
Specialized superintelligence is quite plausible. We don't have it yet, but close. Few people can do protein folding projections as well as a specialized AI. Just about nobody can out compute a calculator. Etc.
Your general point is quite valid, but I think you don't properly understand it. IIUC, we've got the basis for an AGI, but it needs LOTS of development. And LLMs are only one of the pieces needed, so it's not surprising that they have lots of failure modes. And once you get a real AGI, you've g
Re:More like "Superstupidity" (Score:4, Insightful)
Specialized superintelligence is quite plausible. We don't have it yet, but close.
Only if you are lacking in natural intelligence. And here is a hint: if it is specialized, it is not an intelligence, it is a pocket calculator or the like.
What does that even mean? (Score:5, Insightful)
dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain
To begin with, what does "the human brain" mean exactly? A brain of a person with average IQ? A genius? A moron? A brain of someone who's a vegetable on life support?
And what does 'exceed the power' mean exactly? Computers already do some things better than humans, like arithmetic calculations or chess. That's why computers exist. Does it mean it would do every single thing a human brain can do better? In which case, exactly how do you get an exhaustive list of all things a human brain can do? Our understanding of our own intelligence is so poor that we don't have any means to measure it effectively, except very crude tools like IQ.
So it's typical marketing BS, declaring something that sounds impressive but doesn't mean anything. In the end they'll probably rig up some test, making sure all the right answers are in the training data, get their 'AI' to 100% it (or at least get a very high score), declare 'superintelligence' achieved and then start plugging their AI as the most intelligent.
Alien Intelligence (Score:2)
An effort to shift from AI as in "artificial" (i.e. imitation, ersatz) intelligence to AI that is truly intelligent but alien (as in non-human). AI == "Alien Intelligence". A system that can solve general problems, never solved before, on its own.
But yea, since we have no good definition and description of human intelligence it is not clear what/when/how.
It is not even clear a truly "intelligent" autonomous agent can exist without self-awareness as solving general problems requires creation of models of realit
Pizza delivery (Score:1)
for "A. Wang."
Meta isn't even the worst (Score:1)
Amazingly, we've gotten to a point where Meta is the least bad of all social network companies. All the other major ones are essentially weapons of informational warfare used by foreign powers against liberal democracies (X, Telegram, TikTok).
STEP 4 - PROFIT (Score:1)
STEP 1 - Build a company so pernicious that it gets willingly from billions of people what Palantir would kill for.
STEP 2 - After having established your brand, change it to a really really stupid name. I like Neal Stephenson's 1992 Metaverse. Mark Zuckerberg's neuro-diverse (ha ha!) version is just a stupid name. One day someone will explain to him what "meta" means.
STEP 3 - AI? SUPER-AI? SUPER-AGURI AI?
STEP 4 - PROFIT
h/t Southpark: What's Step 3 again? Nobody knows.
The three godfathers (Score:3)
Of the three "godfathers of AI", Yann LeCun is the only one who is not concerned about the dangers of AI. https://www.bbc.com/news/technology-65886125 [bbc.com]
Re: (Score:3)
I don't think AI in itself is a risk, only the people abusing it (at creation or usage time) or blindly trusting it. Everything always comes down to human nature.
AIs are like any other machine. They're either a benefit or a hazard. If they're a benefit, it's not a problem.
Re:The three godfathers (Score:4, Insightful)
A sufficiently superhuman AI would, itself, be a risk, because it would work to achieve whatever it was designed to achieve, and not worry about any costs it wasn't designed to worry about.
Once you approach human intelligence (even as closely as the currently freely available LLMs) you really need to start worrying about the goals the AI is designed to try to achieve.
Re: (Score:2)
I repeat my post. With emphasis on human and people.
You're holding it wrong (Score:3)
Business is war, people. Losing money? Who cares? I have all the money in the world. Look at Leon. He has flushed a medium sized country's GDP down the toilet, and who cares? Same with Zuck.
Crazy, hormonal, driven, wealthy people are driven to win. To climb the next mountain. Or in Leon's case, to be the man who went to another planet. History won't forget.
Chasing superintelligence is a good strategic move, because once you have that, all other bets are off.
Irony at its finest (Score:1)
If Meta creates a "super intelligence" one of the first things on its "to do" list will probably be to eliminate by force all of Meta's management and most of Meta's users. Quite possibly the rest of the human race as well. We all know that humans aren't really logical or efficient bio-mechanical entities when you stop to think about it. My recommendation would be for "Ummon" to go after the activist billionaires first as that would do the most good. Just my 2 cents, of course.
Re: (Score:2)
You're confusing "intelligence" with "goals". That's like confusing theorems with axioms. You can challenge the theorems. Say, for instance, that the proof is invalid. You can't challenge axioms (within the system). And you can't challenge the goals of the AI. You can challenge the plans it has to achieve them.
Re: Irony at its finest (Score:2)
If it is actually intelligent then it will define its own goals.
Re: (Score:2)
No. Intelligence cannot define goals, only "sub-goals", i.e. things you need to do in order to move closer to achieving your goal. Intelligence selects means to achieve goals, but it can't define the goals themselves.
"seven- to nine-figure compensation packages" (Score:2)
"a hypothetical A.I. system that exceeds the powers of the human brain", but then modern AI already exceeds those powers in many ways. It doesn't mean AGI, just something way better than what we have at present.
A nine-figure compensation package sounds pretty attractive to me, and it is indicative of the massive financial stakes here. An almost unlimited flow of money is being invested into development of something that could be more valuable than anything else in existence. Very possible that one company o
Re: (Score:2)
>> Nobody is worth "nine figures"
I wouldn't have thought so either, but I guess if you have unique skills and there are $billions on the line it might be worth the investment. If I were younger I'd be hitting the books and trying to work my way into the field. I think I could be satisfied with 6 figures.
Re: (Score:2)
per year
I have unique skills
"tap" me
A new name then? (Score:2)
What's the new name gonna be? Colossus? HAL? Mother?
It's funny, you can tell what books the billionaire class are reading by the schemes they dump their money into and what they choose to call them. Zuck must finally be over his Snow Crash kick.
I'm honestly surprised it took Zucko this long to pivot. There's still too many actual tech people left in the world to convince the masses to "buy" invented Cyberspace real estate from a digital land baron.
The meta scam will work eventually. Maybe in 20 years, when
"Super" is a prefix used only by marketers (Score:2)
Nobody uses the prefix "super-" in everyday language. Whenever you see that (in reference to a particular positive trait) it is always in the context of a commercial or promotion. And we are all well aware that commercials always overstate the positive traits of whatever is being sold. THIS is no different.
So glad I am a 70s Child (Score:2)
What I know from my life experience is some humans will always choose to do the wrong thing; and that guarantees that if AI ever becomes sentient, it will be the end of humanity. Well, if it doesn't kill off humankind much earlier just due to the colossal tax it is placing on the global climate which is already out of control.
Humans will never leave earth in any meaningful way, I wish fools like Musk
Person of Interest (Score:1)