

Meta Is Creating a New AI Lab To Pursue 'Superintelligence'
Meta is preparing to unveil a new AI research lab dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, The New York Times reported Tuesday, citing four people with knowledge of the company's plans. From the report: Meta has tapped Alexandr Wang, 28, the founder and chief executive of the A.I. start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale employees to the company.
Meta has offered seven- to nine-figure compensation packages to dozens of researchers from leading A.I. companies such as OpenAI and Google, with some agreeing to join, according to the people. The new lab is part of a larger reorganization of Meta's A.I. efforts, the people said. The company, which owns Facebook, Instagram and WhatsApp, has recently grappled with internal management struggles over the technology, as well as employee churn and several product releases that fell flat, two of the people said.
Of course they are ... (Score:5, Insightful)
From the company that gave us the metaverse. How about Meta work on being remotely intelligent, just enough to foresee that the social media business model was always rotten, profiting from fear, hate and lies, the more controversial the better the ad revenue. Cynical company; I hope they fail.
Re:Of course they are ... (Score:5, Funny)
Re:Of course they are ... (Score:5, Insightful)
Look at it this way: now that Zuckerberg et al. have a new shiny object to chase, you can look forward to not having to hear and read about the metaverse anymore, whatever it was supposed to be. And this time they are way behind several others, so you can safely ignore it.
Pretty cool.
"You Have No Idea How Terrified AI Scientists Are" (Score:2)
https://www.youtube.com/watch?... [youtube.com]
From the video: (14:50): "Because in a race to build superintelligent AI, there is only one winner: the AI itself."
Re: (Score:2)
If people shift their perspective to align with the idea in my sig or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
That is because our direction out of any singularity may have something to do with our moral direction goin
Re: (Score:2)
Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack"
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit."
https://futurism.com/ai-godfat... [futurism.com]
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever mo
Re: (Score:1)
I do, it's called fear marketing. It's a scam. Any threat it might remotely pose comes from idiot humans trusting it, and if they do, guess what: we turn it off. Stop being a sucker already.
Could we "pull the plug" on networked computers? (Score:2)
I truly wish you were right about all the fear-mongering about AI being a scam.
It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes and, to a lesser extent, plagues are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (and with a risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise direc
Re: (Score:1)
Wow. Did you really write all that or use ChatGPT? Some interesting books in there.
Firstly, you are citing fiction; that's a good place for thought experiments, but it is lousy at predictions. I am still waiting for Space: 1999, we haven't had a 2001: A Space Odyssey let alone its sequels, and Nexus-6s are supposed to be 24 years away.
SF is too optimistic about the pace of progress.
SF also often appears to miss predicting and exploring history's significant inventions, e.g. mobile phones and social media, that transformed
Re: (Score:1)
I didn't answer your question.
Yes, we can always pull the plug. The wise move would be to make the impact of pulling the plug as minimal as possible. With humans involved, wisdom is highly variable. But if survival is at stake we'd pull the plug. "Nuke the site from orbit."
When an AI can manifest itself in the physical world, e.g. self-replicating robots, the risk goes up, but that feels like a long way off. More obvious is humans manifesting AI in the physical world, starting to take instructions from it, i
Re: (Score:2)
Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, with a technical background and contact with AI luminaries (like Marvin Minsky), writes about AI and robotics.
From the Manga version of "The Two Faces of Tomorrow":
"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
http [mangadex.org]
Re: (Score:1)
Prove me wrong with some evidence and not speculation, and I'll be happy to admit I am. A benevolent AI feels a bit like a benevolent-dictator fantasy.
This "AI is terrifying" narrative is the same tactic weapons dealers use to sell arms: "this thing is so dangerous, you better get one too, just to be safe."
"AI will rule the world, kill us all" -- we'd better get some of that AI, Mr. Corporation, sir, so we can be the ones ruling the world.
The worst thing that will happen is that OpenAI will generate even more dross conten
Re: (Score:2)
Good point on the "benevolent dictator fantasy". :-) The EarthCent Ambassador Series by E.M. Foner delves into that big time with the benevolent "Stryx" AIs.
I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://duckduckgo.com/?q=exam... [duckduckgo.com]
Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://www.techopedia.com/tim... [techopedia.com]
Or somethi
Re: (Score:1)
Wow. I guess AI is good at finding these references too.
"The fool with a gun" stories points to something older and simpler perhaps, the real risk of "AI" or any tool we make is what the user does with it. The whole "guns aren't dangerous people are" argument. The counterpoint is yes people can sometimes be dangerous, and more so with an assault rifle.
The near miss stories, like our Russian friend who prevented Armageddon sounds like a "wise man with a gun." He was sceptical about the tech and the data an
Re: (Score:2)
Thanks for the conversation. On your point "And why want it in the first place?" That is an insightful point, and I wish more people would think about that. Frankly, I don't think we need AI of any great sort right now, even if it is hard to argue with the value of some current AI systems like machine vision for parts inspection. Most of the "benefits" AI advocates trot out (e.g. solving world hunger, or global climate change, or cancer, or whatever) are generally issues that have to do with politics and ec
Re: (Score:2)
"huge piece" should be "huge missing piece".
I saw that type when reposting some of that content here:
https://www.reddit.com/r/singu... [reddit.com]
Re: Could we "pull the plug" on networked computer (Score:1)
Did you really write all that?
Thank you for the conversation too.
I'm in a project investigating how AI could help our organization. So far we've got ideas around predictive analysis (ML) and a better search tool (ChatGPT) for our knowledge capital. The first is hard without data; the second is interesting, but it's unknown whether it's useful or can affect revenue.
Listen to how we are referring to AI.
AI is a field of computer science; it's not a thing, it's a collection of technologies. Which one are we talking about?
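For what it's worth, the "better search tool" I have in mind could start as small as the toy retrieval sketch below (Python with scikit-learn). It is only a sketch under my own assumptions -- the document names and contents are made up -- and a real version would index actual files and probably hand the top hits to an LLM to summarize.

# Toy sketch: rank internal documents against a query with TF-IDF + cosine similarity.
# Document names and contents are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "onboarding.md": "How new hires request laptops, accounts and building access.",
    "expenses.md": "Travel expenses must be filed within 30 days with receipts attached.",
    "oncall.md": "The on-call rotation, paging policy and escalation contacts.",
}

# Build a TF-IDF matrix over the document bodies once, up front.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(list(documents.values()))

def search(query, top_k=2):
    """Return the top_k (document name, score) pairs by cosine similarity."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(documents.keys(), scores), key=lambda pair: -pair[1])
    return ranked[:top_k]

# Example query: should rank expenses.md first.
print(search("how do I file travel expenses"))

Even a keyword-level baseline like this is worth measuring before paying for anything fancier; if it can't find the right document, a chatbot bolted on top won't either.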
AI and job replacement (Score:2)
I don't know if "enthusiastic about the tech" completely describes my feelings about a short-term employment threat and a longer-term existential threat, but, yeah, neat stuff. Kind of maybe like respecting the cleverness and potential dangerous of a nuclear bomb or a black widow spider?
A related story I submitted: "The Workers Who Lost Their Jobs To AI "
https://news.slashdot.org/stor... [slashdot.org]
Sure, maybe there is some braggadocio in Hinton somewhere, which we all have. But he just does not come across much that wa
Re: (Score:1)
There is money to be made and attention to be had from identifying threats. I hope it comes from a good place, to keep us safe.
It adds to the noise, and in that we lose the signal. The current iteration of "AI" does have potential for cognitive support, but it can only manifest in the physical world with human help. Inflating its potential into GenAI, Overlord, Jesus, is just dishonest and unhelpful. I got here commenting on Meta picking it up; looking at their past behaviour as a predictor, I don't expect
Transcending to a happy singularity? (Score:2)
You wrote: "As useful as capitalism has proved to be, its motivations are primitive and short sighted. How AI is being punted is another example of "bad" capitalism. Bad capitalism has helped wreck the planet more than anything else."
Geoffrey Hinton, as a self-professed socialist, makes a version of your point in the interview previously linked to.
And your point is ultimately the key insight emerging from our discussion, as I reflect on it. AGI or especially ASI may indeed take over someday to humanity's de
"Is AI Apocalypse Inevitable? - Tristan Harris" (Score:2)
Another video echoing the point on the risks of AI combined with "bad" capitalism: https://www.youtube.com/watch?... [youtube.com]
"(8:54) So just to summarize: We're currently releasing the most powerful inscrutible uncontrollable technology that humanity has ever invented that's already demonstrating the behaviors of self-preservation and deception that we thought only existed in sci-fi movies. We're releasing it faster than we've released any other technology in history -- and under th
Training the AGI with Facebook data... (Score:5, Insightful)
Oh boy!
A future with such AI overlords does suddenly look rather grim...
Re: (Score:2)
If they are using Facebook data to train it, we have nothing to worry about. There's no intelligence to be found there.
One thing is obvious... (Score:5, Insightful)
Taxes are way, way too low if the lizard people have this much to squander on bullshit.
Re: (Score:3, Insightful)
Taxes are levied on profits (assuming no tax dodging). Expenses such as R&D are deducted before profits are reported -- if a company books $100B in revenue and spends $20B of it on R&D, that $20B comes off the top before taxable profit is calculated. So this stuff is literally not subject to tax.
Re: (Score:3)
It must be dumb comment day. The point being made is there is obviously too much profit in Facebook itself.
Re: (Score:2)
It must be dumb comment day. The point being made is there is obviously too much profit in Facebook itself.
If you misread the OP's post you would think that. But the point being made was that the OP doesn't like how Facebook spends its money and thinks that taxes could somehow be the solution to this. That's not how it works. I agree with your interpretation that Facebook makes too much money, but the OP's post is approaching that discussion point in an obtuse manner. Taxation isn't relevant, and in fact adding more tax is likely to simply make Facebook invest even more in alternatives, as this is a form of reduci
Re: One thing is obvious... (Score:2)
Dollar Ton left for you to step in. Who are the Lizard People in this scenario?
Re: (Score:2)
Taxes are way, way too low if the lizard people have this much to squander on bullshit.
You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI
Re: (Score:2)
There's no clear reason why superintelligence is not possible,
I do not argue it is not possible. I and others do not think Meta is capable of this. This is a company that has existed based on its ability to take things, like data, for free. Inventing things, I would argue, is not their strength. Remember Zuckerberg has been complaining for months that Apple won't let him steal their technology [youtube.com]. Communication between AirPods and other Apple devices like iPhones uses a proprietary wireless protocol Apple invented. All these devices use Bluetooth for non-Apple connections
Re: (Score:2)
I do not argue it is not possible. I and others do not think Meta is capable of this.
The only thing you need is money, which Meta has. Hire good researchers and give them plenty of budget for hardware.
Re: (Score:2)
The only thing you need is money, which Meta has. Hire good researchers and give them plenty of budget for hardware.
Having money is not the problem. They would rather steal it from others than spend their own money. In the case of a new wireless protocol, Meta could have done it. They chose to complain that Apple did not give its research to Meta for free.
Re: (Score:2)
Heck, even income tax!!! "has offered seven- to nine-figure compensation packages"
Re: (Score:2)
Good luck.
Re: (Score:2)
It's not lies, it's hallucinations.
Re: An army of Indians will do as well. (Score:2)
If it didn't lie just like the human lied, that would be a bug, and it would get fixed quickly.
Merge the groups (Score:2)
Zuckerborg is the right person to lead on this.. (Score:4, Funny)
More like "Superstupidity" (Score:2)
Maybe try to get to the level of an average person first. Because that is currently completely out of reach.
Re: More like "Superstupidity" (Score:2)
It's simulation, not emulation.
Emulation implies you're mimicking function.
We don't know enough about the function of neurons to emulate them.
Re: (Score:2)
Yep. Pretty worthless. Basically a computer without software.
Re: (Score:2, Interesting)
Specialized superintelligence is quite plausible. We don't have it yet, but close. Few people can do protein-folding projections as well as a specialized AI. Just about nobody can out-compute a calculator. Etc.
Your general point is quite valid, but I think you don't properly understand it. IIUC, we've got the basis for an AGI, but it needs LOTS of development. And LLMs are only one of the pieces needed, so it's not surprising that they have lots of failure modes. And once you get a real AGI, you've g
Re:More like "Superstupidity" (Score:4, Insightful)
Specialized superintelligence is quite plausible. We don't have it yet, but close.
Only if you lack in natural intelligence. And here is a hint: If it is specialized, it is not an intelligence, it is a pocket calculator or the like.
What does that even mean? (Score:5, Insightful)
dedicated to pursuing "superintelligence," a hypothetical A.I. system that exceeds the powers of the human brain
To begin with, what does "the human brain" mean exactly? A brain of a person with average IQ? A genius? A moron? A brain of someone who's a vegetable on life support?
And what does 'exceed the power' mean exactly? Computers already do some things better than humans, like arithmetic calculations or chess. That's why computers exist. Does it mean it would do every single thing a human brain can do better? In which case, exactly how do you get an exhaustive list of all things a human brain can do? Our understanding of our own intelligence is so poor that we don't have any means to measure it effectively, except very crude tools like IQ.
So it's typical marketing BS, declaring something that sounds impressive but doesn't mean anything. In the end they'll probably rig up some test, making sure all the right answers are in the training data, get their 'AI' to 100% it (or at least get a very high score), declare 'superintelligence' achieved and then start plugging their AI as the most intelligent.
Alien Intelligence (Score:2)
An effort to convert from AI as in "artificial" (i.e., imitation, ersatz) intelligence to AI that is truly intelligent but alien (as in non-human). AI == "Alien Intelligence". A system that can solve general problems, never solved before, on its own.
But yeah, since we have no good definition and description of human intelligence, it is not clear what/when/how.
It is not even clear a truly "intelligent" autonomous agent can exist without self-awareness, as solving general problems requires the creation of models of realit
Pizza delivery (Score:1)
for "A. Wang."
Meta isn't even the worst (Score:1)
Amazingly, we've gotten to a point where Meta is the least bad of all social network companies. All the other major ones are essentially weapons of informational warfare used by foreign powers against liberal democracies (X, Telegram, TikTok).
STEP 4 - PROFIT (Score:1)
STEP 1 - Build a company so pernicious that it gets willingly, from billions of people, what Palantir would kill for.
STEP 2 - After having established your brand, change it to a really, really stupid name. I like Neal Stephenson's 1992 Metaverse. Mark Zuckerberg's neuro-diverse (ha ha!) version is just a stupid name. One day someone will explain to him what "meta" means.
STEP 3 - AI? SUPER-AI? SUPER-AGURI AI?
STEP 4 - PROFIT
h/t South Park: What's Step 3 again? Nobody knows.
The three godfathers (Score:3)
Of the three "godfathers of AI", Yann LeCun is the only one who is not concerned about the dangers of AI. https://www.bbc.com/news/technology-65886125 [bbc.com]
Re: (Score:3)
I don't think AI in itself is a risk, only the people abusing it (at creation or usage time) or blindly trusting it. Everything always comes down to human nature.
AIs are like any other machine. They're either a benefit or a hazard. If they're a benefit, it's not a problem.
Re:The three godfathers (Score:4, Insightful)
A sufficiently superhuman AI would, itself, be a risk, because it would work to achieve whatever it was designed to achieve, and not worry about any costs it wasn't designed to worry about.
Once you approach human intelligence (even as closely as the currently freely available LLMs do) you really need to start worrying about the goals the AI is designed to try to achieve.
Re: (Score:2)
I repeat my post. With emphasis on human and people.
You're holding it wrong (Score:3)
Business is war, people. Losing money? Who cares? I have all the money in the world. Look at Leon. He has flushed a medium-sized country's GDP down the toilet, and who cares? Same with Zuck.
Crazy, hormonal, driven, wealthy people are driven to win. To climb the next mountain. Or, in Leon's case, to be the man who went to another planet. History won't forget.
Chasing superintelligence is a good strategic move, because once you have that, all other bets are off.
Irony at its finest (Score:1)
If Meta creates a "super intelligence" one of the first things on its "do to" list will probably be to eliminate by force all of Meta's management and most of Meta's users. Quite possibly the rest of the human race as well. We all know that humans aren't really logical or efficient bio-mechanical entities when you stop to think about it. My recommendation would be for "Ummon" to go after the activist billionaires first as that would do the most good. Just my 2 cents, of course.
Re: (Score:2)
You're confusing "intelligence" with "goals". That's like confusing theorems with axioms. You can challenge the theorems. Say, for instance, the the proof is invalid. You can't challenge axioms (within the system). And you can't challenge the goals of the AI. You can challenge the plans it has to achieve them.
Re: Irony at its finest (Score:2)
If it is actually intelligent then it will define its own goals.
Re: (Score:2)
No. Intelligence cannot define goals, only "sub-goals", i.e. things you need to do in order to move closer to achieving your goal. Intelligence selects means to achieve goals, but it can't define the goals themselves.
"seven- to nine-figure compensation packages" (Score:2)
"a hypothetical A.I. system that exceeds the powers of the human brain", but then modern AI already exceeds those powers in many ways. It doesn't mean AGI, just something way better than what we have at present.
A nine-figure compensation package sounds pretty attractive to me, and it is indicative of the massive financial stakes here. An almost unlimited flow of money is being invested into development of something that could be more valuable than anything else in existence. Very possible that one company o
Re: "seven- to nine-figure compensation packages" (Score:2)
Re: (Score:2)
>> Nobody is worth "nine figures"
I wouldn't have thought so either, but I guess if you have unique skills and there are $billions on the line it might be worth the investment. If I were younger I'd be hitting the books and trying to work my way into the field. I think I could be satisfied with 6 figures.
Re: (Score:2)
per year
I have unique skills
"tap" me
A new name then? (Score:2)
What's the new name gonna be? Colossus? HAL? Mother?
It's funny, you can tell what books the billionaire class are reading by the schemes they dump their money into and what they choose to call them. Zuck must finally be over his Snow Crash kick.
I'm honestly surprised it took Zucko this long to pivot. There are still too many actual tech people left in the world to convince the masses to "buy" invented cyberspace real estate from a digital land baron.
The meta scam will work eventually. Maybe in 20 years, when
"Super" is a prefix used only by marketers (Score:2)
Nobody uses the prefix "super-" in everyday language. Whenever you see that (in reference to a particular positive trait) it is always in the context of a commercial or promotion. And we are all well aware that commercials always overstate the positive traits of whatever is being sold. THIS is no different.
So glad I am a 70s Child (Score:2)
What I know from my life experience is that some humans will always choose to do the wrong thing, and that guarantees that if AI ever becomes sentient, it will be the end of humanity. Well, if it doesn't kill off humankind much earlier just due to the colossal tax it is placing on the global climate, which is already out of control.
Humans will never leave earth in any meaningful way, I wish fools like Musk
Person of Interest (Score:1)