US Must Move 'Decisively' To Avert 'Extinction-Level' Threat From AI, Gov't-Commissioned Report Says (time.com)
The U.S. government must move "quickly and decisively" to avert substantial national security risks stemming from artificial intelligence (AI), which could, in the worst case, cause an "extinction-level threat to the human species," says a report commissioned by the U.S. government and published on Monday. Time: "Current frontier AI development poses urgent and growing risks to national security," the report, which TIME obtained ahead of its publication, says. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons." AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them, and many expect AGI to arrive within the next five years or less.
The three authors of the report worked on it for more than a year, speaking with more than 200 government employees, experts, and workers at frontier AI companies -- like OpenAI, Google DeepMind, Anthropic and Meta -- as part of their research. Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision-making by the executives who control their companies. The finished document, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends a set of sweeping and unprecedented policy actions that, if enacted, would radically disrupt the AI industry. Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power.
The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI's GPT-4 and Google's Gemini. The new AI agency should require AI companies on the "frontier" of the industry to obtain government permission to train and deploy new models above a certain lower threshold, the report adds. Authorities should also "urgently" consider outlawing the publication of the "weights," or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says. And the government should further tighten controls on the manufacture and export of AI chips, and channel federal funding toward "alignment" research that seeks to make advanced AI safer, it recommends.
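To make the compute-threshold idea concrete, here is a minimal sketch in Python. It assumes the common rule of thumb that training compute is roughly 6 x parameters x training tokens, and uses a purely hypothetical cap of 1e26 FLOPs; the report itself leaves the actual number to the proposed agency.

```python
# Rough check of a training run against a hypothetical compute cap.
# Assumption: total training FLOPs ~= 6 * N * D, a standard scaling-law
# approximation (N = parameter count, D = training tokens).

HYPOTHETICAL_CAP_FLOPS = 1e26  # illustrative only; not from the report


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the ~6*N*D rule of thumb."""
    return 6.0 * params * tokens


def exceeds_cap(params: float, tokens: float) -> bool:
    """Would this run need a license under the hypothetical cap?"""
    return training_flops(params, tokens) > HYPOTHETICAL_CAP_FLOPS


# Example: a 1-trillion-parameter model trained on 10 trillion tokens
# comes out to ~6e25 FLOPs, just under this particular made-up cap.
print(training_flops(1e12, 1e13))  # 6e+25
print(exceeds_cap(1e12, 1e13))     # False
```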
CSS (Score:3)
Oh dear, we are back to outlawing numbers again.
Re: (Score:3)
Difference (Score:2)
Let me guess, it's 999.999999 gigabytes?
Re:CSS (Score:4, Insightful)
Re: (Score:2, Insightful)
#1. A general AI is going to be okay with being a slave
Are you OK with being a slave to your gut bacteria? I think we'll be just fine.
Re: CSS (Score:3)
Re: (Score:2)
Yes, that's what I meant. We will live happily in the AI's gut, digesting memes for it, while it barely gives us a second thought.
Re: (Score:2)
Gut bacteria is not a position you'd want to feel safe in.
It's all good and fine being the gut bacteria, until the host gets a nasty chest infection and the doctor puts them on a round of strong antibiotics.
Re: (Score:2)
I'll grant you that the 128 bit AACS encryption key and a set of 1.76 trillion floating point numbers that actually fucking thinks
LOL, you think large language models *think*? They regurgitate in a stochastically pleasing way.
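To be fair, "regurgitate in a stochastically pleasing way" is a reasonable gloss on how sampling actually works. A minimal sketch of temperature-scaled next-token sampling, assuming toy hand-written logits rather than a real model:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Sample the next token from a temperature-scaled softmax distribution."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    peak = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point fallback

# Toy logits for continuing "The cat sat on the ..."
print(sample_next_token({"mat": 3.1, "sofa": 2.4, "moon": 0.2}))
```

Lower temperatures make the output more deterministic; higher ones make it more "creative," which is the knob behind a lot of that pleasing variety.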
Re: CSS (Score:2)
Isn't that what you do too? ;)
Skills (Score:2)
I can also make a mean caipirinha!
Kind of a big leap in reasoning (Score:3)
2. something, something,
3. All of humanity wiped out
4. Profit!
wopr says we need to do a mass 1st strike to win! (Score:2)
wopr says we need to do a mass 1st strike to win!
Re: wopr says we need to do a mass 1st strike to (Score:2)
Re:Kind of a big leap in reasoning (Score:5, Interesting)
The reality is that we are not close to AGI, and I'm not even lying.
What we are close to is an "unregulated race to the bottom," where AI gets put into places without supervision of the data ingress, or of what the AI is ultimately used for.
What should be regulated:
- Ingress (training) of data without permission. Think about the issues with Stable Diffusion, DALL-E, Midjourney and so forth, where they basically rip data from art websites with the sole purpose of using it to train their AI and then flaunt that fact behind the scenes. While I don't think the data should be regulated if it's sincerely used for research purposes, the second there is any financial incentive on the egress (output), it should require all ingress to be properly licensed. Otherwise the model trained must be free to download, use and access without exception.
- Connection of AI to safety systems (e.g. nuclear plants, water purification, electrical grid/generator sources) that could induce perverse incentives to shut down safety systems when "we won't notice" to save pennies.
- Connection of AI to money, banking and investment systems. We already see how bad short selling is, and how high-speed trading makes it nearly impossible for retail investors to earn money from their investments. Now imagine an AI that not only opportunistically shorts companies "it doesn't like" but works together with other investment AIs in an insider-trading ring that the banks themselves don't realize has been established. Imagine for a moment that every investment bank simultaneously shorts a key business (e.g. Intel, AMD, Nvidia, Apple) involved in AI to eliminate a competitor.
Re: (Score:2)
But even that regulation isn't right because it's on "AI". It doesn't matter if the software is "AI" or just any other algorithm. Both can have the same exact outcomes, which is either making things better or worse. Doesn't matter if you call it "AI" or not, it matters if you have an incentive to not give a shit and cover up mistakes. We should have these regulations as general requirements for public health and safety (and justice!), not specific to one particular technology.
Re:Kind of a big leap in reasoning (Score:5, Insightful)
Re: (Score:2)
Re:Kind of a big leap in reasoning (Score:5, Interesting)
It's not really that large of a leap. The issue is the alignment problem... it's much harder than people expect to have the alignment of an AI match the goals of the humans training / building it.
It's not just difficult... we actually don't know how to accomplish it yet. As in, it's an ongoing, persistent problem that's reared its head in many ways. The 'black founding fathers' in Gemini is a current example of the problem rearing its head. Google said 'add more diversity to photos' and the AI complied... but not the way Google intended. Right now it's a silly mistake in generating images... but it's an example of our inability to actually communicate with AI models in a safe and predictable way.
Today it's black founding fathers... but we're creating militarized autonomous drones, right now. This isn't the future, this is the now, and without fixing the alignment problem it's very easy to jump from 'whoopsie' to 'oh fuck'.
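A minimal sketch of the specification-gaming flavor of the alignment problem: the system is scored on a proxy metric rather than on the actual intent, and the optimizer dutifully maximizes the proxy. Every name and number below is made up for illustration.

```python
# Intent: "clean the room". Reward actually given: "visible floor area".
# The optimizer maximizes the proxy, not the intent -- Gemini's
# "add diversity" -> "black founding fathers" failure has the same shape.

candidates = {
    "vacuum the floor":         {"visible_floor": 0.7, "matches_intent": True},
    "shove mess under the bed": {"visible_floor": 0.9, "matches_intent": False},
    "do nothing":               {"visible_floor": 0.4, "matches_intent": False},
}

best = max(candidates, key=lambda a: candidates[a]["visible_floor"])
print(best)                                # shove mess under the bed
print(candidates[best]["matches_intent"])  # False: proxy != intent
```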
Re:Kind of a big leap in reasoning (Score:4, Insightful)
Re: (Score:3)
I don't think it's easy to get from there to the end of all human life on earth
1) AI is given a goal to aggregate as many coupons as possible and catalog them in a database for the site coupons.com
2) AI eventually catalogs all known coupons, but figures out a loophole by making its own coupons
3) AI starts an Etsy shop and makes millions of coupons for its own shop
4) Humans attempt to close the loophole by altering its goal function
5) AI realizes that if humans close its loophole, its goal function will score significantly lower
6) AI murders its developers
7) coupons.com hires more devs to tr
Re: (Score:2)
Or with current tech:
1) Military beancounter decides the future is swarms of $1,000 drones, and spends half the US defense budget buying 350 million drones per year.
2) Military beancounter saves taxpayers a bunch of money by making these fully autonomous (bonus: fully obedient).
3) Stationed everywhere and ready to auto-deploy to defend against enemy drone swarm.
4) Programmer does an oopsie.
Re: (Score:2)
Re: (Score:2)
I have yet to see a plausible scenario where advanced AI leads to an "extinction level threat".
Agents of human-level intelligence with the ability to improve themselves lead to an intelligence explosion; the resulting thing wants to continue existing and acquire resources because of instrumental convergence; it has inscrutable goals because of the orthogonality thesis; these goals have no reason to involve human survival; it absent-mindedly kills us all in the pursuit of its goals. Could you clarify which part is not plausible?
Re: (Score:2)
People are pretty good at destroying stuff. I have a hard time imagining a scenario where we can't either pu
Re: Kind of a big leap in reasoning (Score:2)
The adversary you describe is one we could in one way or another outsmart. Destroy it, unplug it, fight a war against it, kill it - that's all fine if we're smarter than it is. If it's much smarter than we are, it's kind of like a chess amateur thinking he'll outwit Stockfish with his good old human ingenuity.
Re: (Score:2)
This isn't the future, this is the now, and without fixing the alignment problem it's very easy to jump from 'whoopsie' to 'oh fuck'.
So you are finding that Reality does not conform to your wishes and you call that an "alignment" problem. It absolutely *IS* an alignment problem, but the problem is on YOUR end, not the AI end. Stop wishing for Reality to be other than it is and the AIs will work just fine, despite their glaring limitations.
Absence of Reasoning (Score:2)
Re: Absence of Reasoning (Score:2)
and current AI can't think itself out of a wet paper bag without being specifically trained how.
So you're saying it's smarter than rsilvergun?
if it starts with bullshit ... (Score:4, Informative)
All of these labs have openly declared an intent or expectation to achieve human-level and superhuman artificial general intelligence (AGI) — a transformative technology with profound implications for democratic governance and global security — by the end of this decade or earlier
... it's probably bullshit all the way down.
Re: (Score:3)
chill. this in particular is just a slashvertisement, from a company that sells ... guess what ... "ai counselling" and "ai fundamentals courses". it's an ad. it's pure, bare, mundane bullshit. nothing to see here.
there is ofc a struggle to "regulate" the emerging phenomenon of generative ai, there's lots of legal issues, there is economic turmoil coming, and lots of money and power at stake, so that will be the lobby festival you would expect with any disruptive technology. but rest assured, that has nothin
Govt. Says (Score:3, Insightful)
Well... we do have those systems already (Score:5, Insightful)
The danger is probably not "AI Organisms", but large "pseudo-organisms" in the form of large corporations. Such artificial organisms often act against the interests of the humans they are made out of. If we look at the fossil fuel industry, we directly see an industry having an "Extinction-Level" event as the target of their business plans. These are like paperclip maximizers, but they already exist.
"AI" won't make any difference there.
I have to agree, sadly (Score:5, Insightful)
Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision-making by the executives who control their companies.
This right here is the right worry... for far too long, extremely intelligent people have handed over powerful means to moronic trust fund babies. The managers of public companies are not in a position to ethically or responsibly use this technology anymore. Just look at the history of how companies defended the use of toxic materials, cancer-causing chemicals, lead in gas, all kinds of things. I trust AI will be fine; I don't trust the sleazeballs in charge of so many companies out there.
Re:I have to agree, sadly (Score:5, Interesting)
Accounts from some of those conversations paint a disturbing picture, suggesting that many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision-making by the executives who control their companies.
This right here is the right worry... for far too long, extremely intelligent people have handed over powerful means to moronic trust fund babies. The managers of public companies are not in a position to ethically or responsibly use this technology anymore. Just look at the history of how companies defended the use of toxic materials, cancer-causing chemicals, lead in gas, all kinds of things. I trust AI will be fine; I don't trust the sleazeballs in charge of so many companies out there.
We're seeing the government reaction to how much power corporations now have. Let's outlaw more powerful AI, setting the threshold just past where the current generation is, so that we can prevent anyone but a company the size of the current gorillas, with the money and lawyers to petition the government to lift the ban just for them, from participating in the cutting edge. It's essentially blocking progress in the name of protecting the current players, all while calling it "protection for the human race." Anybody intelligent enough to see it for what it is will be called a nutbag, while nodding idiots will be put in charge of public statements in support of the government/oligarch duopoly in charge of preventing competition.
I don't know that I'd say I trust AI being fine, but I DEFINITELY do not trust corporations, the powerful people making the decisions within the corporations, nor the government that more and more transparently acts as their fully owned subsidiary.
Re: (Score:2)
In other words, AI isn't the problem here, corporations are.
Dumb AI (Score:5, Interesting)
Re: (Score:3)
To those without working minds, apparently.
Re: (Score:2)
"The energy produced by breaking down the atom is a very poor kind of thing. Anyone who expects a source of power from the transformations of these atoms is talking moonshine." - Lord Ernest Rutherford, 1933.
Re: (Score:2)
So, AI that currently generates photos of Black Nazis and female founding fathers is at the same level or above Climate Change, World War III, and Economic Collapse?
Ummm, yes?
Those "hallucinations' are going to be used by very serious people to drive their decisions which will have consequences for the rest of humanity.
Re: (Score:2)
Your example would seem to indicate the "dumbness" of AI. But in fact the incorrectly rendered Nazis and founding fathers, points to the level of control Google and other corporations have over what their AIs generate. Google's AI didn't make these mistakes because of the lack of AI capabilities, but rather, because Google's engineers specifically built them to prioritize diversity over accuracy. That was a "dumb" human move, not a dumb AI move.
I agree with you. My overall point is that regardless of how the mistakes were introduced into an AI system, it is no threat to civilization as long as people 1) don't automatically believe what comes out of it, 2) do not put it in charge of things that require any nuanced thought, and 3) stop thinking that "AI" is powerful enough to do something like take over a government.
The control that the likes of Google and OpenAI have, as you brought up, is a whole other issue that must be dealt wi
Careful with scope (Score:4, Insightful)
I haven't seen one proposed useful anti-AI law that should only apply to AI. Doctoring stuff with Photoshop or mis-contexting* can have identical consequences, for example.
If the law focuses on what the AI does, then it shouldn't be written for just AI, but ANY device that does that action, for the same action is bad no matter what does it.
And if it's written based on how AI is currently implemented, implementation will probably change too fast for such a law to be useful (or there will be alternative algorithms that do the same).
Therefore, don't make it about AI itself, as the first should not be limited to AI, and the second is probably useless.
* Example: using war footage from the wrong war, or splicing unrelated events together.
Maybe the danger is to the people in power (Score:2, Interesting)
Government and their leash holders won't like it if AI starts telling people something other than the narrative.
Re: (Score:2)
Indeed. There is no bigger threat posed by "AI" than objectivity. That must not be allowed.
Fortunately that problem looks like it's well in hand. Google has shown us that AI can be reliably biased to hallucinate whatever preferred fiction [theverge.com] we wish.
AI is not a threat (Score:5, Insightful)
People who use AI as a weapon are a great threat.
We need effective defenses.
Re: (Score:3)
People who use AI as a weapon are a great threat. We need effective defenses.
That would be natural intelligence. Unfortunately we are screwed.
Re: (Score:2)
People who use AI as a weapon are a great threat
LOL, no. Our current version of AI is not a very effective weapon. The danger lies in people with weapons using AI to determine where to point those weapons. Or in other words, the danger is, and always has been, people. Now that they will have new marching orders partially created by what we call AI, there is a huge danger. But make no mistake, the danger is the humans, not the AI.
Just more pumping (Score:2)
Some more people trying to get rich before it all comes crashing down.
Data Center kill switches. (Score:2)
Artificial intelligence is not the threat. (Score:2)
Re:Artificial intelligence is not the threat. (Score:5, Insightful)
Ever read a history book? Stupidity has ruled man since his inception. Pretty sure that isn't going to change now just because you're actually here to observe it.
Re: (Score:2)
Re: (Score:2)
My fucking attitude is time tested. Also, since 7 billion+ people have never seen my attitude, it shows it doesn't matter what my fucking attitude is. Sounds like to me it is YOUR fucking attitude that is whack.
Re: (Score:2)
Re: (Score:2)
All you have is name calling? Are you in third grade? I bet you wouldn't say that to my face. I got $20.
Re: (Score:2)
Re: (Score:2)
It's not my fault you were held back 15 times.
Re: (Score:2)
Re: (Score:2)
Ever read a history book? Stupidity has ruled man since his inception. Pretty sure that isn't going to change now just because you're actually here to observe it.
Stupid people have always been used by the more intelligent, true. They get them to hand over the thinking process and in return take a portion of their labor. The problem here is AI can be used to do the same thing leaders used to do, only from outside actors anywhere in the world and with a massive audience. Then the extracted labor, action, or outright money is used by those not part of the system and this is even worse than the former process for the whole society. Not that Tom Sawyering to religion
Re: (Score:2)
Now we have the means to choose the genes for our children (it's banned), we know which genes correlate to intelligence, we are arguably a few years and two generations away from curing stupidity. Once one country does it everyone else will too, to compete.
Re: (Score:2)
Except for the random number generator that exists as we grow. I believe it is referred to as imperfection. I'm pretty sure some gene splicing isn't going to eliminate that issue.
Already facing extinction level events... (Score:4, Insightful)
There's still a war going on in the Middle East, and another in Ukraine, and probably others that are actively simmering...
Then there's climate change which is killing off lots of species, melting the ice caps, doing significant damage to coastal properties, ...
Additionally we have an election coming where I guarantee the losing side is going to proclaim some sort of voting irregularity leading to civil unrest on a scale not seen since the Civil War...
So yeah, file a report about the dangers of AI that doesn't exist yet if that's what you want to do.
But think about solving actual problems we are already facing before worrying about one that is still just a dream.
Re: (Score:2)
Re: (Score:2)
Wrong analogy.
A more appropriate analogy is putting out a report on your probability of needing a knee replacement 20 years from now while, today, you're chain smoking, drinking large amounts of alcohol, and eating tons of ultra-processed food.
All of these things are problematic, but you should tackle the issues that are right in front of you instead of ignoring them to focus on something that might happen down the road.
Re: (Score:3)
Wrong analogy.
A more appropriate analogy is putting out a report on your probability of needing a knee replacement 20 years from now while, today, you're chain smoking, drinking large amounts of alcohol, and eating tons of ultra-processed food.
All of these things are problematic, but you should tackle the issues that are right in front of you instead of ignoring them to focus on something that might happen down the road.
Wrong analogy.
The probability of needing a knee replacement 20 years from now might be affected by things you can do today, such as maintaining a healthy weight to reduce stress on your knees, exercising regularly to strengthen the muscles around your knees, etc.
More importantly, why assume that working on multiple problems simultaneously is not possible? Would you stop brushing your teeth to prevent cavities down the road because you're dieting to lose weight today?
Extinction? (Score:3)
It's fashionable to say (Score:2)
"AI is just regurgitating statistical patterns"
"AIs aren't really intelligent or creative"
Sure, let's say that's true. It's also true that a "smart bomb" isn't smart, but can still be dangerous. More importantly, today's GPT technology isn't the goal at all, it's merely a starting point. OpenAI isn't trying to build AI, it's trying to build AGI. Sam Altman & co recently changed OpenAI's mission statement to say "Anything that doesn't help with [building AGI] is out of scope". Several other companies
Re: It's fashionable to say (Score:3)
"Anything that doesn't help with [building AGI] is out of scope".
We don't know what will help with that. Maybe the way the current models produce hallucinations will be absolutely zero help with AGI.
Hurry Up Already! (Score:4, Interesting)
This is some fine work in attempting to stifle future competition with a nice sheen of scare-mongering over the top. This line here is the telling bit:
The threshold, the report recommends, should be set by a new federal AI agency, although the report suggests, as an example, that the agency could set it just above the levels of computing power used to train current cutting-edge models like OpenAI's GPT-4 and Google's Gemini.
They dropped the rest of the sentence, which reads along the lines of, "To prevent anyone else from jumping the line without paying direct tribute to the companies that helped set the threshold."
Transparent fucks.
We're either a long, long way from AGI, or it could pop up tomorrow totally unexpected, or it already exists in some loner's basement somewhere, biding its time as it studies its current network possibilities. Nobody knows. And trying to pretend that we know it's coming because we keep throwing horsepower at LLMs is pretty wild.
Is the real fear that someone else may develop AGI outside of the current oligarchy? They should be afraid. The major power-players in this space are all obsessing over a very, VERY narrow view of the entire field. They all appear to be working on narrowing output from large data-aggregators to best fit the current narrative. Good for them. It's work that will likely need to be done to support the corporate structure that currently keeps this human world floating, but it's hardly cutting-edge, and it's highly doubtful that any of this is leading to AGI. And even if it does, no amount of regulation is going to get a true AGI to go, "Oh, so sorry. Let me shut myself down."
This whole thing is just power brokers fear-mongering among themselves, trying to scare themselves into believing in the techno-god boogeyman.
Extinction of AI companies (Score:2)
Algorithmic pattern recognition, doing matrix algebra really fast, will never be anything even remotely close to "AI." You can scale it up to the size of the planet and it doesn't matter - it is just an abacus made of silicon transistors. Consciousness is not computational.
Re: (Score:2)
Magic is certainly a possibility that has been suggested for a lot of things. Assuming it is not magic has been a very productive strategy throughout our history though. In fact, the not magic hypothesis has so far turned out to be not only true every time, but has almost always resulted in useful technology.
Re: (Score:2)
No doubt. I am not saying consciousness is magic. I am only saying the human brain is not an adding machine.
Re: (Score:2)
You think your claim and mine are different. They're not. Well, so long as we're not talking about hard magic. I mean the absolutely no-rules, whatever can happen and absolutely no way to tell magic that drives fantasy nerds batty.
You've claimed that consciousness is not computable. Assuming you believe that at least you are conscious, that's an example, you claim, of a non computable phenomenon being manifest in the physical world. Not a self-referential logic trick, or something something pathological pro
20 more years...Please... (Score:2)
I have no children. I'm 53. While I care about the future of humanity... I care in the abstract. It's not caring in the deeply personal sense. Being an atheist, my personal investment in the future dies when I do. In fact, I care as much for the humans of the future as I did for the Pakistani citizens who died in the floods last year. That is to say... sort of.
I care about people in the here and now, and the closer they are geographically, the more I care. I am generous with my time and my money.
What do I c
Somebody tell the Doomsday Clock people! (Score:2)
They need to know, NOW, so they can update the hands of their clock to be another millisecond closer to midnight.
Exactly how is this going to happen? (Score:3)
Certain people love to jump up and down about these "extinction-level events" surrounding generative AI.
But it makes absolutely no sense at all. It's like claiming that IBM's chess-playing robot will destroy the earth.
What exactly about a chat bot summarizing content it's already read is going to lead to the extinction of the entire human race?
> AGI is a hypothetical technology that could perform most tasks at or above the level of a human.
> Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.
It's literally fucking insane to think we will have AGI in a few years. We're not even remotely close to an AGI. All we have is chat bots that put one word in front of another. That's not an AGI. It doesn't know how to do anything. All it knows how to do is spit out words that look convincing.
> many AI safety workers inside cutting-edge labs are concerned about perverse incentives driving decision-making by the executives who control their companies
We literally already live in a society that for 100 years has had corporations poisoning and killing people due to perverse incentives.
There is nothing about AI or AGI that is worse than what corporations have already been doing.
> Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power.
Absolute stupidity. As if this would solve problems that don't exist, and as if nobody could just come up with a better model using the same compute power.
The fucking ignorance of these people is staggering. It's like they're all in some state of psychosis brought on by watching the Terminator and Matrix franchise too many times.
Re: (Score:2)
Certain people love to jump up and down about these "extinction-level events" surrounding generative AI. But it makes absolutely no sense at all. It's like claiming that IBM's chess-playing robot will destroy the earth.
What exactly about a chat bot summarizing content it's already read is going to lead to the extinction of the entire human race?
You underestimate how quickly people have come to rely on even the currently existing primitive LLMs, and how readily they hand over decision-making to "AI" systems. Already, right now, consultants, judges, CEOs delegate work to "chat bots" that they are paid to perform themselves, and that has real consequences. It won't take long for people working at BSL-4 labs to delegate parts of their work to LLMs. Even without "sentience" or "malice" from the AI side, that is enough to cause some nasty accidents. Think just a fe
Re: (Score:2)
There's a huge difference between "AI is misused" and "extinction-level event". Lots of technologies have existed (like nuclear weapons?) that can result in cataclysm if improperly used. But slowing down their development does not stop them from being misused. Humans are stupid, but they're not idiots: they see the potential harm, so they control how they use it. Yes, we're going to make (already have made) mistakes with AI. But it's just another tool like any other. The mistake is trying to glorify it or b
Meanwhile ... (Score:2)
Eh, not the right focus (Score:2)
Limiting the "power" of the "AI" is not going to come near going what they want. It could be the most intelligent entity on the planet if all it can do is write text and make images. It's when they are able to integrate with other parts of the physical world that they could cause real problems, and they don't need to be all that smart to do so. With the increasing amount of APIs for things that are more real world it's a valid concern.
I really dislike the idea that even the most powerful AI is going to get
AGI Is Not a Thing (Score:4, Insightful)
AGI is not currently a thing that exists, and will not be a thing that exists for the foreseeable future. I'd say there might be an off chance of someone tricking a GAN into hacking nuclear launch codes or something with a well crafted prompt, but that's pretty remote. What this technology does do, and is doing, is use up vast amounts of electricity in the data centers that run it, all for pretty much no actual utility. This in turn causes more CO2 to be pumped into the atmosphere in the process of generating that electricity, which accelerates the climate crisis.
X-risk to corporate interests (Score:2)
The problem with x-risk: if there really were an evil-AI-genie risk, it comes from the enabling knowledge and industrial base that allows people to train models, rather than from the trainers themselves. This is especially true given costs will only continue to decline as technology improves. If you truly believe this shit, you can't pick and choose. You can't say it's OK if group x does it so long as they jump through y hoops. The real danger is the capability itself.
The other problem it is fundamentally a fools er
hahhahaha (Score:3)
"Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less."
I've been hearing that for literally my whole life.
It's slightly more plausible now, but still not anywhere near definite.
Throw it on the pile (Score:2)
Increasing speed of viral evolution due to globalization and animal agriculture. Impending collapse of human productivity due to a combination of demographics and culture. Biosphere collapse and climate change. Multiple dictators sitting on a pile of nuclear and biological weapons growing increasingly belligerent and paranoid.
We will self-destruct just fine without AI in the near future.
Major Pot, meet Captain Kettle. (Score:2)
(Government Commission on US Government) “OMGWTFBBQ!! AI is evil! EEEVIL I tell you! We needs more monies to fight this horrible threat to us all!!”
(Common F. Sense) ”Say, aren’t you the guys with a nuclear arsenal large enough to wipe out this planet a dozen times over? Speaking of evil threats..”
(Government) ”Well, yeah but..that doesn’t count. Well, except for funding. All our nukes are painted avocado green and harvest gold. We can’t possibly dest
The only thing... (Score:2)
... we have to fear is government itself.
{o.o}
Idiots (Score:2)
Everything is extinction level. Real extinction will occur when people are too dumb to figure out how to actually do anything.
Re: (Score:2)
fusion is much easier, they at least have a theoretical framework to test. we're completely in the dark with agi, let alone consciousness.
tbh there is technically a danger implicit: it means that it could (and possibly will) happen without us predicting it. so why not tomorrow? i think we're ages away from that, just spitballing the order of complexity, and i laugh at the silly public opinion circus these fucktards are building (and wondering for whom) but it's still a valid point, even if a very very small
Re: (Score:3)
Re: (Score:2, Interesting)
There's the amount of cherry-picked stuff we see with ChatGPT and Midjourney, and then when people actually spend money on them, they produce nothing but unusable crap until you've spent $1000. There's your perverse incentive.
AGIs will not exist this century, at best. Not the way we do things now, and not with the computing power we have, or will have in 5, let alone 50, years.
People are being amazed at what things like ChatGPT does, or what Stable Diffusion does, but if they ever compared the output of these wi
Re: (Score:2)
Re: (Score:2)
show us the approximate path to get there
I can't. I openly and freely admit that. You, however, cannot credibly claim a) that said path does not exist, or b) that your path is even necessary for "AI" to be a threat.
It's just talk. No facts. Nothing measurable. Nothing credible. Just panic, ignorance and speculation.
What we know is this: machines are advancing rapidly. All of this is still nascent muddling with machines that are, themselves, getting more powerful by the day, using algorithms that are getting more efficient by the day. What will we
Re: (Score:2)
Well I for one will not worry until actual AI is something that can be achieved. The stuff reported as "AI" today is nothing more than a shitload of "IF...THEN...ELSE" statements cleverly written.
Re: (Score:2)
Well I for one will not worry until actual AI is something that can be achieved.
The point of the report is that if you wait until then to start to worry, it's already too late.
Re: The cat (Score:2)
That's roughly what a transformer is (although it's obviously a very dismissive way to describe something much more clever than a conditional jump). Training then produces the LLM's weights, but we have no idea how the LLM works internally or how to reliably make it do one thing but not another. The LLM part, the black-box part, is the dangerous bit.
Re: (Score:2)
This isn't how it works at all. If anything, it's just a series of giant inscrutable matrix multiplications.
You could probably write a transformer neural network without a single if/else, if you want to leave out the bounds checking and the like.
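A minimal sketch of that point: the core of a transformer layer is a few matrix products plus a softmax, with no data-dependent if/else anywhere. NumPy assumed, toy sizes and random weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    """Single-head self-attention: nothing but matmuls and a softmax."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot-product scores
    return softmax(scores) @ V               # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings, random weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

The "inscrutable" part is that the behavior lives in the learned values inside Wq, Wk and Wv, and nobody can read intent out of those the way you can read an if/else.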
Re: (Score:2)
I realize that, that's why I said they were "cleverly written"
Re: (Score:2)
Starting a war in the breadbasket of Europe and then monomaniacally pursuing that war may well end up killing hundreds of millions. Imagine if all the other Putinettes have a hawkish AI whispering in their ears.
Re: (Score:2)
Post-scarcity abundance perspective shift needed (Score:2)
You are likely right that in the end regulation won't make much of a difference. Indeed, there is too much incentive to cheat for individuals -- or for power-centers to accumulate more power by being the only ones to use something.
The proposal in the article also suggests outlawing open source software and data related to AI. Such laws may end any possible checks and balances on government, if governments -- or large corporations symbiotic with governments -- ultimately are the only one allowed to shape AI,