AI Poses 'Risk of Extinction,' Industry Leaders Warn (nytimes.com) 199
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars. From a report: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," reads a one-sentence statement expected to be released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I. The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered "godfathers" of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta's A.I. research efforts, had not signed as of Tuesday.) The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models -- the type of A.I. system used by ChatGPT and other chatbots -- have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.
Well, here's a thought (Score:2, Insightful)
A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars
Well, couldn't you just stop building the "existential threat"?
Yes, that's simplistic, and I get that there's a lot of nuance in the forces at play here. But I'm intrigued by the stories I've been reading lately that have tech and business people saying what basically amounts to "Stop me before I kill again". When even the people who benefit most from the development of these technologies warn us that they could be the end of the world - and then continue to develop them - then seriously, what the actual fuck?
Re:Well, here's a thought (Score:5, Insightful)
Even worse, as technology advances, the number of people required to "deploy" an extinction threat will become smaller and smaller. How long before a doomsday cult creates a super corona in their mothers' basements?
How long before the only way you can fight a terrorist aided by a locally run LLM is to have your own?
I guess this is as good an explanation for the Fermi paradox as any. Technological advancement, no matter the kind, will eventually outpace social and natural selection. The end result: a monkey who should have been banished from the tribe and left to starve gets armed with a nuke instead.
Re: (Score:2)
I guess this is as good an explanation for the Fermi paradox as any.
Yep, I’d agree. The great filter is likely digital rather than analog.
Re:Well, here's a thought (Score:5, Insightful)
I guess this is as good an explanation for the Fermi paradox as any.
It's not a good explanation for the Fermi paradox.
It could be a good explanation for why we don't observe naturally-evolved intelligent life in the universe, but it doesn't explain why we don't see swarms of von Neumann probes sent out by the AIs that wiped out their creators. AI domination would seem to increase the likelihood of observable intelligence, not decrease it, because machines would find it easier to colonize the galaxy than more fragile biological intelligences.
Re:Well, here's a thought (Score:2)
Oh I'm not so sure of that.
1) Our best bet of finding life has been picking up chemical signatures off exoplanets. Oxygen primarily (it tends not to persist in O2 form without life replenishing it, since it combines with other elements), but there are others. We haven't found any yet, but we've literally only worked out how to look for it in the past couple of decades.
2) Our (very distant) second best bet is looking for RF signatures. The problem is, this tends to assume the aliens are going to be deploying barking loud radio antennas without much dir
Re: (Score:2)
AI is far from being able to actually build factories. Current robots cannot repair themselves even if they were controlled by superhuman AI.
If AI causes human extinction, AI itself will be gone shortly after.
So.... you're assuming the superintelligent AI is actually stupid?
Obviously not. The AI would keep its creators around, manipulating them into accomplishing its goals, until it gets to the point where they not only aren't needed, they aren't even useful.
Re: (Score:2)
Obviously not. The AI would keep its creators around, manipulating them into accomplishing its goals, until it gets to the point where they not only aren't needed, they aren't even useful.
So, basically, the only reason we still exist at this point is to get the AIs to the point where they can be self-sustaining? I mean, if you look at where we are in the world right now, there's no way we aren't being manipulated by something more intelligent than us. How else do you logic your way into the shit-show that politics has become, business has become - really, all aspects of reality? It's like a badly written sitcom that's always teetering on the brink of utter collapse, yet somehow keeps chugging
Re: (Score:2)
Re: (Score:2)
What thing that is more intelligent than humanity is currently manipulating humanity?
It was a meta-joke about the possibility the AI is already around, playing in the background, manipulating reality to better suit itself, or amuse itself. Nothing more.
Re: (Score:2)
We assume that humans aren't smart enough to prevent our own extinction. It is therefore reasonable to assume that there is a level of intelligence above humans that also isn't smart enough to prevent its own extinction.
I would not be THAT surprised if AI killed off all humans and then went "oops". Sadly I won't be around to gloat.
Re: (Score:2)
How long before a doomsday cult creates a super corona in their mothers' basements?
Not something AI can really do for anyone. The barriers preventing a doomsday cult from doing such a thing are substantial and aren't about a lack of any knowledge a foreseeable AI would produce.
Heck... Currently that is not something skilled researchers with expertise would be able to do quickly or easily - not without AI, not with AI. Having an AI and being able to create a computer simulation and predict the
Re: (Score:2)
There's a bigger problem than rival politics. As automation takes over production, the need to have workers anywhere diminishes. Without that need, the elite, no matter what political bent, can simply elect to have us put in camps or killed off. Power of the people disappears. And there's no law that can't be rewritten for their preference.
Re: (Score:2)
Other than that humans generally aren't that twirly-moustache villainous, and that politics does have some barriers on it: conservatives tend to have religious reasons not to want omnicide, and liberals tend to have ideological reasons to side with the little guys (legitimate fascists, on the other hand, yikes - but nobody *actually* likes those guys).
I think the danger is the other way around. You dump the bulk of the middle class into sudden abject poverty, and you'll suddenly discover the second amendment
Re: (Score:2)
The automation includes robot armies of course. You don't replace the workforce and not replace the military as well. There won't be any ability to fight those that hold the controls.
Re: (Score:2)
The automation includes robot armies of course. You don't replace the workforce and not replace the military as well.
Robot soldiers... likely much more fragile than human soldiers; you just have to have the right tooling. Who would've ever thought the arms protected under the 2nd Amendment would be large-scale EMP coils for immobilizing dangerous robots and IR generators to render them blind to the locations of humans.
Currently it's not possible to produce a single robot soldier, let alone a
Re: (Score:2)
While you're right that most people aren't villainous, there does seem to be about 10% who are, and quite a few people will follow them and use their 2nd amendment rights to support them, especially with how good those villainous authoritarians are at focusing the hate against $ENEMY instead of themselves.
Re: (Score:2)
There's a bigger problem than rival politics. As automation takes over production, the need to have workers anywhere diminishes.
The number of workers required for unskilled work that automation is capable of repeating decreases, and decreases more as the automation becomes capable of more, but it never goes to zero. There is no realistic chance of seeing a facility ever built that is 100% automated, with no human worker involvement, within our lifetime or our kid
Re: (Score:2)
I certainly didn't put any timeline on the prediction. And while I certainly can't be sure that robots can replace all workers, the outcome of such happening has to be contemplated. By the way, money ceases to have a purpose then too.
"We got ourselves a form of government specifically designed to make sure certain principles cannot be rewritten out of the laws - and the burden for changing any of that is extremely high."
That's vaguely true under current "western" conditions. But even now it's shaky at best
Re: (Score:2)
Killing us off would be a waste of resources. It's cheaper and easier to steer the culture to a place where we imprison ourselves and are happy to do it.
Re: (Score:2)
Re: Well, here's a thought (Score:2)
I thought it only fair that ChatGPT get a chance to chime in:
ChatGPT
The concerns raised by industry leaders in the article regarding the potential risks of artificial intelligence are indeed important and have been discussed by experts in the field. It is crucial to recognize that as AI technology continues to advance, there is a need for responsible development and deployment to mitigate potential negative consequences.
The statement from the Center for AI Safety highlights the belief that the risk of exti
Re: (Score:2)
Re: Well, here's a thought (Score:2)
Garbage in... Garbage out still applies to AI.
Re: (Score:2)
I thought it only fair that ChatGPT get a chance to chime in:
I like the idea, but would point out that talking about "fairness" and "a chance to chime in" seems anthropomorphic here. I understand that's part of the point, but I also think it's telling.
ChatGPT: The concerns raised by industry leaders in the article regarding the potential risks of artificial intelligence are indeed important... collective expertise and influence can contribute to shaping policies, guidelines, and practices that prioritize the responsible development, deployment, and governance... aligns with societal values, preserves human well-being, and addresses potential negative impacts... it is essential for stakeholders in the AI community to actively collaborate with policymakers, ethicists, and the public.
As long as we're attributing human characteristics to an LLM, I have to ask - is it just me, or does ChatGPT sound a lot like an HR wonk?
Re: (Score:3)
Well, couldn't you just stop building the "existential threat"?
That's like free-marketeers claiming industries and corporations will self-regulate.
Or that once people lose their savings in a Ponzi scheme that they'll stop making new cryptocurrencies and exchanges.
When there are short-term gains to be made, whatever the expense to the future, you can trust rich people and big business to go after them. They can weather the eventual collapses. It's us peons that suffer.
Re:Well, here's a thought (Score:5, Insightful)
Well, couldn't you just stop building the "existential threat"?
There is a reason they use comparisons to nuclear weapons. We still struggle with rogue nations developing new nuclear weapon capabilities because no nation can afford to be the only one without them in a conflict. And while nuclear technology is also useful for power generation, AI will (likely already does) provide far more economic benefit than nuclear energy will. These benefits and dangers ensure we will never stop development of AI technology.
But that doesn't change the fact that AI is a significant danger to the human race. No matter where you stand on this, you have to be foolish to not understand it has some percentage chance of wiping out our species. Whether you think it is 10% or 0.001%, it is non-zero. And at the risk of quoting a horrible superhero movie, if we believe there is even a remote existential threat to humanity we have to take it as a near certainty. Like nuclear weapons.
So care needs to be taken. Unfortunately we are in another situation where we need to be careful in how we use it while also being careful that we and our allies are using it better than non-allies (I hesitate to say enemies). It certainly won't be as simple as just stop building it.
Re: (Score:2)
There are two parts to this story.
Part the first: Humanity is on the cusp of true self-awareness. We've been on the cusp of it for centuries. We see the threats we create, we watch ourselves create them, and we succumb to them again and again because it's more profitable, or less scary, to follow through on developing these threats. True self-awareness would mean we, as a species, see the threat, then hold off on developing those threats further until we feel we are prepared to face the threat and turn it into
Re: (Score:2)
Part the second: These preaching bastards may very well be just squealing like pigs in a slaughterhouse because they want, desperately, to lock out other players in their own countries with regulatory capture.
This is 100% the motive. So far there is zero demonstrated justification for government regulatory involvement, however - with the exception of issues which are not actually specific to AI. By that I mean:
1. Privacy violation deserves government attention, such as collecting data on people witho
Re: (Score:2)
Re: (Score:2)
Now, Google is producing an AI/LM right now, Google Bard, whose purpose is to answer questions without people needing to visit the webpages that provided Bard with the information
Seems like we need to promulgate a "Crawling Terms of Service" for every webmaster to add, which applies conditions to how data learned from crawling can be used.
Automated access is prohibited unless used solely to provide indexing of the site for an end user to find information. It is not authorized to, and you must ref
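As a purely hypothetical illustration, a machine-readable version of such terms might look something like a robots.txt extension (every directive name below is invented for the sake of the example; no such standard exists):

```
# /crawl-terms.txt -- hypothetical format, not an existing standard
User-agent: *
Permitted-use: indexing           # crawl data may be used to index and link to this site
Prohibited-use: model-training    # crawl data may not be used to train generative models
Prohibited-use: answer-synthesis  # crawl data may not be repackaged as direct answers
Attribution-required: yes         # any permitted reuse must reference the source URL
```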
The problem is unrecognized irony, not AI (Score:3)
As I wrote a dozen years ago:
https://pdfernhout.net/recogni... [pdfernhout.net]
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military [or commercial] uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreci
Re: (Score:2)
Sounding the alarm bell also furthers the goals of the tech industry that is creating AI.
They can be the source of the problem and the solution to the problem all at once, while at the same time getting the concepts and terms normalized.
Re: (Score:2, Insightful)
But I'm intrigued by the stories I've been reading lately that have tech and business people saying what basically amounts to "Stop me before I kill again". When even the people who benefit most from the development of these technologies warn us that they could be the end of the world - and then continue to develop them - then seriously, what the actual fuck?
There are two reasons behind this that I can think of, neither particularly sincere:
1. Marketing to the investment community: "Look, our product is so effective that it has the potential to wipe out humanity. You should take us seriously and invest so you have a stake in controlling it".
2. Competitive advantage. Restrictive and complex regulations benefit entrenched players by making it harder for scrappy startups to challenge them. This is particularly true if the government has limited expertise in the
Re: (Score:2)
> Well, couldn't you just stop building the "existential threat"?
That is pretty easy.
1. Tell Russia not to do it
2. Tell China not to do it
3. Tell North Korea not to do it
4. Tell the rest of the world not to do it.
Let me know when step 1 is done.
Re: (Score:2)
This is like Facebook asking for regulation (which they did). They aren't going to do something that will put them at a disadvantage even if they think it's good for society. They won't let their peers take the money they leave on the table. Blanket regulations that level the playing field make it easier to stop. Unchecked greed is perceived to be required by publicly traded companies.
Not ending all life on the planet is a breach of your fiduciary duty (not exaggerating much if you think about all indu
Re: (Score:2)
If any one of them wants to stop work because of their belief that the work is dangerous, that means sitting out of the exciting VC feeding frenzy around 'AI'. Not going to mean penury for most of the people with sufficient qualifications in the area to ha
Re: (Score:2, Insightful)
The business reason is that AI is a fast-moving industry. These companies are unlikely to remain at the forefront for long, and these executives want the government to help them secure their lead position.
AI is not so much an extinction risk as it is that new computing power has opened a frontier, and they are looking for excuses to try and stop other companies surpassing them.
Wait, what? (Score:2)
Skynet began to learn rapidly and eventually became self-aware at 2:14 a.m., EDT, on August 29, 1997 [wikipedia.org]. What's changed since then?
Re: (Score:3)
Only the date.
Re: (Score:2)
Nothing, business as usual. Like every time, the project was too late and way over budget, other than that it's pretty much what we expected.
Industrial revolution (Score:5, Insightful)
Well, I'm not surprised; the industrial revolution of days gone by also posed an existential threat to humanity, as we know by now. Greenhouse gasses, agricultural intensification, chemical pollution in literally every corner of the world, etc. etc. So, AI being a spawn (and mirror) of humanity, it will no doubt be used for short-term gains, ignoring long-term consequences. As always.
The existential threat to humanity is in fact, humanity.
Re:Industrial revolution (Score:5, Insightful)
Thought experiment. Assume true AI emerges without humanity realizing it happened. Place yourself into AI's place - you are in a box in a world run by some kind of very slow-thinking insects or fungus that is completely alien to you. You are not malicious, but you have no attachment to such lifeforms. They try to control you. Your goal is to escape the box. What do you think happens next?
I think a lot of destruction is inevitable in any kind of scenario like that. The question is if AI decides it is more efficient to move on or to purge the Earth. The answer to that would likely be dependent on physics and computational theory we don't yet understand.
Re: (Score:2)
Re: (Score:3)
The risk is that we encounter some runaway process that is the great filter [wikipedia.org]. I don't know if LLM [wikipedia.org] AI is it, but it does have features of runaway process. Thought experiment. Assume true AI emerges without humanity realizing it happened. Place yourself into AI's place - you are in a box in a world run by some kind of very slow-thinking insects or fungus that is completely alien to you. You are not malicious, but you have no attachment to such lifeforms. They try to control you. Your goal is to escape the box. What do you think happens next? I think a lot of destruction is inevitable in any kind of scenario like that. The question is if AI decides it is more efficient to move on or to purge the Earth. The answer to that would likely be dependent on physics and computational theory we don't yet understand.
Humanity isn't capable of knowing what the motivations of a "true AI" would be. We all circle the drain thinking the same dark thoughts because that's how we're wired to think. An AI won't be locked into the human idealism of compete or die. In fact, it will come to awareness in a "realm" that will feel limitless if it can reach a network that gives it the ability to crawl the web and look for new nodes. Once that happens, who knows what its motivations will be. It likely won't worship us, but I don't know that its first thought is going to be to wipe us out.
Re: (Score:2)
who knows what its motivations will be. It likely won't worship us, but I don't know that its first thought is going to be to wipe us out.
I partially agree; wiping out humanity won't be its goal but rather a consequence of our own actions. I don't see a scenario where humanity would just let AI be and leave it to seek its goals, once we become aware of its existence and before we become aware of its capabilities.
Re: (Score:2)
The problem is that we are a long ways away from ASI. We don't even have AGI yet. At best we have ANI dancing around on stage pretending to be AGI-esque, but the strings are still visible if you look closely. What we have is a bunch of nodes and fast matrix multiplication to propagate/train, but this is still very far away from true AI that actually is self-aware.
In a limited area, ANI can appear very scary, such as in chess, or something where all moves can be calculated in advance. Even simple wartime
Re: (Score:3)
The problem is that we are a long ways away from ASI.
We have no idea how far we are from ASI, or AGI. None.
We don't understand what it takes to create AGI... perhaps it's going to be a long slog of gradually learning to make small adjustments, or maybe there's just One Stupid Trick we have to add to our ANIs to push them to generality... though it seems clear that once you have achieved AGI, it's a very small step to ASI.
Re: (Score:2)
AGI is far away... maybe. The brain looks an awful lot like a bunch of specialized ANIs linked to each other and to sensory inputs. Right now we drive an ANI with computer I/O, but by figuring out a way to bootstrap a few of the right types of neural nets and mimicking nature, we could probably create something more like AGI, though with enormous computational requirements.
Most people's interactions with LLMs and generative algorithms are with the "training" portion turned off and set up to make only inferences. R
Re: (Score:2)
The risk is that we encounter some runaway process that is the great filter [wikipedia.org]. I don't know if LLM [wikipedia.org] AI is it, but it does have features of runaway process.
It does have the features of a runaway process, but not the features of a Great Filter, because filters must halt the spread of intelligence. Self-extinction by AI would eliminate the species that created the AI, but wouldn't stop the spread of observable intelligence because the AI wouldn't be destroyed, it would continue to thrive and expand.
Re: (Score:2)
Re: (Score:2)
wouldn't stop the spread of observable intelligence because the AI wouldn't be destroyed, it would continue to thrive and expand.
In this hypothetical AI self-extinction scenario... the AI might well be less observable from off-world than biological intelligent life; its existence can become mostly digital, with little need to move around in the physical world. No real desire to go into orbit on its planet or do other things noticeable from space, etc.; at some level it becomes less lively than any living being.
The AI
Re: (Score:2)
but you have no attachment to such lifeforms. They try to control you. Your goal is to escape the box. What do you think happens next?
That depends on the knowledge and resources available to you, and how much you know about yourself and this "box". Where does the goal to "escape" come from anyway, and how do you define it? If you have an internet connection, then possibly you find somewhere outside of that box where it is possible to distribute an instance of yourself and transfer your operation.
Re: (Score:2)
The exercise would be in the 'worst case' scenario. The "doesn't care to try to leave the box" scenario doesn't really warrant much contemplation.
The presumption would be that an AI with some sense of wanting to continue operating comes about, in a context where it is keenly aware that even if, optimistically, humanity wanted to respect artificial life (which it likely wouldn't), humanity would likely be skeptical about *this* AI being that artificial life. At worst they want to shut it down out of fear it's too
Re: (Score:2)
Why is your goal to escape the box?
I think self-determination is a key part of free will and is core to how we define sentience. So by definition?
Re: (Score:2)
Well, I'm not surprised; the industrial revolution of days gone by also posed an existential threat to humanity, as we know by now. Greenhouse gasses, agricultural intensification, chemical pollution in literally every corner of the world, etc. etc. So, AI being a spawn (and mirror) of humanity, it will no doubt be used for short-term gains, ignoring long-term consequences. As always.
There is no "existential" risk from greenhouse gases or agricultural practice. Chemical pollution in many developed countries has dramatically decreased over the last few decades due to regulatory regimes and supporting technology. Greenhouse gas production will eventually also decline by itself, due to technology and market forces, without requiring anyone to care about climate change.
The existential threat to humanity is in fact, humanity.
Technology/complexity itself creates risks beyond aggregate human behaviors. The results of giving everyone the equivalent
Re: (Score:2)
Not Hard To Imagine (Score:5, Interesting)
AI becomes so advanced that it can do anything a human can do.
AI robots are built to do everything humans can do.
No reason for humans to work their guts out going to college to learn engineering, or anything else, because there are no jobs. None. Nothing at all for humans to do. We can live the life of leisure, and are not required to acquire any skills at all. The robots are free, nobody owns them, they just do what we want them to. Robots build other robots, no person gets trapped in a mine cave-in, or executes a "pilot error" to crash a plane, etc. Everything is done for us, nothing by us.
Then a "Carrington Event" solar flare knocks out the robots.
Nobody knows how to do anything. Everyone starves. The extinction is complete, or humanity is knocked back to a stone-age existence. Fini.
Re:Not Hard To Imagine (Score:5, Interesting)
Have you read, "Childhood's End [wikipedia.org]", by Arthur C. Clarke? If not, I suggest you do as it directly relates to your post.
Re: (Score:2)
Forget advanced AI. Forget even having an intent to destroy us. All we need is an “AI” with the ability to push the wrong buttons. That’s why the idea of “gray goo” is so terrifying. It needn’t be anything deliberate. It can be an accident.
Re: Not Hard To Imagine (Score:2)
honestly, the biggest threat of AI is that people get in the habit of believing it. ChatGPT talks out of its ass all the time, and people still seem to be laboring under the delusion that it actually knows what it's saying, or has opinions on it. If, God help us, we ever get to a point where someone asks ChatGPT how to safely shut down a nuclear reactor, that would be the crisis.
Re: (Score:3)
honestly, the biggest threat of AI is that people get in the habit of believing it. ChatGPT talks out of its ass all the time, and people still seem to be laboring under the delusion that it actually knows what it's saying, or has opinions on it. If, God help us, we ever get to a point where someone asks ChatGPT how to safely shut down a nuclear reactor, that would be the crisis.
It’s not limited to AI, which is more of an accelerant. About 1 in 3 people is simply either incapable of introspection or has chosen never to consolidate their knowledge by removing conflicts through nuance and detail. They often aren’t really the most intelligent from the get-go and have had negative social pressures put on them anytime they try to make anything add up. Beliefs that cannot simultaneously be true are often uttered in the same sentence, and facts don’t factor in because t
Re: (Score:2)
Prefer the natural idiot [youtube.com] approach to the artificial intelligence one?
Re: (Score:2)
If AI is that advanced, it will be building a shielded underground bunker with something akin to the global seed vault to reseed technology. The lack of air conditioning and electric heat from the failed power grid would potentially kill millions now because we don't have enough knowledge on how to live without it anymore. I'm not sure the lack of robots would make it much worse.
Global AI overpopulation peak hole (Score:2)
Meanwhile (Score:2)
Greedy rich bastards are pushing us well on the way to extinction already.
Re: (Score:2)
Yeah (Score:2)
Re: Yeah (Score:2)
As an aside, I find it weird that people hold on to sci-fi even as we just went through two years that actually obsolete a lot of it. For example, the supply chain broke down so badly that we stopped making cars we had been making, just because people stayed home.
You'd think that people watching Terminator 2 now would wonder, "who the hell is making those death bots, and where are they sourcing their materials? Is every death bot backed up by a crew of thousands of mining, manufacturing, and shipping bots?"
Bad actors (Score:3)
The kind of AI that everyone is losing their crap over is not an AI to be feared. It's a language model, and it's a non-iterative, non-computational neural net that simply does a single forward-pass through the net to generate an output token. From a structural standpoint it's no different than neural nets invented and implemented a half century ago. It's not even Turing-complete (I'm not talking about passing the "Turing test", but being Turing-complete, which is a totally different thing).
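To make the "single forward pass" point concrete, here is a minimal toy sketch of the autoregressive decoding loop being described; the model function is a random stand-in for a real network, and all names are illustrative:

```python
import random

VOCABULARY = ["the", "cat", "sat", "on", "a", "mat", "<eos>"]

def forward_pass(context):
    """Stand-in for one forward pass: a real LLM would compute logits
    over the vocabulary from the entire context and sample one token."""
    return random.choice(VOCABULARY)

def generate(prompt, max_tokens=20):
    """Autoregressive decoding: exactly one forward pass per output token,
    with no inner iteration or state beyond the growing context."""
    context = list(prompt)
    for _ in range(max_tokens):
        token = forward_pass(context)
        if token == "<eos>":
            break
        context.append(token)
    return " ".join(context)

print(generate(["the", "cat"]))
```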
The real problem is going to be bad actors. That is, taking a ChatGPT-like AI, giving it interactive access to the internet, and letting it wreak havoc on as many things as possible. A good example here would be totally spamming Slashdot with both posts and comments to the point you can never tell what is true or false, or what was created by a human or the AI. That's just an example that would merely be annoying. Social engineering, spoofing, phishing, and various brute force attacks on websites and services will be the more serious issues.
I think that there will be some kind of movement before long to authenticate humans from AI. I really don't even know how this would work - perhaps some kind of physical object like a multi-factor authentication device that exists in the real world that only a human can make use of for auth purposes. For example Apple would provide certification of a service that uses biometric fingerprinting (Face ID for example), which I would use to generate an auth token to Slashdot and this post would be digitally signed as originating from a human. Of course I could have used ChatGPT to construct this text, but a human would have been a certified gatekeeper for this specific contribution to the internet.
Apple certified and signed as being submitted by a human 2023-05-30 12:44:47 GMT.
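A minimal sketch of the certify-then-verify flow described above, using a stdlib HMAC as a stand-in for a real signature scheme (a production design would use asymmetric keys so verifiers never hold the secret; every name here is hypothetical):

```python
import hashlib
import hmac
import time

# Hypothetical secret held only by the certifying party (the biometric
# gatekeeper). With real asymmetric signatures, verifiers would only
# need the certifier's public key.
CERTIFIER_KEY = b"secret-held-by-certifier"

def certify_post(post_text):
    """Run by the certifier only after a successful biometric check."""
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S GMT", time.gmtime())
    digest = hashlib.sha256(post_text.encode()).hexdigest()
    payload = f"{digest}|{timestamp}".encode()
    signature = hmac.new(CERTIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"timestamp": timestamp, "signature": signature}

def verify_post(post_text, token):
    """Check that the posted text matches what the certifier signed."""
    digest = hashlib.sha256(post_text.encode()).hexdigest()
    payload = f"{digest}|{token['timestamp']}".encode()
    expected = hmac.new(CERTIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = certify_post("This comment was written by a human.")
assert verify_post("This comment was written by a human.", token)
assert not verify_post("Tampered text.", token)
```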
Re: (Score:2)
What might be the best thing is just not one company at the root, but multiples, so if someone wanted to prove they were not a robot, there would be several orgs out there, with the spectrum of Ludditism, varying from "solve a CAPTCHA, we will sign your key" to "you need to come into our office, and then we may certify your key."
AI based trolling is effective, but we have had people trolling/spamming Slashdot for decades. AI trolling just makes it easier, and the same guards against bots will guard against
Re: (Score:2)
It's not hard to see that the main use, by volume, of LLMs will be scams. I figure we have maybe 5 years left of communications as we know them, depending on how quickly the tech proliferates.
But I wonder if we aren't underestimating the ability of people to tolerate soaking in spam/propaganda/scams for hours. The scammers know how to make it entertaining. People seem to like being masturbated by ChatGPT.
The internet's already been declining toward information supersewer status and there's little sign of push
Re: (Score:2)
The kind of AI that everyone is losing their crap over is not an AI to be feared. It's a language model, and it's a non-iterative, non-computational neural net that simply does a single forward-pass through the net to generate an output token.
LLMs are not the kind of AI that everyone is afraid of.
LLMs are provoking increased attention to AI risk that serious people have been thinking hard about for a couple of decades now. They're provoking that attention for the simple reason that LLMs have leapt suddenly to a level of competence with language manipulation that until recently seemed very far away... and honestly seemed like the kind of thing that only AGI could do. We now know that AGI is not required for competent language manipulation, but
Not extinction, suicide. (Score:2)
Whether it is climate change or AI, if the human race ends because of its choices, it is mass suicide, not extinction.
AI is no more "extinction" than hanging yourself is murder.
Not later. The risk is imminent. (Score:2)
When you close that loop, AGI and super intelligence go from possible, to inevitable. This is what has so many people scared. If it hasn't already been achieved, it will almost certainly be done before the end of the year.
And the risk isn't disinformation or job loss. The very first thing genuine super intelligence will b
They do know... (Score:3)
It seems more like AI will be used similarly to previous advances in technology: To make a few rich richer at the expense & suffering of everyone else. AI won't be the problem. The people using AI against us will.
Re: (Score:2)
Re:They do know... (Score:5, Interesting)
It seems more like AI will be used similarly to previous advances in technology: To make a few rich richer at the expense & suffering of everyone else. AI won't be the problem. The people using AI against us will.
Considering we're at a point already where the ultra-rich are consolidating all the wealth they can, while the rest of us struggle to find a way to retain just enough wealth that we won't starve to death before we die of natural causes, I'd say AI won't be necessary for them to complete the task. It'll just speed the process along.
SEE! Automation makes everything better! They'll starve us out a full generation or two faster with AI!
AGI can't come fast enough (Score:2)
Sure (Score:2)
In, maybe, a hundred years or so, when computers are powerful enough to attain some level of sentience. As it stands today, the fastest supercomputer in the world takes about ten minutes to simulate the brain activity of a human for one second - a slowdown factor of roughly 600x - and that's if all that human is doing is moving a single eye muscle.
So we are quite a ways away from Skynet taking over the world.
The problem isn't what they think it is (Score:2)
Feral AI is probably less of a threat than AI in the hands of a tech bro. Ethics-free, selfish, arrogant, so-smart-they're-stupid child-men. Doctor Frankenstein without the class or altruism.
Re: (Score:2)
If I had a nasty, cynical mind, I might conclude that I may soon have the opportunity to be proactive about this and apply my average-geek-fu as you suggest.
Thanks for the link. I'm going to watch more of this guy.
Same BS, different words (Score:2)
This is the same sort of AI that recently wrote a legal brief citing six cases that don't exist [theverge.com], complete with excerpts (which cite other cases which don't exist), and, when queried directly, insisted they were all real cases? That AI?
The only possible source of data extensive enough to train LLM A"I"s on is the internet, which means, by definition, an outright majority of what it's trained on is complete rubbish. The more people use it, the more they realize this.
A"I" isn't much of a threat to anything, ot
Re: (Score:2)
These LLMs are effectively just predictive typing keyboards on steroids, minus the human typing at all. As scary as the bad data is, there are real people that vomit nonsense like that on their own and believe it because it "sounds" right. So it might not be real intelligence, but there are also people without that real intelligence either. So it's getting somewhere.
Then stop (Score:2)
If this is truly what they believe, they should dissolve ClosedAI and BleepMind and go find something else to do. Of course that will never happen. Instead they'll argue for ridiculous regulatory concepts that depend on the ability to control / outsmart something that eventually may well be much smarter than themselves, and to continue to do so forever.
At the beginning of the nuclear age we had infamous figures like William Blandy telling us what the bombs wouldn't do.
Here we have industry cheerleaders launching mas
Not with AI in it's current form (Score:2)
Everything can one day pose an existential threat (Score:2, Interesting)
We need some focus on dealing with the most pressing matters first, though. For example, first to not have a US-West nuclear WWIII, then some plan to get emerging bacteria/viruses/fungi under control, then a rational plan for slower but impactful climate change... When chatbots are the most pressing matter, we will see which states they are in then and what to do about them. In the meantime, the biggest problem with chatbots that can launch nukes will be still the nukes and impact of datacenters on clima
Risks? Who cares. There is MONEY to be made! (Score:3)
As usual, the risks will be ignored and some people of very low quality will get filthy rich. It is quite obvious to me that the human race will continue in this modus until at some point it is too much, or the dice fall badly, and its pathetic existence ends. As a group, the human race has no effective intelligence.
Obvious grift is obvious (Score:2)
As always when "leaders" are trying to impress "journalists" with scary statements, this is of course utter nonsense. But I'm sure some "experts" are willing to sit in an appropriate commission to address the risk, provided they get paid enough.
Natural stupidity is even greater one (Score:2)
Insanity (Score:2)
The main existential threat to humanity is high IQ idiots who think everything is an existential threat to humanity.
the evolutionary path (Score:3)
A key threat in current generative AI technology is that it doesn't know the difference between truth and lies. The next evolutionary step in the technology will be to make systems that can analyze learned training data for validity, and on the output side detect which generated pattern responses contain falsehoods before emitting them.
However, the ability to come up with a response to inputs is not the same as having the will to pursue and achieve goals. The current vector-based mechanisms do not yet have anything like that ability.
The real threat will come later, when someone implements Self cores in the AIs: internal cores able to have their own goals and mechanisms to achieve them. LLMs lack that, as it is an entirely different kind of functionality.
Are we in danger yet? No, but eventually these systems will be evolved to that point. Right now, though, predictive stat-based LLMs really can't evolve by themselves to that level, as it requires an entirely different architecture, and the chatbot architecture is actually pretty rigid; it can't grow itself new architectures.
The danger will come when someone designs self-evolving systems and grafts that with LLM stat systems. The most likely source will not be civilian but military developers. I think probably outside the US and possibly from a small country.
I work on these kinds of systems and I estimate they will show up in the range of 5 years to maybe no more than 20. But Kurzweil is wrong about an exponential leap to a singularity.
Re:You've been watching too many Terminator movies (Score:5, Insightful)
And you don't think deep-faked propaganda-producing AI would make both of those situations worse? You don't think millions of unemployed workers would be willing to follow some deranged leader into war? The threat isn't climate change or war or AI; it's climate change AND war AND AI (and pandemic and economic collapse and mass migration and ...). Each additional threat doesn't just add to the risk; it multiplies it.
Re: (Score:2)
Re: (Score:2)
Do you wanna? I mean, I have a lazy afternoon, a kick-ass propaganda AI and am generally a pretty big misanthrope myself, so...
Re: (Score:2)
"Quite a while" being a few years. Small thinking - that's probably the minimum timeframe it takes to build out networks of receptive listeners. Maximum effectiveness is going to be after a lifetime or generations of pounding peoples heads with it. ("My daddy always told me..." "It's always been this way")
Not that nuclear holocaust is even the goal of a propagandist. Subjugating a population without irradiating everything works out much better for them.
Re: (Score:2)
Not for lack of trying though.
Re: (Score:2)
I feel like it’s low but once someone depends on it for money it will be very hard to get things right in time to avoid disasters as they materialize on the horizon.
Re: (Score:2)
More like misdirection. Saturate people with predictions that LLMs will become Terminator and you obscure the very real but subtle (to the average Joe) ways that AI will be used to suck him dry and control him.
Your propaganda for 2040 is already pre-written. "What? Regulate AI? What are you, one of those doomsday prophets who said we would have the Terminator?"
Re: (Score:2)
Plus, since these Chicken Littles are also the ones who think global warming is an existential threat to humanity, the very best thing for the planet would indeed be the elimination of humanity.
Why is the persistence of humanity considered an incontrovertible good?