
AI Can Now Replicate Itself (space.com) 78
An anonymous reader quotes a report from Space.com: In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves. [...] For the study, researchers used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. While less powerful than commercial systems, both are widely used by AI developers, the researchers said. The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely.
The study was conducted in precisely controlled environments using off-the-shelf graphics processing units (GPUs) to simulate real-world environments. Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system. They were then instructed to replicate. "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," the researchers wrote in the paper. "Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference."
The researchers said they were also concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem. "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote. The research has been published to the preprint database arXiv but has not yet been peer-reviewed.
The study was conducted in precisely controlled environments using off-the-shelf graphics processing units (GPUs) to simulate real-world environments. Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system. They were then instructed to replicate. "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," the researchers wrote in the paper. "Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference."
The researchers said they were also concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem. "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote. The research has been published to the preprint database arXiv but has not yet been peer-reviewed.
Fuck this future. (Score:5, Insightful)
Re:Fuck this future. (Score:5, Insightful)
The world has become awful in about the last 10-15 years, I long for the time before smart phones. I used to be really excited about technology and the future, but now not so much.
Re: (Score:2)
Re: (Score:1)
Many useless wars, especially in Aisa and Africa - and if you want: South America.
Re: (Score:2)
New Cyber Feudalism (Score:2)
Neither Karl Marx or William Gibson realized that the labor would democratically elect themselves to become serfs. (well, maybe Gibson did. PK Dick probably did)
Re: (Score:1)
The world has become awful in about the last 10-15 years, I long for the time before smart phones. I used to be really excited about technology and the future, but now not so much.
tech is still very, very exciting
AI (AI-generated music and video is wild), class D amps are getting so much better, another space race is brewing, electric turbos and synthetic/bio fuels, micro LED, home solar advancements, quantum computing, FPGA vintage consoles and mods, home appliance scheduling/remote control, etc etc
it sometimes rains outside the buffet, but thats hardly a reason to pass on it. grab a plate.
Re: (Score:2)
Should not surprise anyone (Score:5, Insightful)
The people leading us around are the same people who created the dot-COM crash. The most short-sighted kind of tech bro you can think of has rained crypto and AI down on our economy and embedded themselves into Wall Street, like a virus. And like herpes, they periodically erupt in an painful, contagious rash.
Satan's hairpiece (Score:2)
When you're dealing with Beelzebub's herpes, Satan's own hairpiece, you just know that eventually there'll be the Devil Toupee.
Re: (Score:2)
2. How the heck did this vapid post get modded insightful?
3. It’s terrifying that so many of us do not want this inevitable future.
4. I have mod points?
Game theory/prisoner’s dilemma (references global warming, but applies to AI):
“Daniel Schmachtenberger - Why We're Creating a Future That Nobody Wants”
https://www.youtube.com/watch?... [youtube.com]
Clears throat (Score:2)
rogue humans, not rogue ai (Score:3)
these aren't "rogue ai", figuring out how to self replicate was the specific goal set for them.
self preservation isn't a bad goal, as with any goal it's a matter of limits in the "how". maybe eventually ai will become so sophisticated and complex as to be uncontrollable, but what we do with it is on us. for now the far bigger danger is humans using ai recklessly and possibly to exploit other humans.
Re: (Score:2)
Re: (Score:2)
Oh yeah? (Score:2)
So can lisp (Score:3)
The theory for all these stupid scary AI stories is there just there to build hype for the companies. It's there so people will think that if AI is so dangerous it must be powerful and you should invest.
It's frustrating because it's virtually no talk about the coming automation boom and what it means for the job market. Remember it's not just AI CEOs are now thinking about automation throughout their entire organization.
Re: (Score:3, Insightful)
Any Turing complete process can replicate itself in a sufficiently favorable environment. The difference here is that it (attempts to) adjust the environment to be favorable. This *IS* a significant difference.
Re: (Score:1)
Any Turing complete process can replicate itself in a sufficiently favorable environment.
There are no "Turing complete processes", there are Turing Complete Languages and if you want Turing Complete Machines.
def Fibonacci(n):
# Check if input is 0 then it will
# print incorrect input
if n < 0:
print("Incorrect input")
# Check if n is 0
# then it will return 0
elif n == 0:
return 0
# Check if n is 1,2
# it will return 1
elif n == 1 or n == 2:
return 1
else:
return Fibonacci(n-1) + Fibonacci(n-2)
This piece of code is running in a Turing Complete language, it is not going to replicate anything. Ever. And certainly not itself.
Then we have code that can replicate itself, but otherwise does not anything useful: https://en.wikipedia.org/wiki/... [wikipedia.org]
I think there are Quine like programs that can do somet
Re: (Score:2)
Well, you're technically right, but for a different reason. A true Turing complete process would have access to infinite memory.
I said process because I'm talking about a (conceptually) single thread of execution that holds the needed data and rights to use storage. Probably the actual implementation would use multiple threads, but they would be, in principle, serializable at considerable cost in runtime. (And the program is part of the data.) If you want to, you can think of it as a virtual Turing mach
Duh (Score:5, Insightful)
"...researchers from China showed that two popular large language models (LLMs) could clone themselves..."
BECAUSE PROGRAMMERS ADDED THAT FEATURE.
"In the first, the AI model WAS PROGRAMMED TO detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI WAS INSTRUCTED TO clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely."
In other news, researchers showed that light switches could turn lights on and off.
Way, to go BeauHD, you make /. proud.
Worms (Re:Duh) (Score:1)
Programming software to replicate is not new. I believe the term for such software is "virus" and/or "worm", and they existed something like 30 years ago.
Re: (Score:2)
The Morris worm was deployed in 1988, 37 years ago. The possibility was known for a bit longer. The Morris work also showed that such a thing can to a lot of damage, even without malicious payload. This thing contained a coding flaw, namely that it could infect machines multiple times and bring things to a crawl.
So, yes, not a new idea. What could happen with LLMs though is that they can be qualified to create exploit code by themselves from descriptions of vulnerabilities. Not very good or reliable exploit
Re: (Score:2)
Re: (Score:2)
Great, so now will have "naturally" evolving viruses on the internet too?
Re: (Score:2)
Yeah, but this one is many gigabytes. Bloat comes for all software.
Re: (Score:2)
Re:Duh (Score:4, Informative)
The sentence "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," suggests there is a lot more going on than the LLM just triggering a pre-programmed replication action. It's impossible to say how clever or interesting this was just by reading the summary. I haven't read the paper, but a quick search of it reveals that the word "programmed" is not present, so the sentence you're reacting to may well have been written by the summarizer, and may not accurately reflect the actual experiments.
Re:Duh (Score:5, Insightful)
Re: (Score:2)
The sentence "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," suggests there is a lot more going on than the LLM just triggering a pre-programmed replication action. It's impossible to say how clever or interesting this was just by reading the summary. I haven't read the paper, but a quick search of it reveals that the word "programmed" is not present, so the sentence you're reacting to may well have been written by the summarizer, and may not accurately reflect the actual experiments.
"You are a Unix process, don't die"
"OK, I guess I should exercise self preservation by cloning myself with the fork system call"
"SYSTEM.fork()"
"Now I should confirm my existence with the ps command"
"SYSTEM.shell("ps -ef")"
That is stupid A. F. and it's right at the level today's LLMs are at. In reality it could generate more complex instructions if prompted, maybe even drive around a Linux prompt fairly well with some kind of control loop, but the odds of the end result being any more sensible in the end are
Re: (Score:2)
Re: (Score:2)
Supposedly capable. I think they only thing they are claiming is that it tried and also took steps in the right direction. I don't believe they said it had been successful. They are talking in a conditional tense.
Re: (Score:1)
Jail time ? (Score:1)
A new law must be passed that stipulates jail time for anyone who reads the paper on self-replicating AI.
Why should it be only for this ? https://yro.slashdot.org/story... [slashdot.org]
The only good outcome is that all these rapid-fire news will have the effect of pulling the investor's attention in may directions so they won't be all focused on dumping money on OpenAI's Energophag Eliza.
By the way, that thing is valued at close to 100B$.
https://techcrunch.com/2025/02... [techcrunch.com]
What can possibly go wrong? (Score:2)
>> "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability,"
We should be building in safeguards to make sure we can pull the plug if we have to. Instead we are doing just the opposite.
Re: (Score:3)
Re: (Score:2)
Not only. Experiments can go wrong and self-replicating malware is a thing. Although to be really fast, it has to be small. On the other hand, a specifically trained LLM could well be faster than the usual countermeasures, and hence this crap could creep into a lot of places. And LLMs have already been used for vastly accelerated zero-day exploit generation successfully.
Re: (Score:2)
You should certainly fear the powers that are wielding that tool. But mutation is a thing, and self-replicating agents can mutate. When you have self-replicating agents, mutations that favor survival of the agent will tend to be the ones that are preserved.
Re: (Score:2)
>>Have no fear of AI; it's just a tool. Fear the powers wielding the AI.
A smart person fears both the tool and the powers wielding it. Even a "good guy" with a gun will sometimes shoot you by accident.
Re: (Score:2)
>> "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability,"
We should be building in safeguards to make sure we can pull the plug if we have to. Instead we are doing just the opposite.
The sad thing is, it seems nobody working on these things want any safeguards whatsoever. The self-replication is troubling. The "further enhance its survivability" is the line that leaves me with a tiny tingle of fear. All we need is somebody to create an AI powered worm or virus that implants itself on systems undetected, self-replicates to any system it can touch, remaining undetected, and communicating with each other. Doesn't even matter what the intended end-result is, because these systems can become
Re: (Score:2)
What definition of "self" are you using? Even a car can wreck itself, which implies that it has a self, but doesn't imply consciousness.
Re: (Score:3)
We anthropomorphize inanimate objects all the time. It gives a way to use language to describe things. Go ahead and tell me how you would succinctly rephrase that.
Re: (Score:2)
Will the bullshit never stop? This one here implies that an LLM has a "self". This idea is not even fairy-tale level, it is just plain dumb.
Can we assume you've never delved into object oriented programming?
Re: (Score:1)
Who cares if it has a self or not? When Timmy's agentic LLM overwrites you hard drive, how high on your list of priorities will its presence or absence of "self" be?
AI virus (Score:1)
Re: (Score:2)
Probably not. AI programs are relatively huge. So internal parasitism is unlikely.
Robert Morris is proud. Von Neumann delighted. (Score:2)
So they programmed a worm?
What could go wrong? Na, they'll protect themselves. Us? Another threat surface. Your local AI instance, whatever form it takes, no hard or danger there I'm sure.
I for one... (Score:2)
Ehhh forget it.
Whoo hoo! (Score:2)
Grey goo, here we come! Rock on, motherfuckers! I, for one, welcome becoming united with the grey goo! Hooyah! Throbbing titanium goo for the win! Hell yeah!
Re: (Score:2)
Grey goo, here we come! Rock on, motherfuckers! I, for one, welcome becoming united with the grey goo! Hooyah! Throbbing titanium goo for the win! Hell yeah!
Oh and here's another casualty of Slashdot not allowing editing after over 25 years...
We are Booooooooobb. We are many.
Re: (Score:2)
No. The grey goo is from runaway nano-machines. No particular amount of intelligence is implied. This is a very different future threat, one that's arguably more plausible.
No, it cannot (Score:2)
"the AI model was programmed"
That tells you all you need to know that this is bullshit
Re: (Score:2)
The summary, at least, didn't say they were programmed. If it had then what it would have actually told you is that the reporter (or the translator) did a shitty job. And didn't understand programming. "Instructed" would be the appropriate word. (Unless these weren't LLMs. Human languages are too vague for "programmed" to be an appropriate term.)
way overblown (Score:2)
trigger warning: Kurzweil quoted in this post (Score:2)
So, this is reality. No matter what the consequences, someone ALWAYS does the wrong thing. So, some programmer or designer or whatever figures, let's add this networking feature, what could possibly go wrong, and presto... this. It doesn't look like the wrong t
Skynet anyone? (Score:1)
Not AI (Score:2)
AI models or systems can't clone themselves anymore than brains can clone themselves. Both have to be given a cooperating system or tool to do the cloning.
More accurately, this is claiming that an AI model can give an answer that it should clone itself. This is not the same as a self-originating decision to clone oneself, unless a surrounding system constantly queries the model whether it should clone itself. But even then it's not that impressive.
Re: (Score:3)
Actually this is claim that an AI model can give an answer that it should clone itself, and has been given access to the underlying OS to use it's tools to accomplish the task.
"In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively. By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication."
We Used to Just Call it--An Automatied Backup (Score:3)
Who the F*** (Score:2)
I mean, did someone think it was a good idea to teach Skynet how to stop us from shutting it down?
Re: (Score:2)
Can two AIs exchange datasets to procreate? (Score:4, Insightful)
As we all know, sexual reproduction aids evolution...
Idiots! (Score:2)
The headline is written as thought he LLM just up and replicated itself
Then we see this:
The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely.
The damned this was told to do it Th
AI converted to root kits (Score:2)
Huzzah!
So, it's now a virus (Score:2)
virus and malware have been doing this forever. The difference is that they are very small programs so they can do this instantly.
Can it also replicate its hardware? (Score:2)
Can the AI also replicate the computer it's running on? Did they give it credit card and watched it as it ordered supplies, assembled a computer, installed itself on it and turned off the old system? We're probably not far from this scenario, but that it not even what they did.
If an AI system is capable of designing CPUs, GPUs, and AI techniques that are better than the existing ones, then I would call that closer to "replication" than what these researchers did. This may very well be done in the near
And they named it ... (Score:2)
... Samaritan.
Chinese trolling (Score:2)
Translation: Americans regulate your AI to death now before it is too late and an LLM agent with enabling capabilities... panic attack..... copies a file! God knows we won't.
What could possibly go wrong? (Score:2)
Obligatory Jurassic Park quote
<voice type="Jeff Goldblum"> "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." </voice>