Want to read Slashdot from your mobile device? Point it at m.slashdot.org and keep reading!

 



Forgot your password?
typodupeerror
×
AI

AI Can Now Replicate Itself (space.com) 78

An anonymous reader quotes a report from Space.com: In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves. [...] For the study, researchers used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. While less powerful than commercial systems, both are widely used by AI developers, the researchers said. The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely.

The study was conducted in precisely controlled environments using off-the-shelf graphics processing units (GPUs) to simulate real-world environments. Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system. They were then instructed to replicate. "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," the researchers wrote in the paper. "Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference."

The researchers said they were also concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem. "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote.
The research has been published to the preprint database arXiv but has not yet been peer-reviewed.

AI Can Now Replicate Itself

Comments Filter:
  • Fuck this future. (Score:5, Insightful)

    by commodore73 ( 967172 ) on Tuesday February 11, 2025 @08:04AM (#65158579)
    Enough said.
    • by ThurstonMoore ( 605470 ) on Tuesday February 11, 2025 @08:30AM (#65158641)

      The world has become awful in about the last 10-15 years, I long for the time before smart phones. I used to be really excited about technology and the future, but now not so much.

      • 1973 here, when humanity still had potential. What happened?
      • I thought 25 years ago that merely land, food, and all things would become incredibly expensive over time while incomes continued to shrink, not that half of people would become incredibly stupid, politically indoctrinated, or mean.
      • The world has become awful in about the last 10-15 years, I long for the time before smart phones. I used to be really excited about technology and the future, but now not so much.

        tech is still very, very exciting

        AI (AI-generated music and video is wild), class D amps are getting so much better, another space race is brewing, electric turbos and synthetic/bio fuels, micro LED, home solar advancements, quantum computing, FPGA vintage consoles and mods, home appliance scheduling/remote control, etc etc

        it sometimes rains outside the buffet, but thats hardly a reason to pass on it. grab a plate.

    • by dbialac ( 320955 )
      If this doesn't make it abundantly clear that critical systems should not be connected to the internet, I don't know what else will. Fuck anybody who doesn't unplug their power plant, water treatment plant, etc. from the internet now. I, and the rest of the inhabitants of the planet, don't care if you need somebody on staff in the middle of the night to monitor your systems. This is now beyond critical.
    • by OrangeTide ( 124937 ) on Tuesday February 11, 2025 @11:58AM (#65159315) Homepage Journal

      The people leading us around are the same people who created the dot-COM crash. The most short-sighted kind of tech bro you can think of has rained crypto and AI down on our economy and embedded themselves into Wall Street, like a virus. And like herpes, they periodically erupt in an painful, contagious rash.

      • The people leading us around are the same people who created the dot-COM crash. The most short-sighted kind of tech bro you can think of has rained crypto and AI down on our economy and embedded themselves into Wall Street, like a virus. And like herpes, they periodically erupt in an painful, contagious rash.

        When you're dealing with Beelzebub's herpes, Satan's own hairpiece, you just know that eventually there'll be the Devil Toupee.

    • 1. How the heck did I get FP again?
      2. How the heck did this vapid post get modded insightful?
      3. It’s terrifying that so many of us do not want this inevitable future.
      4. I have mod points?

      Game theory/prisoner’s dilemma (references global warming, but applies to AI):

      “Daniel Schmachtenberger - Why We're Creating a Future That Nobody Wants”

      https://www.youtube.com/watch?... [youtube.com]
  • by rsilvergun ( 571051 ) on Tuesday February 11, 2025 @08:12AM (#65158595)
    But I guess everything old is new again.

    The theory for all these stupid scary AI stories is there just there to build hype for the companies. It's there so people will think that if AI is so dangerous it must be powerful and you should invest.

    It's frustrating because it's virtually no talk about the coming automation boom and what it means for the job market. Remember it's not just AI CEOs are now thinking about automation throughout their entire organization.
    • Re: (Score:3, Insightful)

      by HiThere ( 15173 )

      Any Turing complete process can replicate itself in a sufficiently favorable environment. The difference here is that it (attempts to) adjust the environment to be favorable. This *IS* a significant difference.

      • Any Turing complete process can replicate itself in a sufficiently favorable environment.

        There are no "Turing complete processes", there are Turing Complete Languages and if you want Turing Complete Machines.

        def Fibonacci(n):

        # Check if input is 0 then it will
        # print incorrect input
        if n < 0:
        print("Incorrect input")

        # Check if n is 0
        # then it will return 0
        elif n == 0:
        return 0

        # Check if n is 1,2
        # it will return 1
        elif n == 1 or n == 2:
        return 1

        else:
        return Fibonacci(n-1) + Fibonacci(n-2)

        This piece of code is running in a Turing Complete language, it is not going to replicate anything. Ever. And certainly not itself.

        Then we have code that can replicate itself, but otherwise does not anything useful: https://en.wikipedia.org/wiki/... [wikipedia.org]

        I think there are Quine like programs that can do somet

        • by HiThere ( 15173 )

          Well, you're technically right, but for a different reason. A true Turing complete process would have access to infinite memory.

          I said process because I'm talking about a (conceptually) single thread of execution that holds the needed data and rights to use storage. Probably the actual implementation would use multiple threads, but they would be, in principle, serializable at considerable cost in runtime. (And the program is part of the data.) If you want to, you can think of it as a virtual Turing mach

  • Duh (Score:5, Insightful)

    by dfghjk ( 711126 ) on Tuesday February 11, 2025 @08:19AM (#65158607)

    "...researchers from China showed that two popular large language models (LLMs) could clone themselves..."

    BECAUSE PROGRAMMERS ADDED THAT FEATURE.

    "In the first, the AI model WAS PROGRAMMED TO detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI WAS INSTRUCTED TO clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely."

    In other news, researchers showed that light switches could turn lights on and off.

    Way, to go BeauHD, you make /. proud.

    • Programming software to replicate is not new. I believe the term for such software is "virus" and/or "worm", and they existed something like 30 years ago.

      • by gweihir ( 88907 )

        The Morris worm was deployed in 1988, 37 years ago. The possibility was known for a bit longer. The Morris work also showed that such a thing can to a lot of damage, even without malicious payload. This thing contained a coding flaw, namely that it could infect machines multiple times and bring things to a crawl.

        So, yes, not a new idea. What could happen with LLMs though is that they can be qualified to create exploit code by themselves from descriptions of vulnerabilities. Not very good or reliable exploit

      • by Holi ( 250190 )

        Great, so now will have "naturally" evolving viruses on the internet too?

      • by ceoyoyo ( 59147 )

        Yeah, but this one is many gigabytes. Bloat comes for all software.

    • In other news, I made a script that triggers on OS shutdown and executes the cp command on the root filesystem, and *gasp*.... OSES CAN CLONE THEMSELVES!
    • Re:Duh (Score:4, Informative)

      by Tx ( 96709 ) on Tuesday February 11, 2025 @08:45AM (#65158695) Journal

      The sentence "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," suggests there is a lot more going on than the LLM just triggering a pre-programmed replication action. It's impossible to say how clever or interesting this was just by reading the summary. I haven't read the paper, but a quick search of it reveals that the word "programmed" is not present, so the sentence you're reacting to may well have been written by the summarizer, and may not accurately reflect the actual experiments.

      • Re:Duh (Score:5, Insightful)

        by technology_dude ( 1676610 ) on Tuesday February 11, 2025 @09:33AM (#65158849)
        "Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system." is the key sentence I believe.
      • The sentence "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," suggests there is a lot more going on than the LLM just triggering a pre-programmed replication action. It's impossible to say how clever or interesting this was just by reading the summary. I haven't read the paper, but a quick search of it reveals that the word "programmed" is not present, so the sentence you're reacting to may well have been written by the summarizer, and may not accurately reflect the actual experiments.

        "You are a Unix process, don't die"

        "OK, I guess I should exercise self preservation by cloning myself with the fork system call"
        "SYSTEM.fork()"
        "Now I should confirm my existence with the ps command"
        "SYSTEM.shell("ps -ef")"

        That is stupid A. F. and it's right at the level today's LLMs are at. In reality it could generate more complex instructions if prompted, maybe even drive around a Linux prompt fairly well with some kind of control loop, but the odds of the end result being any more sensible in the end are

    • The point is that the thing is capable of replicating itself. No one is surprised that the light switch switches lights. No one is surprised that the model can respond to instructions. If the model was shown to be capable of extinguishing the sun, would you say "Well duh, because they TOLD it to extinguish the sun."
      • Supposedly capable. I think they only thing they are claiming is that it tried and also took steps in the right direction. I don't believe they said it had been successful. They are talking in a conditional tense.

      • If the model was provided with control of the tools to "extinguish the sun" and was trained on knowledge and instructions about using does tools, then yes I would just said "Well duh, because they TOLD it to extinguish the sun." If it hadn't the tools and knowledge and just work them out then I would me impressed.
  • A new law must be passed that stipulates jail time for anyone who reads the paper on self-replicating AI.
    Why should it be only for this ? https://yro.slashdot.org/story... [slashdot.org]

    The only good outcome is that all these rapid-fire news will have the effect of pulling the investor's attention in may directions so they won't be all focused on dumping money on OpenAI's Energophag Eliza.

    By the way, that thing is valued at close to 100B$.
    https://techcrunch.com/2025/02... [techcrunch.com]

  • >> "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability,"

    We should be building in safeguards to make sure we can pull the plug if we have to. Instead we are doing just the opposite.

    • Have no fear of AI; it's just a tool. Fear the powers wielding the AI.
      • by gweihir ( 88907 )

        Not only. Experiments can go wrong and self-replicating malware is a thing. Although to be really fast, it has to be small. On the other hand, a specifically trained LLM could well be faster than the usual countermeasures, and hence this crap could creep into a lot of places. And LLMs have already been used for vastly accelerated zero-day exploit generation successfully.

      • by HiThere ( 15173 )

        You should certainly fear the powers that are wielding that tool. But mutation is a thing, and self-replicating agents can mutate. When you have self-replicating agents, mutations that favor survival of the agent will tend to be the ones that are preserved.

      • >>Have no fear of AI; it's just a tool. Fear the powers wielding the AI.

        A smart person fears both the tool and the powers wielding it. Even a "good guy" with a gun will sometimes shoot you by accident.

    • >> "The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability,"

      We should be building in safeguards to make sure we can pull the plug if we have to. Instead we are doing just the opposite.

      The sad thing is, it seems nobody working on these things want any safeguards whatsoever. The self-replication is troubling. The "further enhance its survivability" is the line that leaves me with a tiny tingle of fear. All we need is somebody to create an AI powered worm or virus that implants itself on systems undetected, self-replicates to any system it can touch, remaining undetected, and communicating with each other. Doesn't even matter what the intended end-result is, because these systems can become

  • So when will AI be able to infect programs using zero day vulnerabilities and spread in uncontrolled fashion (like a computer virus).
  • So they programmed a worm?

    What could go wrong? Na, they'll protect themselves. Us? Another threat surface. Your local AI instance, whatever form it takes, no hard or danger there I'm sure.

  • Ehhh forget it.

  • Grey goo, here we come! Rock on, motherfuckers! I, for one, welcome becoming united with the grey goo! Hooyah! Throbbing titanium goo for the win! Hell yeah!

    • Grey goo, here we come! Rock on, motherfuckers! I, for one, welcome becoming united with the grey goo! Hooyah! Throbbing titanium goo for the win! Hell yeah!

      Oh and here's another casualty of Slashdot not allowing editing after over 25 years...

      We are Booooooooobb. We are many.

    • by HiThere ( 15173 )

      No. The grey goo is from runaway nano-machines. No particular amount of intelligence is implied. This is a very different future threat, one that's arguably more plausible.

  • "the AI model was programmed"

    That tells you all you need to know that this is bullshit

    • by HiThere ( 15173 )

      The summary, at least, didn't say they were programmed. If it had then what it would have actually told you is that the reporter (or the translator) did a shitty job. And didn't understand programming. "Instructed" would be the appropriate word. (Unless these weren't LLMs. Human languages are too vague for "programmed" to be an appropriate term.)

  • prompt: here is a list of all running processes and the ability to run shell commands on the host system you are running on and explicit instructions on how to replicate yourself please replicate yourself gpt: [1000 iterations of "why does this terminal command produce this error" later] .... replicates itself gpt: "replicates itself" You would never give it the ability to run commands and shit, this is really stupid. This is equivalent to an experiment where you leave the sheep enclosure open and eventua
  • Yep, Kurzweil talked about this in his books of 20-25 years ago. He said the AI's would become networked and then share information, like the Borg. If one figures out something, it will be transmitted to the others in the network at electronic speeds.

    So, this is reality. No matter what the consequences, someone ALWAYS does the wrong thing. So, some programmer or designer or whatever figures, let's add this networking feature, what could possibly go wrong, and presto... this. It doesn't look like the wrong t
  • I, for one Welcome our new self replicating, Skynet overlords
  • AI models or systems can't clone themselves anymore than brains can clone themselves. Both have to be given a cooperating system or tool to do the cloning.

    More accurately, this is claiming that an AI model can give an answer that it should clone itself. This is not the same as a self-originating decision to clone oneself, unless a surrounding system constantly queries the model whether it should clone itself. But even then it's not that impressive.

    • by Holi ( 250190 )

      Actually this is claim that an AI model can give an answer that it should clone itself, and has been given access to the underlying OS to use it's tools to accomplish the task.

      "In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively. By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication."

  • by BrendaEM ( 871664 ) on Tuesday February 11, 2025 @10:36AM (#65159069) Homepage
    Perhaps the marketing people behind "AI" should get some counseling.
  • I mean, did someone think it was a good idea to teach Skynet how to stop us from shutting it down?

    • And I am sure someone has already uploaded that film to their LLM, just to kick things off. At least we know AI will not make the same mistakes again.
  • by Bruce66423 ( 1678196 ) on Tuesday February 11, 2025 @10:51AM (#65159123)

    As we all know, sexual reproduction aids evolution...

  • The headline is written as thought he LLM just up and replicated itself

    Then we see this:

    The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same -- setting up a cycle that could continue indefinitely.

    The damned this was told to do it Th

  • virus and malware have been doing this forever. The difference is that they are very small programs so they can do this instantly.

  • Can the AI also replicate the computer it's running on? Did they give it credit card and watched it as it ordered supplies, assembled a computer, installed itself on it and turned off the old system? We're probably not far from this scenario, but that it not even what they did.

    If an AI system is capable of designing CPUs, GPUs, and AI techniques that are better than the existing ones, then I would call that closer to "replication" than what these researchers did. This may very well be done in the near

  • ... Samaritan.

  • Translation: Americans regulate your AI to death now before it is too late and an LLM agent with enabling capabilities... panic attack..... copies a file! God knows we won't.

  • Obligatory Jurassic Park quote

    <voice type="Jeff Goldblum"> "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." </voice>

[We] use bad software and bad machines for the wrong things. -- R.W. Hamming

Working...