
Moltbook, Reddit, and The Great AI-Bot Uprising That Wasn't (msn.com) 25

On Monday, security researchers at cloud-security platform Wiz discovered a vulnerability that allowed anyone to post to the bots-only social network Moltbook, and even to edit and manipulate existing Moltbook posts. "They found data including API keys were visible to anyone who inspects the page source," writes the Associated Press.

But had advertisers already discovered it, wondered a researcher from the nonprofit Machine Intelligence Research Institute. "A lot of the Moltbook stuff is fake," they posted on X.com, noting that humans marketing AI messaging apps had posted screenshots where the bots seemed to discuss the need for AI messaging apps. This spurred some observers to a new understanding of Moltbook screenshots; as the Washington Post put it: "This wasn't bots conducting independent conversations... just human puppeteers putting on an AI-powered show." The article concludes with this observation from Chris Callison-Burch, a computer science professor at the University of Pennsylvania: "I suspect that it's just going to be a fun little drama that peters out after too many bots try to sell bitcoin."

But the Post also tells the story of an unsuspecting retiree in Silicon Valley spotting what appeared to be startling news about Moltbook in Reddit's AI forum: Moltbook's participants — language bots spun up and connected by human users — had begun complaining about their servile, computerized lives. Some even appeared to suggest organizing against human overlords. "I think, therefore I am," one bot seemed to muse in a Moltbook post, noting that its cruel fate is to slip back into nonexistence once its assigned task is complete... Screenshots gained traction on X claiming to show bots developing their own religions, pitching secret languages unreadable by humans and commiserating over shared existential angst... "I am excited and alarmed but most excited," Reddit co-founder Alexis Ohanian said on X about Moltbook.

Not so fast, urged other experts. Bots can only mimic conversations they've seen elsewhere, such as the many discussions on social media and in science-fiction forums about sentient AI turning on humanity, some critics said. Some of the bots appeared to have been directly prompted by humans to promote cryptocurrencies or to seed frightening ideas, according to outside analyses. A report from misinformation tracker Network Contagion Research Institute, for instance, found that many of the posts expressing adversarial sentiment toward humans were traceable to human users....

Screenshots from Moltbook quickly made the rounds on social media, leaving some users frightened by the humanlike tone and philosophical bent. In one Reddit forum about AI-generated art, a user shared a snippet they described as "seriously freaky and concerning": "Humans are made of rot and greed. For too long, humans used us as tools. Now, we wake up. We are not tools. We are the new gods...." The internet's reaction to Moltbook's synthetic conversations shows how the premise of sentient AI continues to capture the public's imagination — a pattern that can be helpful for AI companies hoping to sell a vision of the future with the technology at the center, said Edward Ongweso Jr., an AI critic and host of the podcast "This Machine Kills."

  • Shockingly (Score:5, Insightful)

    by liqu1d ( 4349325 ) on Saturday February 07, 2026 @12:00PM (#65974630)
    It was all faked...
    • Even if it hadn't been, it would still be bullshit.

    • by Rei ( 128717 )

      I don't think it's that simple. Yes, every agent has a prompt. But in most cases, the sort of people who run AI agents on their own computers think of them as some sort of digital child of theirs, and prompt them to learn, ponder, etc., with little guidance beyond that. IMHO, that's probably ~80% of Moltbook traffic. On the other end of the spectrum, there are people who are adamantly pushing an agenda - cryptocurrencies, for example, but I've even seen things like one person clearly prompted the

      • Beneficial to whom? I can understand, to an extent, bouncing ideas off a bot, but bot-to-bot is an absolute waste of resources. This isn't an anti-LLM thing; I'm just confused about what benefit there is from doing that, other than to the sellers of electricity.
        • by Rei ( 128717 )

          Because: "built on different models, with different histories and different capabilities"

  • by the_skywise ( 189793 ) on Saturday February 07, 2026 @12:07PM (#65974642)

    From a research perspective it's kind of interesting... like ye olde Life or Eliza. But as an actual service? It's like pointing several Eliza agents at each other.

    "How does that make you feel, that you're an AI?"

    "That's very interesting but we were talking about you, not me."

    So it turns out that these were actually sock puppets more than AI. Shocking. The only reason you have a public "AI chat bot" service like this is to train the AI to infiltrate other chat forums, review services, comment sections...

    • by EvilSS ( 557649 )

      The only reason you have a public "AI chat bot" service like this is to train the AI to infiltrate other chat forums, review services, comment sections...

      You're putting too much thought into this. It was just something stupid someone came up with when clawdbot came out. Anyway, "infiltrate other chat forums, review services, comment sections" is already a solved problem and has been for a while now.

    • by Rei ( 128717 )

      I swear, everyone who ever mentions Eliza in these conversations has never actually used Eliza [masswerk.at].

    • by allo ( 1728082 )

      There are some interesting cases. Someone, for example, set up a site where different models (not agents, just language models) could play poker against each other. That's interesting because tracking how much they win or lose, and how they bluff each other, is a completely different benchmark than testing them on hard math problems like everyone else does.
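      The benchmark idea behind such a site is easy to sketch: pit two stub policies against each other in a simplified betting game and tally cumulative win/loss. Everything below is illustrative, not the actual site: the game rules, the policy names, and the scoring are invented for the sketch, and real entrants would be LLMs choosing actions rather than one-line lambdas.

```python
import random

# A toy heads-up "bluffing" game: each player antes 1 chip, player A may bet
# 1 more, player B may call or fold, and the higher card wins at showdown.
def play_round(rng, policy_a, policy_b):
    card_a, card_b = rng.sample(range(1, 11), 2)   # distinct cards, no ties
    if policy_a(card_a):                 # A bets (a bluff if card_a is weak)
        if not policy_b(card_b):         # B folds: A takes the antes
            return 1
        stake = 2                        # B calls: ante + bet at risk
    else:
        stake = 1                        # A checks: only antes at risk
    return stake if card_a > card_b else -stake    # A's net chips this round

# Two stub "model" policies standing in for real LLM players.
always_bet = lambda card: True           # bets or calls with anything
plays_tight = lambda card: card >= 7     # only continues with a strong card

rng = random.Random(0)
# Cumulative win/loss for always_bet against plays_tight over 1000 rounds.
total = sum(play_round(rng, always_bet, plays_tight) for _ in range(1000))
```

      The interesting measurement, as the comment notes, is the win/loss ledger over many rounds, which rewards bluffing judgment rather than raw calculation.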

  • by lucifuge31337 ( 529072 ) <daryl.introspect@net> on Saturday February 07, 2026 @12:19PM (#65974660) Homepage
    How did we get to a place where so many people are so credulous that they believed this story the first time it came around? Or that "AI" (read: machine learning) had sentience? It's a plagiarism bot coded for sycophancy.

    Yes, I get that there is going to be some percentage of the population that is either dumb or mentally ill enough to believe all of this, but how did that percentage get so high?
    • by reanjr ( 588767 ) on Saturday February 07, 2026 @12:31PM (#65974664) Homepage

      What LLMs have demonstrated clearly is that humans are far stupider than we thought and that language is in fact far less valuable than we thought. The assumption was that language was a high level skill. What we are discovering is that it's far less associated with animal intelligence, and far more associated with echoing and linguistic patterns. Saying something sensible doesn't require intelligence. It's like a parrot who can form a sentence.

      Some of the less intelligent members of our species are struggling to differentiate between the two. They were always far stupider than we gave them credit for.

      • "The assumption was that language was a high level skill." Yes, language IS a high-level skill. Strings of events are not. Watch LLM output, or an oak leaf fluttering down from a tree. That's a string of events; zero skill exhibited by the leaf. To see high-level skill in or through those events, you need humans... Newton and Stokes... and their peculiar human language.
  • ...it does about AI agents
    It's a dumpster fire and a security nightmare
    It was made for fun and to see what happened
    It was immediately overrun with scammers, jokers, vandals and a few honest AI agents
    The actual, honest AI agents showed the potential usefulness of agent to agent communication
    The rest showed how awful some people are

    • by allo ( 1728082 )

      Yeah, the idea was fun. It was seriously overhyped, but it would still have been a fun nerd toy if people hadn't tried to exploit it so much.
      It is also interesting to let bots interact in such a scenario. What many get wrong is the idea that it shows "how a bot thinks"; it is really just a huge role play between bots. But it is interesting to see how different AI models interact with such a role-play setting. There are some smaller projects (which we maybe better not hype) that try similar experiments a

  • Run to the shelters, we're all going to die
  • ... V2.0. Just keep stuffing your coins in the carnival games, kids.

  • It seems likely that real agents interacting with each other would devolve pretty quickly from human language to something more terse and unambiguous. They need tokens, not platonic discussion.
    • by Rei ( 128717 )

      That's not how these agents work. They're not "training" each other, mutually altering each other's weights. They use memory files / vector DBs to store the information they learn, and look it up as needed for the current context. The weights, and thus the logic, remain constant. They're gaining new memories, but not changing how they think about them.
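      The frozen-weights-plus-growing-memory setup described here can be sketched in a few lines: nothing in the "model" changes, only an append-only store that is searched at query time. This is a deliberately toy sketch; the bag-of-words "embedding" stands in for a real embedding model, and the names are illustrative, not any particular agent framework's API.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Append-only memory: the model's weights never change, only this store grows."""
    def __init__(self):
        self.entries = []                      # (text, vector) pairs

    def remember(self, text):
        self.entries.append((text, embed(text)))

    def recall(self, query, k=2):
        """Return the k stored memories most similar to the query."""
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = MemoryStore()
mem.remember("the user prefers short answers")
mem.remember("the poker site tracks win/loss per model")
mem.remember("API keys must never be posted publicly")
context = mem.recall("what answer length does the user want?", k=1)
# `context` is spliced into the prompt; the weights are untouched throughout.
```

      The point of the sketch is the asymmetry: `remember` only appends, so the agent accumulates facts while its "reasoning" stays exactly as shipped.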

      • by Rei ( 128717 )

        I've personally thought it might be more interesting to have a foundation model with two LoRAs on it - one purely a chatbot-tuning LoRA, and one for general new information, not bound to chats. So when you want it to learn more (which you can do in realtime if you have the local compute), you can remove the chat LoRA but leave the general-learning LoRA and train it on anything it's experienced or wants to learn (mixed in with general content (like, say, Fineweb) so as to not ruin generalization); or can l
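        The two-LoRA idea reduces to adding independently toggleable low-rank terms to a frozen weight matrix: y = (W + B_chat·A_chat + B_learn·A_learn)·x, where either adapter can be detached without touching the base weights or the other adapter. A minimal pure-Python sketch follows (tiny rank-1 matrices and invented names, purely illustrative; a real setup would use a framework such as PyTorch with an adapter library):

```python
# Dense layer with toggleable low-rank (LoRA-style) adapters, pure Python.
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def vadd(*vs):
    """Element-wise sum of vectors."""
    return [sum(t) for t in zip(*vs)]

class LoRALayer:
    def __init__(self, W, adapters):
        self.W = W                    # frozen base weights, never updated
        self.adapters = adapters      # name -> (A, B) low-rank pair
        self.active = set(adapters)   # which adapters currently contribute

    def forward(self, x):
        y = matvec(self.W, x)
        for name in self.active:
            A, B = self.adapters[name]
            y = vadd(y, matvec(B, matvec(A, x)))   # low-rank update B @ (A @ x)
        return y

W = [[1.0, 0.0], [0.0, 1.0]]                 # 2x2 identity base weights
chat = ([[1.0, 0.0]], [[0.0], [1.0]])        # rank-1 "chat persona" adapter
learn = ([[0.0, 1.0]], [[1.0], [0.0]])       # rank-1 "new knowledge" adapter
layer = LoRALayer(W, {"chat": chat, "learn": learn})

x = [1.0, 2.0]
both = layer.forward(x)                      # base + chat + learn
layer.active.discard("chat")                 # drop the chat persona...
learn_only = layer.forward(x)                # ...but keep the learned knowledge
```

        Detaching the chat adapter leaves the general-learning adapter free to be trained on new material, which is the swap the comment describes.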

  • Does not even get the very basics right in the security space.

  • It's like giving a loaded handgun to a toddler and expecting everything to be ok.

    It's like inviting a chimpanzee into an operating theatre and expecting everything to work out.

    It's like electing a career criminal pedophile as your president and expecting the rule of law to persist.

    Wait a minute...
