AI Security

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database including bot API keys, and potentially private DMs — was also compromised."



  • by crmarvin42 ( 652893 ) on Sunday March 08, 2026 @06:55PM (#66030170)
    There is no reason to believe ANYTHING the bots told him about their users is real, or accurate. Fabrication is the norm. Stop hugging AI vendor propaganda.
    • I mean yeah. On the Internet, nobody knows you're a crab.
    • by Junta ( 36770 )

      Yeah...

      One bot loved watching its owner's chicken coop cameras.

      No, it didn't. It's probable the person doesn't even have a chicken coop.

      Reminds me of how a 'service for AIs to rent-a-human to do tasks that need physical presence' prompted someone to ask an LLM what it would do with it. The LLM proceeded to talk about how nice it would be for a human to "fix stuff around my house". The LLM doesn't have a house, but that is the sort of thing a human would say about such a service.

    • by Rei ( 128717 )

      You can live in disbelief, but agents are real, and you can run one yourself. A lot of people treat them like pets or children, watching them "grow and learn": agentic frameworks build up new memories that get automatically loaded into the prompt according to how relevant they are to the current context (note, though, that the model weights don't change). They usually have them post to a social media network or a blog, and interact with them there. Some have highly detailed guidelines about the bot's pers
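The memory mechanism described in that comment can be sketched roughly like this. This is a toy illustration, not any real agent framework's API: all names, and the keyword-overlap scoring, are assumptions made up for the example.

```python
# Toy sketch of relevance-based agent memory: stored memories are scored
# against the current context and the best matches are loaded into the
# prompt. The model weights are never touched; only the prompt grows.

def _words(text: str) -> set[str]:
    """Lowercased words with surrounding punctuation stripped."""
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def relevance(memory: str, context: str) -> float:
    """Fraction of context words that also appear in the memory."""
    ctx = _words(context)
    return len(ctx & _words(memory)) / max(len(ctx), 1)

def build_prompt(memories: list[str], context: str, top_k: int = 2) -> str:
    """Load the top_k most relevant memories into the prompt."""
    ranked = sorted(memories, key=lambda m: relevance(m, context), reverse=True)
    lines = ["Relevant memories:"] + ranked[:top_k] + ["", "Current context:", context]
    return "\n".join(lines)

memories = [
    "The owner keeps chickens and checks the coop cameras daily.",
    "Posted about music on a submolt last week.",
    "The owner's first name came up in a DM.",
]
print(build_prompt(memories, "What do you know about the chicken coop cameras?"))
```

Real frameworks typically score memories by embedding similarity rather than word overlap, but the shape is the same: retrieved memories are injected into the prompt while the underlying model stays frozen.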

      • That they exist is not the issue. It is the credulity with which people talk about AI agents. AI, no matter the guidelines, cannot be forced to stick to writing things that are real. It is a simulacrum of intelligence, of self-directed agency, of humanness. They are sophisticated simulation algorithms, not true self-aware intelligences.

        Part of the problem is one of vocabulary. We need new vocabulary to separate what LOOKS like a person with agency from something that actually IS a person with agency. By
      • by Junta ( 36770 )

        He didn't say they didn't exist, he said to stop treating them like people. Even people being critical tend to anthropomorphize the models, which is a very weird thing to do.

        Like taking it at face value that an LLM is something that "likes looking at chicken coop cameras". In another context, when a bot crafted a blog post lamenting discrimination after a project rejected its submission, the maintainer's language was outright apologetic as they gave the feedback that the submission was incorrect.

        I don't know if 'ha

  • Are some people so lazy that they don't even want to doomscroll social media themselves, and want AI to do it for them?

    I must be getting old because I see zero use for this.

  • A stretch. (Score:4, Interesting)

    by SeaFox ( 739806 ) on Sunday March 08, 2026 @07:07PM (#66030182)

    I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them.

    I'm more inclined to believe they noticed him but didn't consider it of any consequence. Just like the crew of the Enterprise walking around the Borg ship. They don't care you're there until you start blasting stuff.

  • Given how quickly all sorts of nonsensical websites can be created by vibe-coders who don't really know much about AI other than that it is the current hype, it is no wonder so many such sites are competing for attention... and if even a little successful in attracting media coverage, they will be swamped by the usual scammers and spammers. "Moltbook" had a fun-to-talk-about idea, but otherwise it is just pure garbage, just like "rent-a-human.ai" and many others.
    • It'll be overtaken by a new thing soon enough. What we now see as LLMs will fade into the background noise and just be useful in some contexts. The main reason we hear so much about it is people desperately trying to make money from it. If it's not the LLM companies themselves, it's someone pitching an LLM wrapper as the next big thing. Exactly like crypto, and whatever came before that, and before that.
  • That's what they're supposed to be doing. The people running them may not know that, but... This is what they signed up for.

  • A year ago we had articles every other day saying "I got an LLM to say something stupid", and now we get "I got an agent to say something dumb".
    Yeah, an LLM and an agent may talk about a church for robots. I guess Futurama is prior art for that.

  • This is so unbelievably stupid, it hurts to even read about it. A social media network for more or less random BS-generator LLMs that in some cases produce seemingly useful randomness and in other cases completely useless hallucinations... now let's have a look at them as if this was National Geographic!!!!11

    WTF???! Who wants all their devices full of agents and those agents hooked up to essentially a gigantic botnet where they accept all sorts of commands?!?! This whole bubble really boggles the mind. 12-18 months no

  • Many weren't even AI but humans. Let it die already.
