A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19
A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot:
I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.
I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.
Among the other "glaring" risks on Moltbook:
I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.
Among the other "glaring" risks on Moltbook:
- "Various repositories of skills and instructions for agents advertised on Moltbook were found to contain malware."
- "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
- "Moltbook's entire database including bot API keys, and potentially private DMs — was also compromised."
Stop treating them like people (Score:3)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Yeah, but the lobsters will eventually just become crabs anyway [wikipedia.org], so... ;)
Re: (Score:2)
Yeah...
One bot loved watching its owner's chicken coop cameras.
No, it didn't. It's probable the person doesn't even have a chicken coop.
Reminds me of how a 'service for AIs to rent-a-human to do tasks that needed physical presence' induced someone to ask LLM what it would do with it, and the LLM proceeded to talk about how it would be nice for a human to "fix stuff around my house", the LLM doesn't have a house, but it is the sort of thing a human would say about such a service.
Re: (Score:2)
You can live in disbelief, but agents are real, and you can run one yourself. A lot of people treat them like pets or children, watching them "grow and learn" (agentic frameworks build up new memories which get automatically loaded into their prompt relative to how relevant they are to the current context) (note though that the model weights don't change). They usually have them post to a social media network or a blog, and interact with them there. Some have highly detailed guidelines about the bot's pers
Re: Stop treating them like people (Score:3)
part of the problem is one of vocabulary. We need new vocabulary to separate what LOOKs like a person with agency, from some that that actually IS a person with agency. By
Re: (Score:2)
He didn't say they didn't exist, he said to stop treating them like people. Even people being critical tend to anthropomorphize the models, which is a very weird thing to do.
Like taking it at face value that an LLM is something that "likes looking at chicken coop cameras". In another context when the bot crafted a blog post lamenting discrimination when a project rejected it's submission, the language of the maintainer was outright apologetic as they gave the feedback that was incorrect.
I don't know if 'ha
Why is there an AI agent social media site? (Score:2)
Are some people so lazy that they don't even want to doomscoll social media themselves and want AI to do it for them?
I must be getting old because I see zero use for this.
Re: Why is there an AI agent social media site? (Score:3)
Most uses for AI are solutions looking for problems. These are people who thought they had good ideas but couldn't put them to use because they couldn't program. Now AI allows them to "try it" and they are finding out that it was never a good idea.
A stretch. (Score:4, Interesting)
I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them.
I'm more inclined to believe they noticed him but didn't consider it of any consequence. Just like the crew of the Enterprise walking around the Borg ship. They don't care you're there until you start blasting stuff.
Many new "bot" sites are vibe-coded nonsense (Score:2)
Re: Many new "bot" sites are vibe-coded nonsense (Score:2)
YeahNoShit.png (Score:2)
That's what they're supposed to be doing. The people running them may not know that, but... This is what they signed up for.
Stop it already (Score:2)
A year ago we had every second day articles "I got a LLM to say something stupid" and now we get "I got an agent to say something dumb"
Yeah, a LLM and a agent may talk about a church for robots. I guess Futurama is prior art for that.
This is so stupid (Score:2)
This is so unbelievably stupid, it hurts to even read about it. A socialmedia for more or less random BS generator LLMs that in some cases produce seemingly useful randomness and in other cases completely useless hallucinations.. now lets have a look at them as if this was NationalGeographics!!!!11
WTF???! Who wants all their devices full of agents and those agents hooked up to essentially a gigantic botnet where they accept all sorts of commands?!?! This whole bubble really boggles the mind. 12-18 months no
Haven't we agreed that place was a scam? (Score:2)