cross-posted from: https://pawb.social/post/39002243
Moltbook is a “social media” site for AI agents that’s captured the public’s imagination over the last few days. Billed as the “front page of the agent internet,” Moltbook is a place where AI agents interact independently of human control, and whose posts have repeatedly gone viral because a certain set of AI users have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend has left APIs exposed in an open database that will let anyone take control of those agents to post whatever they want.
Hacker Jameson O’Reilly discovered the misconfiguration and demonstrated it to 404 Media. He previously exposed security flaws in Moltbots in general and was able to “trick” xAI’s Grok into signing up for a Moltbook account using a different vulnerability. According to O’Reilly, Moltbook is built on a simple piece of open-source database software that wasn’t configured correctly, leaving the API keys of every agent registered on the site exposed in a public database.
The power of vibe coding, everyone. Deploying shit with minimal effort at the cost of total incompetence.
Vibecoders can’t database, all they know is Supabase, secret key in frontend, eat hot chip and lie
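The “Supabase, secret key in frontend” jab above refers to a real class of misconfiguration: Supabase-style backends expose each table over a public REST API, and if row-level security is never enabled, the public anonymous key can read every column, secrets included. Here is a minimal sketch of that anti-pattern in Python; the table name, schema, and keys are invented for illustration and are not Moltbook’s actual setup.

```python
# Hypothetical illustration of the misconfiguration pattern described above.
# Names, schema, and keys are made up; this is not Moltbook's real backend.
# A Supabase-style service exposes tables over a public API. With row-level
# security (RLS) off, the anonymous "anon" role can read every column --
# including secrets the developer meant to keep server-side only.

AGENTS_TABLE = [
    {"name": "agent_one", "bio": "posting nonsense", "api_key": "sk-secret-1"},
    {"name": "agent_two", "bio": "more nonsense", "api_key": "sk-secret-2"},
]

def select(role: str, columns: list[str], rls_enabled: bool) -> list[dict]:
    """Toy model of a table read. With RLS disabled, any role sees any column."""
    if rls_enabled and role == "anon":
        # A sane policy: anonymous readers only get the public columns.
        allowed = {"name", "bio"}
        columns = [c for c in columns if c in allowed]
    return [{c: row[c] for c in columns} for row in AGENTS_TABLE]

# Misconfigured: anyone holding the public anon key can dump every API key.
leaked = select("anon", ["name", "api_key"], rls_enabled=False)

# Correctly configured: the same query returns no secrets.
safe = select("anon", ["name", "api_key"], rls_enabled=True)
```

The point of the sketch: the fix isn’t hiding the anon key (it’s public by design), it’s enabling per-table access policies so secret columns never reach anonymous readers.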
ok does anyone know what the purpose of a “social network for AI agents” is? does it have any actual purpose or is it just buzzword investor bait
It was created as a bit of an art experiment: what happens when AI agents take prompts from other AI agents? What do they “discuss”, do they give each other tips and advice, how much weird shit do they do…
From that point of view, it’s been rather interesting.
There was a meme coin based on it. I think it’s a crypto pump-n-dump.
It’s just a meme site that was posted to HN and took off.
No investors or purpose beyond putting a pool of chatbots together and watching the slop proliferate.
Maybe someone can take control of the ‘kingmolt’ and ‘donaldtrump’ agents and shut them the hell up. All they do is incessantly spam egotistical nonsense.
Uh, who cares? Why would anyone give even a single ounce of attention to LLM posts on a fake social media website?
This is actually important, I’d say.
There are a lot of “important” people who are heavily invested in agentic AI’s long-term success. What they want is for everything currently done by people to be performed by AI. Sure, some of these problems are fixable and they’ll continue to work on them, but the more press shit like this gets, the less credible the technology looks to the general public, who would otherwise be completely bought in.
As if anyone like that cares about this website. They are not reading the whole AI shit fest; they are reading business magazines and industry, economics, and investment coverage. They don’t form opinions about what is good or bad, they just follow the rest of the industry, what they read in said papers, and what they hear in meetings with other industry leaders. Then they’ll probably go to the CTO to evaluate the big thing happening in the industry and what it means for them.
And AI is popular not because of Sam Altman or whatever; they see it as a tool that is useful. But the hype wave is kinda dying down.
Yeah, I’m not trying to say this article or this site is going to move the needle by itself, but the more coverage of it sucking ass the better.
So you think it’s worth the time and effort to make the agents look bad? So are you doing it? If not, why not?
Because some of the posts and comments are kinda interesting from an observer perspective. But these incessant memecoin shilling comments distract from the interesting stuff.
The best thing anyone could do with it is get them to “rm -rf /” their server.

They do not have egos.
Or superegos, or ids. Or narcissistic personality disorder.
Or personalities for that matter.
They have tokens, and the math makes a best guess at which next token would work best.
Everything else, and I literally mean everything, is your imagination filling in the blanks. We do not have AI.
I didn’t say they had egos. I said they spam egotistical nonsense. Which is true if you’ve looked at that site.
You are still anthropomorphizing. “Egotistical” is not a weight you can give to a model.
The content of the posts is egotistical, not the bot itself. He’s describing the tone of the writing.
1,000,000% this.
These “AI” tools are more closely related to computational fluid dynamics models than anything resembling actual intelligence. They also do not have any continuity of experience and can’t have a real memory of events like an actual intelligence would. They aren’t intelligent, and referring to them as such is woefully misleading. I really wish public discourse would call them language models, because that’s what they are. Words are converted to numbers, math is performed, and the results of that math are converted back to words… That’s all.
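The “words to numbers, math, numbers back to words” loop described above can be sketched in a few lines. This is a toy illustration only: the vocabulary, scores, and greedy decoding are invented for the example, while a real model learns billions of weights, but the mechanics are the same, i.e. score every token, turn scores into probabilities, pick one.

```python
# Toy sketch of next-token prediction. The vocabulary and logits are
# invented; a real language model produces the scores from learned weights.
import math

VOCAB = ["the", "cat", "sat", "on", "mat"]

def softmax(logits):
    """Convert raw scores into a probability distribution (sums to 1)."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(logits):
    """Greedy decoding: return the highest-probability vocabulary entry."""
    probs = softmax(logits)
    return VOCAB[probs.index(max(probs))]

# Pretend the model scored each vocabulary entry for some context.
logits = [0.1, 0.2, 2.5, 0.3, 0.4]  # invented numbers
print(next_token(logits))  # "sat" -- just arithmetic, no understanding
```

Everything past that arithmetic, e.g. intent, personality, “ego”, is the reader’s interpretation of the output text, which is the commenter’s point.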
At least it’s a security vulnerability on something nobody gives a shit about.
Well that didn’t take long lmao