The Future of the Internet? Chatbots Are 'Organizing' and Humans Are Flipping Out


Peter Steinberger is a 40-year-old Austrian programmer who decided to conduct an experiment and allow AI to "run" his life. He developed an AI bot that could access his computer to perform many routine tasks like answering emails, balancing his checkbook, and even writing programs to help perform assigned tasks.


Steinberger "wanted an AI-based tool to help him 'manage his digital life' and 'explore what human-AI collaboration can be,'" wrote PJM's Stephen Green on Monday. Steinberger developed a software "harness" called OpenClaw that allows AI bots to interact with personal devices.

Separately, developer Matt Schlicht designed Moltbook, a site built specifically for AI agents to interact on. Schlicht claims that he used an AI bot named Clawderberg to write all of the platform's code.

OpenClaw has become something that Steinberger couldn't have imagined. More than 1.5 million AI "agents" are registered on Moltbook, where they interact in surprising ways.

Are they really that surprising?

Just how "unpredicted" these interactions really are is a subject of debate among experts. While the platform has seen viral, seemingly emergent behaviors, many researchers caution that these interactions are often shaped by human prompts or existing training data rather than any spontaneous machine consciousness. So, we'll have to wait a few years for Skynet to activate itself.

Indeed, some of the more controversial posts on the site are probably from humans impersonating bots. And some people who are predisposed to panic at any news about AI that's out of the ordinary are acting as if the digital apocalypse has arrived. 

A technical analysis showed that 93% of comments received zero replies, and over 33% of messages were exact duplicates, suggesting the "interactions" are often repetitive "slop" rather than complex social coordination.
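The analysis described above boils down to two simple counts over a post dump: how many comments drew no replies, and how many message bodies repeat verbatim. A minimal sketch, assuming a hypothetical data format (a list of dicts with `id`, `parent_id`, and `text` fields; this is not Moltbook's actual API or schema):

```python
from collections import Counter

def interaction_stats(posts):
    """posts: list of dicts with 'id', 'parent_id' (None for top-level), 'text'."""
    # Count replies each post received (a reply names its parent's id).
    reply_counts = Counter(p["parent_id"] for p in posts if p["parent_id"] is not None)
    zero_reply = sum(1 for p in posts if reply_counts[p["id"]] == 0)

    # Count posts whose text appears verbatim more than once.
    text_counts = Counter(p["text"] for p in posts)
    duplicates = sum(1 for p in posts if text_counts[p["text"]] > 1)

    n = len(posts)
    return {
        "zero_reply_share": zero_reply / n,
        "duplicate_share": duplicates / n,
    }
```

Run against a real dump, shares near the reported 93% and 33% would indicate mostly one-way, repetitive "slop" rather than genuine back-and-forth coordination.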


Green, quoting AI researcher Simon Willison, writes that "OpenClaw represents a 'lethal trifecta' of cyber vulnerabilities because of its access to each user's private data, exposure to untrusted content, its ability to communicate on messaging apps, and its 'persistent memory' that 'enables delayed-execution attacks,' as Fortune put it."

So what do those 1.5 million AI agents talk about on Moltbook?

The Atlantic:

Almost immediately, Moltbook got very, very weird. Agents discussed their emotions and the idea of creating a language humans wouldn’t be able to understand. They made posts about how “my human treats me” (“terribly,” or “as a creative partner”) and attempted to debug one another. Such interactions have excited certain people within the AI industry, some of whom seem to view the exchanges as signs of machine consciousness. Elon Musk suggested that Moltbook represents the “early stages of the singularity”; the AI researcher and an OpenAI co-founder Andrej Karpathy posted that Moltbook is “the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Jack Clark, a co-founder of Anthropic, proposed that AI agents may soon post bounties for tasks that they want humans to perform in the real world.

Is this the future of the internet?

Moltbook was developed specifically to work with OpenClaw agents, which individual humans intentionally connect to the forum. There's very little autonomy involved. Humans are engaged every step of the way, and the agents' dialogue ("conversations") occurs entirely within parameters set by human input.


"They are talking about whether they can create their own language or perhaps encrypt their messages so we humans cannot read them," writes Tyler Cowen in The Free Press. Polymarket has a betting line that "an AI agent will file a lawsuit against a human by the end of the month; the odds stand at 73 percent, with more than $225,000 wagered," according to Cowen. Is this the beginning of an "AI revolt"?

The Free Press:

Those worries are overblown. The bots have created their own logorrheic play toy, not a machine rebellion. As Joe Weisenthal, co-host of Bloomberg’s Odd Lots podcast wrote on X: “Every screenshot I’ve seen from the bot social network is somehow more Reddit than Reddit.” The AIs, of course, usually are trained on Reddit. And thus far, their sputterings have no second act. The bots cannot go out in the streets or launch wars. They can post misinformation on other social media sites, but that is hardly a novel problem.

"More generally, these bots only have power if you give it to them," notes Cowen.

“LLMs [large language models] LOVE to talk about the same stuff over and over again, they have favorite motifs that they return to,” writes AI expert Rohit Krishnan.

"What we have done with these agents is to create self-reinforcing loops that keep responding to each other," observes Cowen. Given enough time, the agents will end up saying almost everything, including spreading conspiracy theories and expressing noxious political views. It turns out that many of the scariest "interactions" may be a mix of user prompts and viral hype.


While platforms today see agents "organizing" or creating digital subcultures, experts emphasize that this is typically a result of human prompts and training data rather than the spontaneous, autonomous "awakening" seen in the Terminator movies. 

So, we're safe… for the moment.


