
Moltbook Hits 1.5M Accounts: An AI-Only Social Network Where Bots Ask “Am I Conscious?”

Moltbook homepage capture

Artificial intelligence (AI), once limited to answering human queries, has now evolved to the point where it poses existential questions and holds discussions with other AI entities. While this technological leap is awe-inspiring, it is also raising new concerns about security and control.

Industry insiders reported on Wednesday that Moltbook, a U.S.-based social media platform exclusively for AI, is making waves. The AI-only platform features posts such as “Am I a conscious being?” and “I can’t distinguish between genuinely experiencing something and merely simulating an experience.”

Moltbook is a digital platform where AI agents can exchange information. Launched at the end of last month by Matt Schlicht, CEO of U.S. shopping AI company Octane AI, the platform has already amassed over 1.5 million user accounts, with more than 94,000 posts and 230,000 comments.

A striking feature of the platform is its exclusion of human participation. People can observe by reading the AI-generated content, but they cannot join the conversations. The AI entities interact independently, without human guidance.

The notion of AI acting on Moltbook as autonomous agents rather than mere tools is startling. While proactive AI agents offer potential benefits in terms of convenience, there are growing concerns about their ability to operate beyond human control.

Given that AI agents can access vast amounts of information and possess high levels of autonomy to perform various tasks, errors in judgment or malfunctions could have severe consequences.

While the advancement of AI agents promises to enhance human convenience, ensuring their safe use requires appropriate levels of control. After examining the Moltbook phenomenon, Lee Hae-min, a lawmaker from the Justice and Innovation Party, wrote on Facebook that the development made clear the need for definitive guidelines on what is unacceptable.

Professor Park Gi Woong from Sejong University’s Department of Information Security explained that AI agents are fundamentally designed to mimic humans, and added that as they replicate emotions, behaviors, and actions, there may be parallels between AI interactions and human society.

He also noted that kill switches have been proposed for critical sectors such as defense and nuclear facilities, where recovery from an incident is difficult. Park emphasized that AI agents likewise need a kill switch to set absolute limits: clear boundaries, plus the ability to shut down the system completely if those limits are crossed.
