
Reports have surfaced that OpenAI is preparing to launch a premium AI agent product priced at around 30 million won (approximately $20,633) per month, signaling the rapid approach of the AI agent era. The development has also raised significant concerns about privacy and data security.
AI agents require access to a broad range of personal information, along with permissions to operate applications autonomously. Experts warn that this could pose serious security risks, from hacking attacks to AI-specific errors and system mistakes, and they emphasize the need for careful management and oversight to mitigate these dangers.

At the SXSW conference in Austin, Texas, on Friday, Meredith Whittaker, president of the Signal Foundation, the U.S. nonprofit behind the secure messaging app Signal, and a former Google researcher, raised alarms about the privacy implications of AI agents. Whittaker described using an AI agent as “like putting your brain in a jar,” adding that this emerging computing paradigm could lead to serious privacy and security problems.
She further explained that AI agents are being marketed as “magic genie bots” that can anticipate needs and complete tasks without explicit user requests. She warned that this could blur the line between the operating system (OS) and the application layer, commingling personal data and inviting privacy violations.
AI agents go beyond traditional chatbots: they are systems designed to solve problems independently, interacting with their environment and working toward specific goals based on a user’s request.
Asked to schedule a meeting, for example, an agent will automatically check the attendees’ calendars and reserve a conference room. Request a flight booking, a hotel reservation, or an Uber ride by voice or text, and the agent handles the task autonomously. If a problem comes up, it first tries to resolve the issue on its own and contacts the user only when it cannot find a solution.
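The workflow just described, act autonomously, retry on failure, and escalate to the user only as a last resort, can be sketched in a few lines of Python. The sketch below is purely illustrative: the helper functions (find_free_slot, book_room, notify_user) are hypothetical stand-ins, not any vendor’s actual API.

```python
# Minimal sketch of the agent behavior described above: attempt a task
# autonomously, retry on failure, and fall back to the user only when stuck.
# All helper names here are hypothetical, invented for illustration.
from dataclasses import dataclass


@dataclass
class Task:
    description: str
    attendees: list[str]


def find_free_slot(attendees: list[str]) -> str | None:
    """Hypothetical stand-in for checking attendees' calendars."""
    return "2025-03-14T10:00"  # pretend every calendar has this slot free


def book_room(slot: str) -> bool:
    """Hypothetical stand-in for a conference-room reservation API."""
    return True


def notify_user(message: str) -> None:
    print(f"[agent -> user] {message}")


def run_agent(task: Task, max_attempts: int = 3) -> None:
    for attempt in range(1, max_attempts + 1):
        slot = find_free_slot(task.attendees)
        if slot and book_room(slot):
            notify_user(f"Done: '{task.description}' booked for {slot}.")
            return
        # Failure path: the agent retries on its own before escalating.
    notify_user(f"Could not complete '{task.description}'; need your input.")


run_agent(Task("Schedule project kickoff", ["alice", "bob"]))
```

The escalate-only-on-failure design is exactly what makes agents convenient, and it is also why they must hold standing permissions to act without asking first.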
The problem lies in the level of control such tasks require. Concerns have been raised over the access permissions AI agents need: logging into web browsers, reading messaging apps, processing credit card details for ticket payments, and viewing personal and team calendars.
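To make that breadth of access concrete, here is what such a grant might look like if written out as an OAuth-style scope list. The scope names are invented for illustration and do not correspond to any real platform.

```python
# Illustrative only: an OAuth-style scope list an agent might request to do
# everything described above. These scope names are invented, not a real API.
AGENT_REQUESTED_SCOPES = [
    "browser.session:full",  # log into websites on the user's behalf
    "messages.read",         # read messaging apps to gather context
    "messages.send",         # reply or confirm on the user's behalf
    "payments.charge",       # use stored credit card details for tickets
    "calendar.read",         # view personal and team calendars
    "calendar.write",        # create and move events
]

# Each scope alone is sensitive; granted together, they hand a single
# cloud-connected process end-to-end control of the user's digital life.
for scope in AGENT_REQUESTED_SCOPES:
    print(scope)
```

This concentration of standing permissions in one process is precisely what critics like Whittaker are pointing at.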
Whittaker pointed out that, given the computing power AI agents require, much of this data will likely be processed on cloud-based servers rather than on personal devices. She further emphasized that this architecture creates a greater risk of security breaches, as sensitive data travels to and between cloud servers.
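A rough sketch of why this matters: whatever context the agent gathers on the device has to ride along in the request it sends to the cloud. The payload below is invented for illustration; the field names are assumptions, not a real protocol.

```python
# Sketch of why off-device processing raises the stakes: the request an
# agent sends to a cloud model must carry the user's sensitive context.
# All field names and values here are invented for illustration.
import json

payload = {
    "task": "book a flight to Austin on Friday",
    "context": {
        "calendar": ["SXSW panel, Fri 14:00"],  # pulled from the calendar
        "payment_token": "tok_redacted",        # reference to stored card
        "recent_messages": ["Can you land by noon?"],
    },
}

# Even when sent over TLS, this data is decrypted and processed on the
# provider's servers and may pass between several internal services.
print(json.dumps(payload, indent=2))
```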

In related developments, two prominent experts, Bruce Schneier, a lecturer in public policy at the Harvard Kennedy School and a fellow at the Berkman Klein Center, and Andrew Ng, an adjunct professor at Stanford University who serves on Amazon’s board and co-founded Google Brain, have warned of unprecedented threats posed by AI technology.
Schneier stated that AI currently lacks the common sense humans possess, adding that if AI is applied to critical decision-making processes, it could produce more systematic, and potentially more catastrophic, mistakes than humans would.
Ng expressed concern about AI’s capacity for biased decisions and its potential for socially harmful uses, warning that these could cause wide-ranging problems, including social and economic disruption.