
A recent analysis by OpenAI of ChatGPT misuse cases detected and blocked over the past three months revealed that four of the ten incidents involved accounts linked to China.
Accounts suspected to be associated with North Korean IT personnel were found to have created fake resumes using ChatGPT in an attempt to secure jobs with U.S. and European companies.
On Wednesday, according to IT industry sources and international media reports, OpenAI blocked accounts believed to be connected to China, North Korea, and Russia that were engaged in information manipulation and cyberattack activities using ChatGPT.
OpenAI released its report, “Disrupting malicious uses of AI: June 2025,” detailing 10 cases of malicious AI use. The report warns of various cyber threats, including social engineering, cyberespionage, deceptive corporate infiltration, and covert influence operations.

The report indicated that groups suspected of being linked to North Korea used ChatGPT to generate fake resumes, profiles, and cover letters. They actively employed the AI tool to build personas on business-focused social media platforms, posing as legitimate job seekers.
When companies requested video interviews, these individuals put forward the faces of account front men, or pushed for voice calls and remote interviews instead, citing technical issues such as connection failures. They also used ChatGPT to prepare answers to interview questions.
Accounts believed to be connected to China generated comments in English, Chinese, and Urdu on platforms such as TikTok, X (formerly Twitter), Reddit, and Facebook.
OpenAI named these operations “Uncle Spam” and “Sneer Review.” The accounts used ChatGPT to write comments both for and against the closure of the U.S. Agency for International Development (USAID), to amplify sensationalized attacks on critics of the Chinese Communist Party in Taiwan, and to draft internal reports assessing the effectiveness of these propaganda campaigns.

Russian-language accounts repeatedly used ChatGPT as a coding assistant to develop and distribute malicious Windows code. Other accounts generated AI-written content criticizing the U.S. and NATO while supporting the Alternative für Deutschland (AfD) party, and disseminated it on Telegram and X in an attempt to interfere in this year’s German elections.
In its report, OpenAI stated that the primary goal of detecting and blocking malicious activities is to prevent authoritarian governments from using AI tools to consolidate power or control citizens.
The report stated that AI itself has served as a force multiplier for OpenAI’s investigative teams, helping them detect and block malicious activities, including social engineering attacks, cyberespionage, deceptive hiring schemes, covert influence operations, and scams. It emphasized that the company’s mission is to build democratic AI, ensuring that AGI (Artificial General Intelligence) benefits all of humanity.