Thursday, July 10, 2025

North Korea-Japan Summit: U.S. Watches Closely as Diplomatic Drama Unfolds

The U.S. government reaffirmed its cautious stance on Kim Yo Jong's message about improving relations between North Korea and Japan...

Eli Lilly’s Oral GLP-1 Shows 7.9% Weight Loss in Type 2 Diabetes Trial

Eli Lilly's orforglipron shows promise in Phase 3 trials for diabetes and obesity, offering an oral alternative to injectables.

North Korea Decries U.S.-ROK Drills as ‘Malicious War Rehearsals’ After Pocheon Mishap

North Korea exploits a fighter jet mishap in Pocheon to criticize U.S.-ROK military drills, warning of potential conflict escalation.

OpenAI Blocks China, North Korea Accounts Misusing ChatGPT

A recent analysis by OpenAI of ChatGPT misuse cases detected and blocked over the past three months revealed that 4 out of 10 incidents involved accounts linked to China.

Accounts suspected to be associated with North Korean IT personnel were found to have created fake resumes using ChatGPT in an attempt to secure jobs with U.S. and European companies.

According to IT industry sources and international media reports on Wednesday, OpenAI blocked accounts believed to be connected to China, North Korea, and Russia that used ChatGPT for information manipulation and cyberattack activities.

OpenAI released its report, “Disrupting malicious uses of AI: June 2025,” detailing 10 cases of malicious AI use. The report warns of a range of cyber threats, including social engineering, cyberespionage, deceptive corporate infiltration, and covert influence operations.

Disrupting Malicious Uses of AI: June 2025 Report

The report indicated that groups suspected of being linked to North Korea used ChatGPT to generate fake resumes, profiles, and cover letters. They also used the tool to build profiles disguising themselves as professionals on business social media platforms.

When companies requested video interviews, these individuals put forward the faces of their account representatives or proposed voice calls and remote interviews instead, citing technical problems such as connection failures. They also used ChatGPT to answer interview questions.

Accounts believed to be connected to China generated comments in English, Chinese, and Urdu on platforms such as TikTok, X (formerly Twitter), Reddit, and Facebook.

OpenAI named these operations “Uncle Spam” and “Sneer Review.” The accounts used ChatGPT to write comments both for and against the closure of the U.S. Agency for International Development (USAID), to sensationalize criticism of the Chinese Communist Party in Taiwan, and to draft reports on the effectiveness of the propaganda campaigns.

Disrupting Malicious Uses of AI: June 2025 Report

Russian-language accounts repeatedly generated and distributed malicious Windows code, using ChatGPT as a coding assistant. Other accounts produced AI-generated content criticizing the U.S. and NATO while supporting the Alternative für Deutschland (AfD) party, and disseminated it on Telegram and X in an attempt to interfere in this year’s German elections.

In its report, OpenAI stated that the primary goal of detecting and blocking malicious activities is to prevent authoritarian governments from using AI tools to consolidate power or control citizens.

The report stated that OpenAI uses AI as a force multiplier for its investigative teams, helping it detect and block malicious activities, including social engineering attacks, cyberespionage, deceptive hiring schemes, covert influence operations, and scams. It emphasized that OpenAI’s mission is to build democratic AI and to ensure that AGI (Artificial General Intelligence) benefits all of humanity.
