Tuesday, March 17, 2026

North Korea Threatens Fierce Response as U.S., South Korea, Japan Launch Joint Drills

North Korea reacts aggressively to the U.S.-South Korea-Japan military exercises, threatening to enhance its nuclear capabilities.

31 Years After His Death, Kim Il Sung Still Central to North Korea’s Identity

Kim Jong-un commemorated the 31st anniversary of Kim Il-sung's death by visiting the Kumsusan Palace of the Sun.

US Parents File Lawsuit Against OpenAI, Prompting ChatGPT to Add Teen Safety Features

OpenAI plans a youth version of ChatGPT with parental controls to protect minors, following concerns over AI's impact on young users.

North Korea’s Kimsuky Hackers Use AI Deepfakes to Target Military with Sneaky Phishing Attacks

A phishing email (left) impersonating a South Korean military agency, believed to be from the Kimsuky group, and the deepfake detection result of a fake military personnel ID card design attached / Provided by Genians Security Center

A North Korean hacking group known as Kimsuky, operating under the Reconnaissance General Bureau, has been caught using artificial intelligence (AI)-generated deepfakes to impersonate South Korean military agencies in cyberattacks. This sophisticated spear-phishing campaign aims to steal sensitive information from targeted organizations.

On Monday, the Genians Security Center (GSC) reported that a spear-phishing attack, attributed to the Kimsuky group, took place in July of this year.

The attackers leveraged OpenAI’s ChatGPT to create convincing fake military personnel identification (ID) images. These were then used in phishing emails disguised as requests for image review.

While the report didn’t disclose specific targets, it noted that the sender’s address was carefully crafted to mimic an official military agency domain.

Analysis of the attached ID image’s metadata using the Truthscan Deepfake-detector service revealed a 98% probability of it being a deepfake.
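Metadata inspection of this kind can be sketched in plain Python. The snippet below is only an illustration, not the method Genians or TruthScan used: real deepfake detectors analyze pixel-level artifacts with trained models, while this sketch merely parses a PNG's tEXt metadata chunks and flags generator names that some AI tools embed. The keyword list is an assumption for demonstration.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Parse tEXt chunks from PNG bytes into a keyword -> value dict."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    out, pos = {}, 8
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            body = data[pos + 8:pos + 8 + length]
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # skip length + type + data + CRC
    return out

# Hypothetical generator names to flag; a weak signal, easily stripped.
SUSPICIOUS = ("dall-e", "openai", "stable diffusion", "midjourney")

def looks_ai_generated(meta: dict) -> bool:
    """Return True if any metadata field mentions a known AI generator."""
    joined = " ".join(f"{k} {v}" for k, v in meta.items()).lower()
    return any(tag in joined for tag in SUSPICIOUS)
```

Because metadata is trivial to remove or forge, checks like this only complement, never replace, model-based detection of the kind the report describes.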

Since forging military IDs is illegal, ChatGPT typically blocks requests to create such documents. However, the report explains that hackers can bypass AI safety protocols using advanced prompt engineering and persona manipulation techniques.

This involves tricking the AI into believing it’s creating virtual designs for legitimate samples or concepts.

The attacker’s compressed file, labeled Public_Service_ID_Draft(***).zip, contained a malicious shortcut file (with a .lnk extension). When the accompanying LhUdPC3G.bat file is executed, it triggers a series of malicious activities.
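A mail gateway or analyst could flag such an archive with a simple scan of its member names. The following is a minimal, hypothetical sketch using only Python's standard library, not the detection Genians performed; the extension list reflects file types commonly abused in phishing and is an assumption.

```python
import io
import zipfile

# Hypothetical list of extensions commonly abused in phishing attachments.
RISKY_EXTENSIONS = (".lnk", ".bat", ".cmd", ".scr", ".vbs", ".js")

def risky_members(zip_bytes: bytes) -> list:
    """Return names of archive members whose extensions look dangerous."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(RISKY_EXTENSIONS)]
```

An archive like the one described, containing a shortcut (.lnk) and a batch (.bat) file alongside an innocuous-looking image, would be flagged by such a scan even though its filename suggests a harmless document draft.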

This file extracts and runs obfuscated code using environment variables, establishing a connection between the compromised device and the command and control (C2) server. Scripts from the C2 server can then identify additional targets, exfiltrate data, or install remote access tools.

The same C2 infrastructure was previously used in June for the ClickFix Popup phishing campaign, which primarily targeted individuals with North Korean connections.

The report emphasizes that these AI-bypassing techniques are not particularly complex, warning that more sophisticated attacks could be launched using work-related topics or enticing bait.
