Tuesday, March 17, 2026


North Korea's Kimsuky Hackers Use AI Deepfakes to Target Military with Sneaky Phishing Attacks

A phishing email (left) impersonating a South Korean military agency, believed to be from the Kimsuky group, and the deepfake detection result of a fake military personnel ID card design attached / Provided by Genians Security Center

A North Korean hacking group known as Kimsuky, operating under the Reconnaissance General Bureau, has been caught using artificial intelligence (AI)-generated deepfakes to impersonate South Korean military agencies in cyberattacks. This sophisticated spear-phishing campaign aims to steal sensitive information from targeted organizations.

On Monday, the Genians Security Center (GSC) reported that a spear-phishing attack, attributed to the Kimsuky group, took place in July of this year.

The attackers leveraged OpenAI’s ChatGPT to create convincing fake military personnel identification (ID) images. These were then used in phishing emails disguised as requests for image review.

While the report didn’t disclose specific targets, it noted that the sender’s address was carefully crafted to mimic an official military agency domain.
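The report doesn't describe how defenders might catch such spoofing, but the general idea of flagging a lookalike sender domain can be sketched in a few lines of Python. The trusted domain and the similarity threshold below are illustrative assumptions, not details taken from the campaign:

```python
# Hypothetical lookalike-domain check: flags sender domains that closely
# resemble a trusted domain but are not an exact match.
import difflib

# Example placeholder for an official domain; not from the report.
TRUSTED_DOMAINS = {"mnd.go.kr"}

def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """Return True if the domain is suspiciously similar to, but not
    identical to, a trusted domain."""
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: treated as legitimate
    for trusted in TRUSTED_DOMAINS:
        ratio = difflib.SequenceMatcher(None, sender_domain, trusted).ratio()
        if ratio >= threshold:
            return True  # close but not exact: likely spoofed
    return False
```

Real mail-filtering systems combine checks like this with SPF/DKIM validation; a pure string-similarity test is only a first-pass heuristic.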

Analysis of the attached ID image's metadata with the Truthscan deepfake-detection service indicated a 98% probability that the image was a deepfake.

Since forging military IDs is illegal, ChatGPT typically blocks requests to create such documents. However, the report explains that hackers can bypass AI safety protocols using advanced prompt engineering and persona manipulation techniques.

This involves tricking the AI into believing it’s creating virtual designs for legitimate samples or concepts.

The attacker’s compressed file, labeled Public_Service_ID_Draft(***).zip, contained a malicious shortcut file (with a .lnk extension). When the accompanying LhUdPC3G.bat file is executed, it triggers a series of malicious activities.
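The report doesn't include any triage tooling, but a minimal first-pass check on an archive like this one, flagging shortcut and script extensions of the kind that carried the payload here, might look like the following sketch. The extension list is an assumption, not taken from the report:

```python
# Illustrative triage check (not from the report): list archive members
# whose final extension is a shortcut/script type commonly abused in
# phishing lures, such as the .lnk file described above.
import zipfile

SUSPICIOUS_EXTENSIONS = (".lnk", ".bat", ".cmd", ".vbs", ".js")

def flag_suspicious_members(zip_path: str) -> list[str]:
    """Return names of archive members with shortcut/script extensions,
    including double-extension tricks like 'photo.jpg.lnk'."""
    with zipfile.ZipFile(zip_path) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]
```

Checking only the final extension is deliberate: double extensions such as `photo.jpg.lnk` still end in `.lnk`, so they are caught by the same test.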

This file extracts and runs obfuscated code using environment variables, establishing a connection between the compromised device and the command and control (C2) server. Scripts from the C2 server can then identify additional targets, exfiltrate data, or install remote access tools.
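The report doesn't specify the obfuscation scheme, but one common batch-file technique is rebuilding a command from environment-variable substrings using cmd.exe's `%VAR:~start,len%` expansion. The sketch below shows how an analyst might resolve such expressions statically; the variable name, offsets, and hidden command are invented for illustration:

```python
# Sketch of resolving batch-style substring expansion (%VAR:~start,len%)
# the way cmd.exe would, so an analyst can read the reconstructed command
# without executing the script. Handles only non-negative offsets.
import re

def expand_substrings(script: str, env: dict[str, str]) -> str:
    """Replace each %VAR:~start,len% with the matching slice of env[VAR]."""
    def repl(m: re.Match) -> str:
        value = env[m.group(1)]
        start, length = int(m.group(2)), int(m.group(3))
        return value[start:start + length]
    return re.sub(r"%(\w+):~(\d+),(\d+)%", repl, script)

# Invented example: the visible script never contains "powershell" as a
# literal string, yet the expansion reveals it.
env = {"SRC": "xpowershellz"}
obfuscated = "%SRC:~1,10% -NoProfile"
print(expand_substrings(obfuscated, env))  # prints: powershell -NoProfile
```

Because the payload string never appears verbatim in the script, simple keyword scans miss it; resolving the expansions first is what makes static review useful.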

The same C2 infrastructure was previously used in June for the ClickFix Popup phishing campaign, which primarily targeted individuals with North Korean connections.

The report emphasizes that these AI-bypassing techniques are not particularly complex, warning that more sophisticated attacks could be launched using work-related topics or enticing bait.
