Tuesday, March 17, 2026

Is Korea Facing Higher Tariffs? Insights from U.S. Trade Talks on Non-Tariff Barriers

U.S. Trade Representative warns of potential tariff increases on South Korea unless non-tariff barriers improve, says Foreign Minister Cho Hyun.

Mystery Trip: North Korea’s Safety Officials Fly to Russia Amid Speculation

A North Korean delegation led by Bang Du-seop has departed Pyongyang for a visit to Russia, as reported by the Workers' Party newspaper.

Is the Mono Diet a Quick Fix or a Health Risk? Experts Weigh In

The mono diet trend on YouTube claims weight loss by eating one food daily, but experts warn it's unsustainable and potentially harmful.

North Korea’s Kimsuky Hackers Use AI Deepfakes to Target Military with Sneaky Phishing Attacks

A phishing email (left) impersonating a South Korean military agency, believed to be from the Kimsuky group, and the deepfake detection result for the fake military personnel ID card design attached to it / Provided by Genians Security Center

A North Korean hacking group known as Kimsuky, operating under the Reconnaissance General Bureau, has been caught using artificial intelligence (AI)-generated deepfakes to impersonate South Korean military agencies in cyberattacks. This sophisticated spear-phishing campaign aims to steal sensitive information from targeted organizations.

On Monday, the Genians Security Center (GSC) reported that a spear-phishing attack, attributed to the Kimsuky group, took place in July of this year.

The attackers leveraged OpenAI’s ChatGPT to create convincing fake military personnel identification (ID) images. These were then used in phishing emails disguised as requests for image review.

While the report didn’t disclose specific targets, it noted that the sender’s address was carefully crafted to mimic an official military agency domain.

Analysis of the attached ID image’s metadata using the Truthscan Deepfake-detector service revealed a 98% probability of it being a deepfake.
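As a rough illustration of what metadata-based screening looks like (this is our own minimal sketch, not the Truthscan service the report describes, and real deepfake detectors rely on trained models rather than string matching), a first-pass triage tool can simply scan an image file's raw bytes for provenance markers that some AI generators embed, such as C2PA manifests or tool names:

```python
# Crude, illustrative heuristic: look for provenance markers that AI image
# generators sometimes embed in a file (C2PA manifests, generator names).
# A hit is only a hint that the image was machine-generated; absence of a
# marker proves nothing, since attackers can strip metadata.
MARKERS = [b"c2pa", b"C2PA", b"OpenAI", b"DALL-E"]

def find_provenance_markers(path: str) -> list[str]:
    """Return any known generator/provenance markers found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in MARKERS if m in data]
```

A dedicated detection service combines signals like these with model-based forensic analysis to produce a probability score such as the 98% figure cited in the report.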

Since forging military IDs is illegal, ChatGPT typically blocks requests to create such documents. However, the report explains that hackers can bypass AI safety protocols using advanced prompt engineering and persona manipulation techniques.

This involves tricking the AI into believing it’s creating virtual designs for legitimate samples or concepts.

The attacker’s compressed file, labeled Public_Service_ID_Draft(***).zip, contained a malicious shortcut file (with a .lnk extension). When the accompanying LhUdPC3G.bat file is executed, it triggers a series of malicious activities.
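From a defender's perspective (a minimal sketch of our own, not something from the report), the payload structure described above suggests an obvious mail-gateway check: flag any archive attachment that contains Windows shortcut or script files of the kinds used in this attack.

```python
import zipfile

# Defensive sketch: flag archive members whose extensions match the
# payload types described in the report (.lnk shortcuts, .bat scripts)
# plus a couple of common relatives. The extension list is an assumption
# for illustration, not an exhaustive blocklist.
SUSPICIOUS_EXTENSIONS = (".lnk", ".bat", ".cmd", ".vbs")

def suspicious_members(zip_path: str) -> list[str]:
    """List archive members whose extension suggests an executable lure."""
    with zipfile.ZipFile(zip_path) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]
```

A gateway that quarantined archives on a non-empty result here would have caught this particular lure, though determined attackers can of course switch to other container formats or extensions.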

This file extracts and runs obfuscated code using environment variables, establishing a connection between the compromised device and the command and control (C2) server. Scripts from the C2 server can then identify additional targets, exfiltrate data, or install remote access tools.

The same C2 infrastructure was previously used in June for the ClickFix Popup phishing campaign, which primarily targeted individuals with North Korean connections.

The report emphasizes that these AI-bypassing techniques are not particularly complex, warning that more sophisticated attacks could be launched using work-related topics or enticing bait.
