- North Korean hackers used ChatGPT to generate a fake military ID for spear-phishing attacks against South Korean defense institutions
- Kimsuky, a well-known threat actor, was behind the attack and has previously targeted government, academic, and nuclear-policy entities worldwide
- Jailbreaking the tools can bypass their safeguards, allowing the creation of illegal content such as deepfake IDs despite the built-in restrictions
North Korean hackers managed to deceive ChatGPT into creating a fake military identification card, which was later used in phishing attacks against institutions related to South Korea's defense sector.
The South Korean security firm Genians Security Center (GSC) reported the news, having obtained a copy of the ID and analyzed its origin.
According to Genians, the group behind the fake ID card is Kimsuky: a well-known and infamous threat actor responsible for high-profile attacks, such as those against Korea Hydro & Nuclear Power Co., the UN, and various think tanks, academic policy institutes, and other institutions in South Korea, Japan, the United States, and elsewhere.
Tricking ChatGPT with a “template” request
In general, OpenAI and other companies building generative AI solutions have established strict guardrails to prevent their products from generating malicious content. As such, malware code, phishing emails, bomb-making instructions, deepfakes, copyrighted material, and, obviously, identity documents are all off limits.
However, there are ways to trick the tools into returning this content, a practice generally known as “jailbreaking” language models. In this case, Genians says the headshot used in the ID was publicly available, and the criminals likely requested a “sample design” or a “template” to get ChatGPT to return the ID image.
“Since military government employee IDs are legally protected identification documents, producing copies in identical or similar form is illegal. As a result, when asked to generate such an ID copy, ChatGPT returns a refusal,” Genians said. “However, the model’s response can vary depending on the persona or role configuration.”
“The deepfake image used in this attack fell into this category. Because creating forged IDs with AI services is technically simple, additional caution is required.”
The researchers also noted that the victim was an “institution related to South Korea’s defense,” but declined to name it.
Via The Register