- MalTerminal uses GPT-4 to generate ransomware or reverse shell code at runtime
- LLM-enabled malware evades detection by generating its malicious logic only at execution time
- The researchers found no evidence of deployment; it is likely a proof of concept or testing tool
Cybersecurity researchers at SentinelOne have discovered a new piece of malware, dubbed MalTerminal, that uses OpenAI's GPT-4 to generate malicious code in real time.
The researchers say MalTerminal represents a significant shift in how threat actors create and deploy malicious code, noting: "The incorporation of LLMs into malware marks a qualitative change in adversary tradecraft."
"With the ability to generate malicious logic and commands at runtime, LLM-enabled malware introduces new challenges for defenders."
A new malware category
The discovery means the cybersecurity community has an entirely new malware category to contend with: LLM-enabled malware, or malware that embeds large language models directly into its functionality.
In essence, MalTerminal is a malware generator. When an operator runs it, it asks whether they want to create a ransomware encryptor or a reverse shell. The corresponding prompt is then sent to GPT-4, which responds with Python code tailored to the chosen option.
SentinelOne said the malicious code does not exist in the malware's file until execution time; instead, it is generated dynamically. This makes detection much harder for traditional security tools, since there is no static malicious code to scan.
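To make the pattern concrete, here is a minimal, deliberately harmless sketch of runtime code generation against the current openai Python SDK. This is an illustration of the general technique, not MalTerminal's actual code: the model name and prompt are assumptions, the prompt requests nothing malicious, and the sample reportedly called an API endpoint that has since been retired.

```python
# Minimal sketch of the runtime code-generation pattern described above,
# using the modern openai SDK (an assumption; MalTerminal itself used a
# now-retired chat endpoint). The prompt here is deliberately harmless:
# the point is that the file on disk contains only a prompt string and
# glue code, never the generated logic itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Write a Python function that prints the current UTC time."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": PROMPT}],
)
generated_source = response.choices[0].message.content

# The generated code exists only in memory at this point, which is why
# signature-based scanners inspecting the file have nothing to match on.
print(generated_source)
```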
The researchers identified the GPT-4 integration after discovering Python scripts and a Windows executable containing hard-coded API keys and prompt structures.
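Those two artifacts, embedded API keys and prompt-like strings, suggest a simple hunting approach for defenders. The sketch below is a rough illustration of that idea; the key regex and keyword list are assumptions for demonstration, not SentinelOne's actual rules.

```python
# Rough illustration of hunting for LLM-enabled samples: sweep files for
# hard-coded OpenAI-style API keys alongside prompt-like strings.
import re
import sys

# Classic OpenAI secret-key shape; real hunting rules would cover more variants.
API_KEY_RE = re.compile(rb"sk-[A-Za-z0-9]{20,}")
PROMPT_HINTS = [b"ransomware", b"reverse shell", b"You are a"]

def scan(path: str) -> None:
    data = open(path, "rb").read()
    keys = API_KEY_RE.findall(data)
    hints = [h for h in PROMPT_HINTS if h in data]
    # Flag only when both indicators co-occur, to cut down on noise.
    if keys and hints:
        print(f"{path}: possible LLM-enabled sample "
              f"({len(keys)} key-like string(s), hints: {hints})")

if __name__ == "__main__":
    for target in sys.argv[1:]:
        scan(target)
```

In practice, rules like this would be combined with other indicators, such as references to retired API endpoints like the one that helped the researchers date MalTerminal.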
Moreover, because the API endpoint the malware used was retired at the end of 2023, SentinelOne concluded that MalTerminal must predate that cutoff, making it the earliest known example of LLM-enabled malware.
Fortunately, there is no evidence that the malware has been deployed in the wild, so it may have been just a proof of concept or a red team tool. SentinelOne nevertheless believes MalTerminal is a sign of things to come, and urged the cybersecurity community to prepare accordingly:
"Although the use of LLM-enabled malware remains limited and largely experimental, this early stage of development gives defenders an opportunity to learn from attackers' mistakes and adjust their approaches accordingly," the report adds.
"We expect adversaries to adapt their strategies, and we hope that further research can build on the work we have presented here."
Via The Hacker News