Hackers are using GPT-4 to build malware: here's what we know

  • MalTerminal uses GPT-4 to generate ransomware or reverse shell code at runtime
  • LLM-enabled malware evades detection by creating its malicious logic only during execution
  • The researchers found no evidence of deployment; it is likely a proof of concept or testing tool

Cybersecurity researchers at SentinelOne have discovered a new piece of malware, dubbed MalTerminal, that uses OpenAI's GPT-4 to generate malicious code in real time.
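To make the evasion claim concrete, here is a minimal, hypothetical sketch of the runtime code-generation pattern the researchers describe: the binary ships only a prompt and an API key, and the logic it runs is fetched from the model while executing. The endpoint, model name, and function name below are illustrative assumptions based on OpenAI's public chat completions API, not MalTerminal's actual implementation.

```python
import json
import urllib.request

API_KEY = "sk-..."  # an embedded key like this, plus the prompt text,
                    # is among the few static artifacts such a sample leaves

def generate_at_runtime(prompt: str) -> str:
    """Request code from the model at runtime; no payload ships in the binary."""
    request = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",  # assumed endpoint
        data=json.dumps({
            "model": "gpt-4",
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
    # The generated code exists only in memory from this point on, which is
    # why signature-based scanners of the file on disk have nothing to match.
    return reply["choices"][0]["message"]["content"]
```

Because the malicious logic is produced fresh on each run, the embedded prompt and API key end up being the main static traces defenders could hunt for.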

The researchers argue that MalTerminal represents a significant change in how threat actors create and deploy malicious code, noting: “The incorporation of LLMs into malware marks a qualitative shift in adversary tradecraft.”
