Malicious LLMs allow even unskilled hackers to create dangerous new malware



  • Hackers use untethered LLMs like WormGPT 4 and KawaiiGPT for cybercrime
  • WormGPT 4 enables encryptors, exfiltration tools and ransom notes; KawaiiGPT creates phishing scripts
  • Both models have hundreds of subscribers on Telegram, lowering the barrier to entry for cybercrime

Most generative AI tools in use today operate under safety restrictions: they are not allowed, for example, to teach people how to make bombs or encourage self-harm, and they are barred from facilitating cybercrime.

While some hackers attempt to jailbreak mainstream tools, bypassing those guardrails with carefully crafted prompts, others simply build their own unrestricted large language models (LLMs) designed exclusively for cybercrime.


