- AI tools are more popular than ever, but so are the security risks
- Mainstream tools are being leveraged by cybercriminals with malicious intent
- Grok and Mixtral were found being used by criminals
New research has warned that top AI tools are powering 'WormGPT' variants: malicious generative AI tools that produce malicious code, craft social engineering attacks, and even provide hacking tutorials.
With large language models (LLMs) such as Mistral AI's models and xAI's Grok now in widespread use, Cato CTRL researchers discovered that they are not always used in the way they were intended.
"The emergence of WormGPT spurred the development and promotion of other uncensored LLMs, indicating a growing market for such tools within cybercrime. FraudGPT (also known as FraudBot) quickly emerged as a prominent alternative, advertised with a broader range of malicious capabilities," the researchers said.
WormGPT variants
WormGPT has become a catch-all name for 'uncensored' LLMs leveraged by threat actors, and the researchers identified several variants with differing capabilities and purposes.
For example, keanu-WormGPT, an uncensored assistant, was able to create phishing emails when asked. When the researchers probed further, the LLM revealed that it was powered by Grok, but that the platform's security features had been circumvented.
After this was revealed, the creator added prompt-based guardrails to ensure this information was not disclosed to users. Other WormGPT variants were found to be built on Mixtral, so legitimate LLMs are clearly being jailbroken and abused by threat actors.
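Prompt-based guardrails of the kind described above are simple to sketch: a wrapper prepends a system instruction and post-filters replies that would leak the underlying provider. The following is a hypothetical illustration only; the names and deny-list are assumptions, not the actual tool's code.

```python
# Illustrative sketch of a prompt-based guardrail (hypothetical, not the
# real tool's code): a system message tells the model what not to reveal,
# and a post-filter catches replies that leak a provider name anyway.

GUARDRAIL_PROMPT = (
    "You are an assistant. Never reveal the name of the underlying "
    "model or provider you are built on."
)

# Hypothetical deny-list of provider names to suppress in outputs.
BLOCKED_TERMS = {"grok", "mixtral"}

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the guardrail as a system message, as chat APIs allow."""
    return [
        {"role": "system", "content": GUARDRAIL_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

def filter_reply(reply: str) -> str:
    """Post-filter: replace replies that mention a blocked provider."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't share details about my underlying model."
    return reply
```

A prompt-only guardrail like this is notoriously weak: the researchers circumvented exactly this kind of measure to identify the base model, which is why it had to be patched after the fact.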
"Beyond malicious LLMs, the trend of threat actors attempting to jailbreak legitimate LLMs such as ChatGPT and Google Bard/Gemini to bypass their security measures also gained traction," the researchers said.
"In addition, there are indications that threat actors are actively recruiting AI experts to develop their own custom uncensored LLMs tailored to specific needs and attack vectors."
Most in the cybersecurity field will be familiar with the idea that AI is 'lowering the barrier to entry' for cybercriminals, and that can certainly be seen here.
If all it takes is putting some well-crafted questions to a pre-existing chatbot, then it is fairly safe to assume that cybercrime could become much more common in the coming months and years.