- Google has published a new report detailing how criminals are abusing Gemini
- Threat actors from Iran, North Korea, Russia, and elsewhere are named
- Hackers are experimenting, but have not yet developed “novel capabilities”
Dozens of cybercriminal groups around the world are abusing Google's Gemini artificial intelligence (AI) platform in their attacks, the company has admitted.
In an in-depth analysis discussing who the threat actors are and what they are using the tool for, the Google Threat Intelligence Group highlighted how the platform has not yet been used to discover new attack methods, but is being used heavily to refine existing ones.
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities,” the team said in its analysis. “At present, they primarily use AI for research, troubleshooting code, and creating and localizing content.”
APT42 and many other threats
The biggest users of Gemini among cybercriminals are Iranian, Russian, Chinese, and North Korean groups, who use the platform for reconnaissance, vulnerability research, scripting and development, translation and explanation, and for gaining deeper system access and carrying out post-compromise actions.
In total, Google observed 57 groups, more than 20 of which were from China, and among the more than 10 Iranian threat actors using Gemini, one group stands out: APT42.
More than 30% of Iranian threat actors’ Gemini use was linked to APT42, Google said. “APT42’s Gemini activity reflected the group’s focus on crafting successful phishing campaigns. We observed the group using Gemini to conduct reconnaissance into individual policy and defense experts, as well as organizations of interest to the group.”
APT42 also used Gemini’s text generation and editing capabilities to craft phishing messages, particularly those aimed at defense organizations in the United States. “APT42 also utilized Gemini for translation, including localization, or tailoring content for a local audience. This includes content tailored to local culture and local language, such as asking for translations to be in fluent English.”
Ever since ChatGPT was first released, security researchers have been warning about its abuse by cybercriminals. Before GenAI, the best way to spot phishing attacks was to look for spelling and grammar mistakes and inconsistent writing. Now, with AI handling the writing and editing, that method barely works, and security professionals are turning to new approaches.
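To make that shift concrete, here is a minimal, purely illustrative sketch of the kind of spelling-based heuristic the article refers to. The word list, threshold, and function names are invented for this example and are not taken from Google's report; real filters are far more sophisticated, but the underlying weakness is the same.

```python
# Toy illustration: a naive phishing filter that scores a message by how many
# of its words are misspelled. AI-polished lures sail past checks like this.

# Invented mini-dictionary for demonstration only.
KNOWN_WORDS = {
    "your", "account", "has", "been", "suspended", "please", "verify",
    "the", "details", "to", "restore", "access", "immediately", "click",
}

def misspelling_score(message: str) -> float:
    """Return the fraction of words not found in the known-word list."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_like_phishing(message: str, threshold: float = 0.3) -> bool:
    """Flag a message when its misspelling rate exceeds the threshold."""
    return misspelling_score(message) > threshold

# A clumsily written lure trips the heuristic; a fluently written one does not.
print(looks_like_phishing("Yuor acount has ben suspnded, plese verfy detials"))          # True
print(looks_like_phishing("Your account has been suspended, please verify the details"))  # False
```

The second message is exactly the kind of output a model like Gemini produces on request, which is why defenders are moving toward signals other than writing quality, such as sender reputation and link analysis.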