- Fake AI video editor ads are targeting Facebook users
- Threat group UNC6032 has been identified distributing the malware
- The ads have reached more than 2 million users
The Google Mandiant Threat Defense team has identified a campaign, attributed to a threat group tracked as UNC6032, that "weaponized interest in AI tools," specifically tools that generate videos from user prompts.

Mandiant's experts identified thousands of fake "AI video generator" websites that actually distribute malware, leading to the deployment of payloads such as Python-based backdoors and several infostealers.
The campaign spoofs legitimate AI generator tools such as Canva Dream Lab, Luma AI, and Kling AI to deceive victims, and the ads have collectively reached "millions of users" on LinkedIn and Facebook, although Google suspects similar campaigns may also be targeting users on other platforms.

The group, UNC6032, is believed to have ties to Vietnam. EU ad transparency rules allowed researchers to determine that a sample of 120 malicious ads had a total reach of more than 2.3 million users, although this does not necessarily translate into that many victims.
"Although our research was limited, we discovered that these well-crafted fake websites represent a significant threat to both organizations and individual users," the researchers confirmed.

"These AI tools are no longer aimed only at graphic designers; anyone can be lured by a seemingly harmless ad. The temptation to try the latest AI tool can lead anyone to become a victim. We advise users to exercise caution when engaging with AI tools and to verify the legitimacy of the website's domain."
Be sure to thoroughly scrutinize any advertisement on social media, and manually search for any software offer in a search engine before downloading anything, so you can properly verify the source.

We also recommend reviewing the best malware removal tools to keep your devices safe.