- OpenAI has banned accounts using ChatGPT for malicious purposes
- Disinformation and surveillance campaigns were discovered
- Threat actors are increasingly using AI to cause harm
OpenAI has confirmed that it recently identified a set of accounts involved in malicious campaigns, and has banned the users responsible.
The banned accounts, involved in the ‘Peer Review’ and ‘Sponsored Discontent’ campaigns, probably originated in China, OpenAI said, and “appear to have used, or attempted to use, models built by OpenAI and another US AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles,” per its report, ‘Disrupting malicious uses of our models: an update, February 2025’.
AI has facilitated a rise in disinformation, and is a useful tool for threat actors seeking to disrupt elections and undermine democracy in unstable or politically divided nations, and state-sponsored campaigns have used the technology to their advantage.
Surveillance and disinformation
The ‘Peer Review’ campaign used ChatGPT to generate “detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services,” OpenAI confirmed.
As part of this surveillance campaign, the threat actors used the model to “edit and debug code and generate promotional materials” for the alleged AI-powered social media listening tools, although OpenAI could not identify any social media posts resulting from the campaign.
ChatGPT accounts participating in the ‘Sponsored Discontent’ campaign were used to generate English-language comments and Spanish-language news articles, consistent with ‘Spamouflage’ behavior, mostly using anti-American rhetoric, probably to stoke discontent in Latin America, including Ecuador.
This is not the first time Chinese state-sponsored actors have been identified using ‘Spamouflage’ tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting US voters with thousands of AI-generated images and videos, mostly low quality and containing false information.