- OpenAI says it has disrupted numerous malicious campaigns using ChatGPT
- These include job scams and influence campaigns
- Russia, China and Iran are using ChatGPT to translate and generate content
OpenAI has revealed that it has shut down a series of malicious campaigns that were abusing its AI offerings, including ChatGPT.
In a report entitled “Disrupting Malicious Uses of AI: June 2025”, OpenAI sets out how it dismantled or disrupted 10 operations, spanning employment scams, influence operations and spam campaigns, that used ChatGPT in the first months of 2025 alone.
Many of the campaigns were run by state-sponsored actors with links to China, Russia and Iran.
AI campaign disruptions
Four of the campaigns disrupted by OpenAI appear to have originated in China, focusing on social engineering, covert influence operations and cyber threats.
One campaign, dubbed “Sneer Review”, saw the Taiwanese board game “Reversed Front”, which depicts resistance against the Chinese Communist Party, flooded with highly critical Chinese-language comments.
The network behind the campaign then generated an article, published to a forum, claiming the game had received a widespread backlash, citing those same critical comments in an effort to discredit both the game and Taiwanese independence.
Another campaign, dubbed “Helgoland Bite”, saw Russian actors use ChatGPT to generate German-language text criticizing the United States and NATO, as well as content about Germany’s 2025 election.
Notably, the group also used ChatGPT to research activists and opposition bloggers, and to generate messages referencing coordinated social media posts and payments.
OpenAI has also banned numerous ChatGPT accounts linked to influence activity in an operation known as “Uncle Spam”.
In many cases, Chinese actors generated highly divisive content aimed at widening political divides in the United States, including creating social media accounts that posted arguments both for and against tariffs, as well as accounts imitating US veteran support pages.
OpenAI’s report is a key reminder that not everything online is posted by a real human being, and that the person you have picked an online fight with could be getting exactly what they want: engagement, outrage and division.