- OpenAI bans accounts linked to China and North Korea for malicious AI-assisted surveillance and phishing
- Chinese actors used ChatGPT to draft proposals for monitoring tools and behavioral profiling systems
- North Korean actors explored phishing, credential theft, and macOS malware development, rewording prompts to try to bypass safeguards
OpenAI has banned Chinese, North Korean, and other accounts that allegedly used ChatGPT to launch surveillance campaigns, develop phishing and malware techniques, and engage in other malicious practices.
In a new report, OpenAI said it observed people allegedly affiliated with Chinese government entities or state-linked organizations using its large language model (LLM) to help draft proposals for surveillance systems and profiling technologies.
These included tools to monitor individuals and analyze behavioral patterns.
Exploring phishing
“Some of the accounts we banned appeared to be attempting to use ChatGPT to develop large-scale monitoring tools: analyzing data sets, often collected from Western or Chinese social media platforms,” the report reads.
“These users typically asked ChatGPT to help them design such tools or generate promotional materials about them, but not to implement tracking.”
The prompts were phrased to avoid triggering security filters, often framed as academic or technical queries.
While the returned content did not directly enable surveillance, it was reportedly used to refine the documentation and planning of such systems.
North Korean actors, on the other hand, used ChatGPT to explore phishing techniques, credential theft, and macOS malware development.
OpenAI said it observed these accounts testing messages related to social engineering, password harvesting, and malicious code debugging, especially targeting Apple systems.
OpenAI said the model rejected direct requests for malicious code, but emphasized that threat actors still attempted to bypass safeguards by rephrasing messages or asking for general technical help.
Like any other tool, LLMs are used by both financially motivated and state-sponsored threat actors for all manner of malicious activity.
This misuse of AI is evolving, with threat actors increasingly integrating AI into existing workflows to improve their efficiency.
While developers like OpenAI work hard to minimize risk and ensure their products cannot be used in this way, much activity falls into a gray area between legitimate and malicious use. This gray-zone activity, the report suggests, requires nuanced detection strategies.
Via The Register