- OpenAI warns that future LLMs could aid zero-day exploit development or advanced cyberespionage.
- The company is investing in defensive tools, access controls and a tiered cybersecurity program.
- A new Frontier Risk Council will guide safeguards and responsible capability across all frontier models.
Future OpenAI Large Language Models (LLMs) could pose greater cybersecurity risks, since they could theoretically develop remote zero-day exploits against well-defended systems or significantly assist complex, stealthy cyberespionage campaigns.
This is according to OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly.”
While this may seem ominous, OpenAI actually sees it in a positive light and says that the advances also bring “significant benefits for cyber defense.”
Strengthening the defenses
To prepare for future models that could be abused in this way, OpenAI said it is “investing in strengthening models for defensive cybersecurity tasks and creating tools that allow defenders to more easily perform workflows such as auditing code and patching vulnerabilities.”
The best way to do this, according to the blog, is a combination of access controls, infrastructure hardening, egress controls, and monitoring.
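The post doesn’t spell out what these controls look like in practice. As a rough, hypothetical sketch (the hostnames and policy below are invented for illustration, not drawn from OpenAI), an egress control can be as simple as blocking and logging any network destination that isn’t on an approved allowlist:

```python
# Illustrative only: a toy egress-control check, not OpenAI's implementation.
# A sandboxed, model-driven tool may contact only approved hosts; anything
# else is blocked and logged, feeding the "monitoring" side of the combo.

from urllib.parse import urlparse

# Hypothetical allowlist of destinations the tool is permitted to reach.
ALLOWED_HOSTS = {"api.example-internal.com", "patch-feed.example.com"}

def egress_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the approved allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

if __name__ == "__main__":
    for url in ("https://patch-feed.example.com/cve/latest",
                "https://attacker.example.net/exfil"):
        verdict = "allow" if egress_allowed(url) else "block + log"
        print(f"{verdict:11} {url}")
```

Real deployments would pair a check like this with network-level enforcement and alerting, but the principle is the same: constrain what a capable model can reach, and watch what it tries to reach.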
Additionally, OpenAI announced that it will soon introduce a tiered program that gives users and customers working on cybersecurity tasks phased access to enhanced capabilities.
Finally, the Microsoft-backed AI giant said it plans to establish an advisory group called the Frontier Risk Council. The group will be made up of experienced cybersecurity experts and practitioners and, after initially focusing on cybersecurity, will expand its scope to other areas.
“Members will advise on the boundary between useful and responsible capability and potential misuse, and these learnings will directly inform our assessments and safeguards. We will share more on the council soon,” the blog reads.
OpenAI also said that cyber abuse could come “from any frontier model in the industry,” which is why it is part of the Frontier Model Forum, where it shares knowledge and best practices with industry partners.
“In this context, threat modeling helps mitigate risk by identifying how AI capabilities could be weaponized, where critical bottlenecks exist for different threat actors, and how frontier models could provide significant improvement,” the post continues.
Via PakGazette