OpenAI admits new models likely pose ‘high’ cybersecurity risk



  • OpenAI warns that future LLMs could aid zero-day exploit development or advanced cyber espionage
  • The company is investing in defensive tools, access controls and a tiered cybersecurity program
  • A new Frontier Risk Council will guide safeguards and responsible capability decisions across all frontier models

Future OpenAI large language models (LLMs) could pose greater cybersecurity risks, as they could theoretically develop working zero-day exploits against well-defended systems, or significantly assist with complex and stealthy cyberespionage campaigns.

This is according to OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly.”


