Palo Alto Warns Rapid AI Adoption Expands Cloud Attack Surfaces, Creating Unprecedented Security Risks
- Excessive permissions and misconfigurations drive incidents; 80% of cloud security incidents were identity-related, not malware
- Non-human identities outnumber human ones and are poorly managed, creating exploitable entry points for adversaries.
Rapid enterprise adoption of cloud-native artificial intelligence (AI) tools and services is significantly expanding cloud attack surfaces and putting businesses at greater risk than ever.
This is according to the ‘State of Cloud Security Report,’ a new paper from researchers at cybersecurity company Palo Alto Networks.
According to the report, several key problems accompany AI adoption: the speed at which AI is deployed, the permissions it is granted, misconfigurations, and the rise of non-human identities.
Permissions, misconfigurations, and non-human identities
Palo Alto says organizations are deploying workloads faster than they can protect them, often without complete visibility into how tools access, process, or share sensitive data.
In fact, the report states that more than 70% of organizations are now using AI-powered cloud services in production, a strong year-over-year increase. The speed at which these tools are being deployed is considered a major factor behind an “unprecedented increase” in cloud security risk.
Then there is the problem of excessive permissions. AI services often require extensive access to cloud resources, APIs, and data stores, and the report shows that many organizations grant overly permissive identities to AI-powered workloads. According to the research, 80% of cloud security incidents last year were identity-related, not caused by malware.
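To make the permissions problem concrete, here is a minimal sketch (an illustration, not code from the report) contrasting a wildcard AWS-style IAM policy of the kind often attached to AI workloads with a least-privilege alternative. The bucket name and actions are hypothetical.

```python
# Illustrative only: two AWS-style IAM policy documents for an AI workload.
# The bucket name is a hypothetical example, not taken from the report.

# Overly permissive: full access to every S3 bucket and object in the account.
# Identities like this are what the report flags as "overly permissive".
over_permissive_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",   # every S3 action, including deletes and ACL changes
        "Resource": "*",    # every bucket and object in the account
    }],
}

# Least privilege: read-only access to the one bucket the workload needs.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-training-data",    # hypothetical bucket
            "arn:aws:s3:::example-training-data/*",
        ],
    }],
}
```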
Palo Alto also flagged misconfigurations as a growing problem, especially in environments that support AI development. AI storage buckets, databases, and training pipelines are often left exposed, a weakness threat actors increasingly exploit directly rather than bothering to deploy malware.
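As a rough illustration of the kind of exposure check involved (again, not from the report), the sketch below uses boto3 to flag S3 buckets with no public access block configured; it assumes AWS credentials are available in the environment.

```python
# A minimal sketch: flag S3 buckets whose public access block is missing
# or only partially enabled. Assumes AWS credentials in the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        # A bucket is only fully locked down if all four settings are enabled.
        if not all(config.values()):
            print(f"{name}: public access block partially enabled: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            # No block at all: the classic "exposed AI storage bucket" scenario.
            print(f"{name}: no public access block configured")
        else:
            raise
```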
Finally, the research points to an increase in non-human identities such as service accounts, API keys, and automation tokens used by AI systems. In many cloud environments, there are now more non-human identities than human ones, and many are poorly monitored, rarely rotated, and difficult to attribute.
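A minimal sketch of how stale non-human credentials can be surfaced, assuming an AWS environment and boto3 (an illustration, not the report's method): it lists IAM users' access keys and flags any active key older than 90 days, a commonly used rotation cutoff that is itself an assumption here.

```python
# A minimal sketch: flag active IAM access keys that have not been rotated
# in 90 days. Assumes AWS credentials in the environment; the 90-day cutoff
# is an assumed policy, not a figure from the report.
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                print(f"{user['UserName']}: key {key['AccessKeyId']} "
                      f"unrotated since {key['CreateDate']:%Y-%m-%d}")
```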
“The rise of large language models (LLMs) and agentic AI pushes the attack surface beyond traditional infrastructure,” the report concludes.
“Adversaries target LLM tools and systems, the underlying infrastructure that supports model development, the actions these systems take, and, crucially, their memory stores. Each represents a potential point of compromise.”