- GenAI SaaS Usage Tripled and Message Volumes Increased Sixfold in One Year
- Nearly half of users rely on unauthorized ‘shadow AI’, creating significant visibility gaps
- Sensitive data leaks doubled, with insider threats linked to the use of personal cloud applications
Generative Artificial Intelligence (GenAI) can be great for productivity, but it comes with some serious security and compliance complications. This is according to a new report from Netskope, which says that as GenAI use in the office skyrockets, so do incidents of policy violations.
In its Cloud and Threat Report: 2026, published earlier this week, Netskope said that GenAI software-as-a-service (SaaS) usage among enterprises is “increasing rapidly,” and that the number of people using tools like ChatGPT or Gemini tripled over the course of the year.
Users are also spending much more time with the tools: the number of messages sent to GenAI apps has increased sixfold in the last 12 months, from roughly 3,000 per month a year ago to more than 18,000 per month today.
Shadow AI
What’s more, the top 25% of organizations send more than 70,000 messages per month and the top 1% send more than 1.4 million messages per month.
But many of these tools and their use cases were never approved by the appropriate departments and executives. Nearly half (47%) of GenAI users rely on personal AI applications (so-called “shadow AI”) that give the organization no visibility into what data is shared or which tools are reading those files.
As a result, the number of incidents in which users send sensitive data to AI applications has doubled in the last year.
Now, the average organization is experiencing a staggering 223 incidents per month. Netskope also said that personal applications represent a “significant insider threat risk,” as 60% of insider threat incidents involved personal application instances in the cloud.
Regulated data, intellectual property, source code, and credentials are frequently sent to personal application instances in violation of organizational policies.
“Organizations will struggle to maintain data governance as sensitive information flows freely into unapproved AI ecosystems, leading to increased accidental data exposure and compliance risks,” the report concludes.
“Attackers, on the other hand, will exploit this fragmented environment, leveraging AI to perform hyper-efficient reconnaissance and design highly personalized attacks targeting proprietary models and training data.”