- 58-59% of workers admit to using shadow AI at work
- Data sets, employee names, and financials are shared with unapproved tools.
- Could IT teams meet workers where they are to ensure better compliance?
With AI tools now commonplace in many businesses, new research from BlackFog has found that while the majority (86%) of employees say they now use AI for work at least weekly, three-fifths (58%) admit to using unapproved AI or free, publicly available tools instead of company-provided ones, putting their business at risk.
Company-provided tools matter because they offer enterprise-grade security, governance, and privacy protections, but many workers complain that the AI they are given is not adequate for their needs.
More importantly, 63% believe it is acceptable to use AI without IT approval, and 60% agree that unapproved AI is worth the security risk if it helps them meet deadlines, suggesting a clear disconnect between company policy and how it is communicated to staff.
Shadow AI Abounds in Employee Workflows
Shadow AI “should raise red flags for security teams and highlight the need for greater monitoring and visibility into these security blind spots,” wrote Dr. Darren Williams, CEO of BlackFog.
This comes as 33% of workers admit to sharing research or data sets with unapproved AI, 27% have shared employee data such as names, payroll or performance, and 23% have shared financial or sales data.
But while it may fall to IT teams to double down on the rules and expectations around AI, they face an uphill battle: C-suite executives and senior leaders are more likely than middle management and junior staff to believe that speed trumps privacy and security.
And BlackFog isn’t the only company to reveal widespread use of shadow AI. Cybernews also found that 59% of workers use unapproved AI at work, and that an even higher 75% of those users have shared sensitive information with these unapproved tools.
Similarly, the report found that 57% of workers’ direct managers support the use of unapproved AI. “That creates a gray area where employees are encouraged to use AI, but companies lose oversight of how and where sensitive information is shared,” warned security researcher Mantas Sabeckis.
Looking ahead, there are two clear paths to curbing shadow AI. First, IT teams must reiterate the risks involved and guide users toward approved tools. Second, since the currently approved tools clearly fall short for many workers, IT teams must meet employees where they are and offer enterprise-grade versions of the applications they actually use.