- The EU AI Act requires explainability and accountability of AI
- Only 38% of workers can accurately identify who is responsible for AI failures at their business
- More than half (59%) aren't even sure how quickly they could shut down AI in a crisis
Despite the rapid adoption of AI, new research from ISACA suggests many businesses could be flying blind: more than half (59%) of UK businesses wouldn't even know how quickly they could stop AI during a crisis.
Only around one in five (21%) say they feel confident stopping an AI system within 30 minutes, highlighting major security gaps.
And the problem is not just shutting systems down: fewer than half (42%) say they could explain an AI failure to leaders or regulators.
Are companies blind to the risks of AI?
ISACA explained that the gaps are worrying not only at the operational and reputational level, but also at the legislative level: the EU AI Act requires explainability and accountability.
Part of the failure is down to unclear responsibility, with 20% of workers unsure who is accountable for AI failures. Poor visibility also contributes: one in three organizations does not require disclosure of AI use at work, which ISACA calls a blind spot nightmare.
The report explains that companies currently treat AI oversight as a technical problem, when it should instead be treated as a governance challenge for the entire organization. "It is not possible to truly close the gap with process changes alone," wrote chief global strategy officer Chris Dimitriadis. "Rather, it will require professionals who have the expertise to rigorously assess AI risk and integrate oversight across the entire lifecycle."
Looking ahead, companies are urged to assign responsibility at a more senior level and to implement better visibility and auditing. They must also build AI incident response into their strategies and fold it into their broader cybersecurity posture.
With only 38% of respondents identifying the board of directors or an executive as responsible in the event of an AI incident, it is clear that more needs to be done to disseminate information and processes throughout the workforce.