- A TrendAI report finds that 67% of companies are pressured to implement GenAI despite security concerns.
- Key risks include exposure of sensitive data, malicious messages, expanded attack surface, and abuse of autonomous code.
- Governance gaps: only 38% have comprehensive AI policies, 57% say AI evolves faster than they can secure it, and many lack visibility into their AI systems or mechanisms to stop them.
Companies are rushing to integrate Generative Artificial Intelligence (GenAI) into their processes and operations despite knowing the risks they are exposing themselves to. Worse still, many are unsure how to move forward and minimize those risks, which only compounds the problem.
A new report from TrendAI surveyed 3,700 business and IT decision-makers in 23 countries and found that the majority (67%) were being pressured to approve AI integration despite security concerns.
Roughly one in seven (15%) described those concerns as “extreme” but approved the deployments anyway.
Not for lack of awareness
The report outlined numerous AI-related risks that keep business leaders up at night. For two in five (40%), the biggest risk is AI agents accessing sensitive data, while more than a third (36%) worry that malicious prompts will compromise security.
AI agents are programs that allow AI to operate applications or even entire computers. Malicious prompts, shared through phishing emails, for example, could result in AI agents sending sensitive data to hacking groups, changing application settings, or even downloading malware.
For a third of respondents (33%), AI creates an increasing attack surface that criminals can exploit. The same percentage also fears the abuse of trusted AI status and the risks related to implementing autonomous code.
“Organizations don’t lack risk awareness, they lack the conditions to manage it. When implementation is driven by competitive pressure rather than governance maturity, it creates a situation where AI is embedded in critical systems without the controls necessary to manage it safely,” said Rachel Jin, platform and business director at TrendAI.
Management and governance are harder to achieve than they seem, at least where AI is concerned. For more than half of respondents (57%), AI is advancing faster than their ability to secure it: as soon as one system is locked down, new potential risks emerge, forcing defenders to reevaluate their position. What’s more, 55% reported only moderate confidence in their understanding of AI legal frameworks, and only 38% currently have comprehensive AI policies in place.
Regulation and compliance
Finally, two in five (41%) see unclear regulation and compliance standards as a barrier to progress. That mindset creates a trap for organizations: employees end up using “shadow AI,” unauthorized tools that defenders know nothing about, so no one knows what is being shared or where that data ultimately ends up.
To be able to say they have safely integrated AI into their workflows, companies need two things, the researchers suggest: observability and auditability, and a “kill switch” mechanism. At the moment, almost a third of respondents (31%) said they lacked visibility into all of their AI systems.
When it comes to kill switches, around 40% support the idea, but half (50%) are unsure how to implement one.
Despite the regulatory and governance challenges and risks, sentiment around AI remains positive. In fact, nearly half of respondents (44%) believe agentic AI will “significantly improve” cyber defense in the near term.
“Agentic AI is taking organizations into a new category of risk,” Jin added. “Our research shows that the concerns are already clear, from the exposure of sensitive data to the loss of oversight. Without visibility and control, organizations are deploying systems that they do not fully understand or control, and that risk will only increase unless action is taken.”
