- Almost half of IT teams do not fully know what data their AI agents access on a daily basis
- Companies love AI agents, but they also fear what those agents are doing behind closed digital doors
- AI tools now need governance, audit trails and controls, just like human employees
Despite growing enthusiasm for AI agents across companies, new research suggests that the rapid expansion of these tools is outpacing efforts to secure them.
A SailPoint survey of 353 IT professionals with enterprise security responsibilities has revealed a complex mix of optimism and anxiety about AI agents.
The survey reports that 98% of organizations intend to expand their use of AI agents over the next year.
AI agent adoption is outpacing security preparation
AI agents are being integrated into operations that handle sensitive business data, from customer records and financial information to legal documents and supply chain transactions. Yet 96% of respondents said they consider these same agents a growing security threat.
A central problem is visibility: only 54% of professionals claim to be fully aware of the data their agents can access, leaving nearly half of business environments in the dark about how AI agents interact with critical information.
Compounding the problem, 92% of respondents agreed that governance of AI agents is crucial to security, yet only 44% have an actual policy in place.
In addition, eight out of ten companies say their AI agents have taken actions they were not intended to take, including accessing unauthorized systems (39%), sharing inappropriate data (33%), and downloading sensitive content (32%).
Even more worrying, 23% of respondents admitted that their AI agents have been tricked into revealing access credentials, a potential gold mine for malicious actors.
A notable finding is that 72% believe AI agents pose greater risks than traditional machine identities.
Part of the reason is that AI agents often require multiple identities to function effectively, especially when they are integrated with high-performance tools or systems used for development and writing.
Calls for a shift to an identity-first model are growing louder, and SailPoint and others argue that organizations need to treat AI agents like human users, complete with access controls, accountability mechanisms, and full audit trails.
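To make that idea concrete, here is a minimal sketch of what such controls could look like. It is not SailPoint's product or any real API: the names (`AgentIdentity`, `allowed_scopes`, `audit_log`) and the permission scopes are hypothetical, chosen only to illustrate giving an agent its own identity, checking scoped permissions before each data access, and recording every attempt in an audit trail.

```python
# Illustrative sketch only: a hypothetical identity-and-audit wrapper for an AI agent.
# Names (AgentIdentity, allowed_scopes, audit_log) are invented for this example.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentIdentity:
    """Treat an AI agent like a human user: its own ID, scoped permissions, an audit trail."""
    agent_id: str
    allowed_scopes: set                              # e.g. {"crm:read", "finance:read"}
    audit_log: list = field(default_factory=list)

    def access(self, resource: str, scope: str) -> bool:
        """Check the agent's permissions and record the attempt either way."""
        granted = scope in self.allowed_scopes
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": self.agent_id,
            "resource": resource,
            "scope": scope,
            "granted": granted,
        })
        return granted


# Usage: the agent may read CRM records but is denied (and still logged) on finance data.
agent = AgentIdentity(agent_id="support-bot-01", allowed_scopes={"crm:read"})
print(agent.access("customer_records", "crm:read"))   # True
print(agent.access("payroll_db", "finance:read"))     # False, attempt still audited
```

Logging denied attempts alongside granted ones is what turns a simple permission check into an audit trail an IT team can actually review.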
AI agents are a relatively new addition to the business landscape, and it will take time for organizations to fully integrate them into their operations.
“Many organizations are still early in this journey, and growing concerns around data control highlight the need for stronger, more comprehensive identity security strategies,” SailPoint concluded.