- Almost 40% of IT workers admit to secretly using unauthorized AI tools
- Shadow AI use is growing as training gaps and fear of dismissal drive covert adoption
- AI tools used without oversight can leak confidential data and bypass existing security protocols
As artificial intelligence becomes increasingly integrated into the workplace, organizations are struggling to manage its adoption responsibly, according to new research.
A report from Ivanti has found that the growing use of unauthorized AI tools in the workplace is raising concerns about deepening skills gaps and increasing security risks.
Among IT workers, more than a third (38%) admit to using unauthorized generative AI tools, while almost half of office workers (46%) say their employer did not provide some or all of the AI tools they rely on.
Some companies allow the use of AI
Interestingly, 44% of companies have integrated AI across departments, yet many employees still secretly use unauthorized tools, often due to insufficient training.
One in three workers say they hide their AI use from management, often citing the “secret advantage” it provides.
Some employees avoid revealing their use of AI because they do not want to be perceived as incompetent.
With 27% reporting AI-fueled imposter syndrome and 30% worried their roles could be replaced, this disconnect also contributes to anxiety and burnout.
These behaviors point to a lack of trust and transparency, underlining the need for organizations to establish clear and inclusive AI usage policies.
“Organizations should consider building a sustainable AI governance model, prioritizing transparency and addressing the complex challenge of AI-fueled imposter syndrome through reinvention,” said Brooke Johnson, Ivanti’s chief legal counsel.
The covert use of AI also carries serious risks. Without adequate oversight, unauthorized tools can leak data, bypass security protocols and expose systems to attack, especially when administrators use them with elevated access privileges.
Organizations should respond not with crackdowns, but with modernization. This includes establishing inclusive AI policies and deploying secure infrastructure, starting with strong endpoint protection to detect rogue applications and ZTNA (zero trust network access) solutions to enforce strict access controls across distributed environments.
Ivanti points out that AI itself is not the problem; the real issues are unclear policies, weak security and a lack of trust. Left unchecked, shadow AI could widen the skills gap, strain mental health and compromise critical systems.