- 43% of organizations still have no plans for AI policies, report says
- At the moment, workers are adopting AI faster than companies are writing policies.
- Nexos.ai asks SMEs to implement basic policies – they can evolve from there
Although 70% of legal workers already use general-purpose AI for work, 43% of organizations say they do not yet have formal AI policies (and have no plans to create them).
New research from Nexos.ai suggests that the biggest risk from AI tools may actually come from a lack of visibility and governance.
And SMEs are generally the most at risk simply because they have fewer resources, both in staff and in procedures.
AI is largely unmanaged
Nexos.ai discovered that workers regularly paste contracts, NDAs, or legal correspondence into public chatbots to save time, putting sensitive information at risk. While enterprise-grade AI products promise strong data security and pledge not to train on customer data, public versions offer no such guarantees.
Data security (46%) was cited as the biggest concern for legal teams, ahead of ethical issues (42%) and legal privilege (39%), but the way workers actually interact with public chatbots doesn't square with those concerns.
Nexos.ai also noted that SMBs may already be running legal AI workflows that were never formally established or recognized. Because AI adoption happens gradually and without governance, companies end up playing catch-up, trying to enforce correct and safe use only after employees have already started using the tools.
“The risk for SMBs is not reckless use of AI, but an invisible change in workflow,” wrote product manager Zilvinas Girenas.
But it doesn't have to be difficult: the report explains that a basic AI policy does not have to be complex. Defining approved tools, listing prohibited use cases, and setting restrictions on sensitive data might be enough – or at least, better than the current absence of governance.
Looking ahead, Nexos.ai suggests companies start with a simple AI policy that keeps sensitive data out of the reach of unapproved tools. The report recommends that companies approve tools before teams adopt them, and that even after adoption, humans review AI-generated content before it is used in legal work.
“If those tools are integrated before the company has defined approved use, data limits and review steps, efficiency comes faster than governance,” Girenas concluded.