Healthcare workers are making a worrying number of workplace security errors




  • Netskope report finds almost all healthcare workers use AI tools that train on user data
  • HIPAA-protected information, passwords, IP, and more at risk
  • Organizations need to approve AI tools faster

New Netskope research has blamed healthcare workers for putting their companies at risk by regularly attempting to upload confidential and regulated data to unapproved locations, including generative AI chatbots such as ChatGPT and Gemini.

Highlighting the scale of unapproved tool use, the report revealed that 96% of respondents used applications that leverage user data for training.
