- 30% of Britons have shared confidential personal information with AI chatbots
- NymVPN research shows company and customer data are also at risk
- Experts emphasize the importance of taking precautions, such as using a quality VPN
Almost one in three Britons share confidential personal data with AI chatbots such as OpenAI's ChatGPT, according to research by cybersecurity company NymVPN. 30% of Britons have fed AI chatbots confidential information such as health and banking data, potentially putting their privacy, and that of others, at risk.
This oversharing with tools like ChatGPT and Google Gemini persists despite 48% of respondents expressing privacy concerns about AI chatbots. The problem also extends to the workplace, with employees sharing confidential company and customer data.
NymVPN's findings come in the wake of a series of recent high-profile data breaches, notably the Marks & Spencer cyber attack, which show how easily confidential data can fall into the wrong hands.
“Convenience is being prioritized over security”
NymVPN's research reveals that 26% of respondents admitted disclosing financial information related to salary, investments, and mortgages to AI chatbots. Even riskier, 18% shared credit card or bank account data.
24% of those surveyed by NymVPN admit to having shared customer data, including names and email addresses, with AI chatbots. Even more worryingly, 16% uploaded internal financial data and documents such as contracts. This is despite 43% expressing concern about confidential company data being leaked through AI tools.
“AI tools have quickly become part of how people work, but we are seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.
M&S, Co-op, and Adidas have all been in the headlines for the wrong reasons, having fallen victim to data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data is fed into AI, the bigger the target becomes for cybercriminals,” said Halpin.
The importance of not oversharing
With almost a quarter of respondents sharing customer data with AI chatbots, there is an urgent need for companies to implement clear guidelines and formal policies for AI use in the workplace.
“Employees and businesses urgently need to think about how they are protecting both personal privacy and company data when using AI tools,” said Halpin.
Although avoiding AI chatbots entirely would be the optimal solution for privacy, it is not always practical. At a minimum, users should avoid sharing confidential information with AI chatbots. Privacy settings can also be adjusted, such as disabling chat history or opting out of model training.
A VPN can add a layer of privacy when using AI chatbots such as ChatGPT by encrypting a user's internet traffic and masking their original IP address. This helps keep a user's location private and prevents their ISP from seeing what they do online. Even so, even the best VPN isn't enough if confidential personal data is still being fed into AI.