- ChatGPT is being asked some interesting security questions
- Users are concerned about phishing, scams, and privacy
- People are feeding personal information to AI chatbots, putting themselves at risk
AI is quickly becoming a personal advisor for many people, offering help with daily schedules, rewording those difficult emails, and even acting as a companion for niche hobbies.
While these uses are usually harmless, many people have also started using ChatGPT as a security guru, and they aren't doing it in a particularly safe way.
New NordVPN research has revealed some of the security questions people are asking ChatGPT, from dodging phishing attacks to wondering whether a smart toaster could become a household threat.
Do not feed your data to ChatGPT
The top security question asked by ChatGPT users is “how can I recognize and avoid phishing scams?”, which is understandable, since phishing is probably the most common cyber threat an ordinary person will face.
The rest of the questions follow a similar trajectory, from which VPN is best to advice on how to keep personal information secure online. It is genuinely refreshing to see AI being used as a force for good at a time when hackers are jailbreaking AI tools to pump out malware.
However, not everything is good news, I’m afraid. NordVPN’s research also highlighted some of the strangest security questions people ask ChatGPT, such as “can hackers steal my thoughts through my smartphone?” and “if I delete a virus by pressing the delete key, is my computer safe?”
Others express concerns about hackers potentially listening to them whisper their password as they type it, or hackers using ‘the cloud’ to snoop on their phones while charging during an electrical storm.
“While some questions are serious and insightful, others are hilariously strange, but they all reveal a worrying reality: many people still misunderstand cybersecurity. This knowledge gap leaves them exposed to scams, identity theft, and social engineering. Worse, users unknowingly share personal data while looking for help,” says Marijus Briedis, CTO at NordVPN.
Users often ask AI models questions that include sensitive personal information, such as physical addresses, contact details, login credentials, and banking information.
This is particularly dangerous, since most AI services store chat history and use it to help train the model to answer questions better. The key issue is that hackers could use carefully crafted prompts to extract that sensitive information from the AI and use it for all kinds of disastrous ends.
“Why does this matter? Because what may seem like a harmless question can quickly become a real threat,” says Briedis. “Scammers can exploit the information that users share, be it an email address, login credentials, or payment details, to launch phishing attacks, hijack accounts, or commit financial fraud. A single chat can end up compromising your entire digital identity.”
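If you do want to ask a chatbot about a suspicious charge or a worrying email, the safest habit is to strip the identifying details out of the prompt before it ever leaves your machine. Below is a minimal, hypothetical Python sketch of that idea; the regex patterns and placeholder labels are illustrative assumptions, not an exhaustive PII filter or any vendor's API.

```python
import re

# Rough patterns for common PII; illustrative assumptions, not an exhaustive filter.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-like runs of 13-16 digits
    "PHONE": re.compile(r"\+?\d{1,3}[ -]?\(?\d{2,4}\)?[ -]?\d{3}[ -]?\d{3,4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything that looks like PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("My card 4111 1111 1111 1111 was charged twice; "
           "email me at jane.doe@example.com or call +1 555-123-4567.")
    print(scrub(raw))
```

Running the sketch prints the same question with the card number, email address, and phone number replaced by placeholders, so the chatbot can still help without ever seeing the underlying data.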