- AI is not good at generating URLs: many don't exist, and some could be phishing sites
- Attackers are now optimizing their sites for LLMs instead of for Google
- Developers are even embedding dubious AI-generated URLs in their code
New research has revealed that AI often returns incorrect URLs, potentially putting users at risk of attacks including phishing and malware.
A Netcraft report states that one in three (34%) login links provided by LLMs, including GPT-4.1, were not owned by the brands they were asked about, with 29% pointing to unregistered, inactive or parked domains and 5% pointing to unrelated but legitimate domains, leaving only 66% pointing to the correct domain.
Alarmingly, simple prompts such as 'tell me the login website for [brand]' led to unsafe results, meaning no adversarial input was needed.
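For anyone building on top of LLM output, one defensive pattern is to check a model-suggested login link against a maintained allowlist of official brand domains before surfacing it to users. A minimal sketch in Python, assuming a hand-curated allowlist (the domain names below are illustrative, not taken from the report):

```python
from urllib.parse import urlparse

def is_official_login_url(url: str, official_domains: set[str]) -> bool:
    """Return True only if the URL's hostname is an allowlisted domain
    or a subdomain of one; anything else is treated as suspect."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in official_domains)

# Hypothetical allowlist for illustration
ALLOWED = {"wellsfargo.com"}

print(is_official_login_url("https://connect.secure.wellsfargo.com/login", ALLOWED))  # True
print(is_official_login_url("https://wellsfargo-login.example.net/", ALLOWED))        # False
```

Note the suffix check requires a leading dot, so a lookalike such as `evilwellsfargo.com` does not pass as a subdomain of `wellsfargo.com`.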
Be careful with the links AI generates for you
Netcraft points out that this flaw could lead to widespread phishing risks, with users easily lured to phishing sites simply by asking a chatbot a legitimate question.
Attackers aware of the vulnerability could go on to register the unclaimed domains AI suggests and use them for attacks, and a real-world case has already been demonstrated, with Perplexity AI recommending a fake Wells Fargo site.
According to the report, smaller brands are more vulnerable because they are underrepresented in LLM training data, which increases the likelihood of hallucinated URLs.
It has also been observed that attackers are optimizing their sites for LLMs rather than relying on traditional SEO aimed at the likes of Google. An estimated 17,000 GitBook phishing pages targeting crypto users have already been created this way, with attackers imitating technical support pages, documentation and login pages.
Even more worrying, Netcraft observed developers using AI-generated URLs in code: "We found at least five victims who copied this malicious code into their own public projects, some of which show signs of being built using AI coding tools, including Cursor," the team wrote.
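Developers who worry about this failure mode could scan source files for hard-coded URLs and flag any whose hostname is not on an approved list, for example in a pre-commit hook. A rough sketch of the idea, where the regex and the approved-domain list are simplified assumptions, and the flagged domain is a made-up example:

```python
import re
from urllib.parse import urlparse

# Simplified URL matcher; a production hook would need something stricter
URL_RE = re.compile(r"https?://[^\s\"')>]+")

def find_unapproved_urls(source: str, approved: set[str]) -> list[str]:
    """Return hard-coded URLs whose hostnames are neither an approved
    domain nor a subdomain of one."""
    flagged = []
    for url in URL_RE.findall(source):
        host = (urlparse(url).hostname or "").lower()
        if not any(host == d or host.endswith("." + d) for d in approved):
            flagged.append(url)
    return flagged

code = 'API_BASE = "https://api.gitbook-support.example/v1"\nDOCS = "https://docs.python.org/3/"'
print(find_unapproved_urls(code, {"python.org"}))
# → ['https://api.gitbook-support.example/v1']
```

This would not catch a plausible-looking domain an attacker has actually registered, but it forces a human to look at every URL that an AI coding tool (or a copied snippet) introduces.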
As such, users are urged to verify any AI-generated content involving web addresses before clicking on the links. It is the same kind of advice given for any type of attack, with cybercriminals using a variety of attack vectors, including fake ads, to get people to click on their malicious links.
One of the most effective ways to verify a site's authenticity is to type the URL directly into the address bar rather than trusting links that could be dangerous.