- Hidden URL fragments allow attackers to manipulate AI assistants without user knowledge
- Some AI assistants transmit sensitive data to external endpoints automatically
- Misleading indications and fake links can appear on otherwise normal websites.
Many AI browsers are facing scrutiny after researchers detailed how a simple fragment of a URL can be used to influence browser assistants.
New research from Cato Networks found that the “HashJack” technique allows malicious instructions to remain silent after a hashtag in an otherwise legitimate link, creating a path for covert commands that remain invisible to traditional monitoring tools.
The wizard processes hidden text locally, meaning the server never receives it and the user continues viewing a normal page while the browser follows instructions they never typed.
Wizard behavior when chunks are processed.
Testing showed that certain assistants attempt autonomous actions when exposed to these fragments, including actions that transmit data to external locations controlled by an attacker.
Others present misleading directions or promote links that imitate reliable sources, giving the impression of a normal session but altering the information provided to the user.
The browser continues to display the correct site, making the intrusion difficult to detect without close inspection of the wizard’s responses.
Major technology companies have been notified about the issue, but their responses varied significantly.
Some vendors rolled out updates to their AI browser features, while others judged expected behavior based on existing design logic.
The companies said the defense against indirect manipulation depends on how each AI assistant reads the instructions on the hidden pages.
Common traffic inspection tools can only observe URL fragments leaving the device.
Therefore, conventional security measures provide limited protection in this scenario because the URL fragments never leave the device for inspection.
This forces advocates to go beyond network-level review and examine how AI tools integrate with the browser itself.
Tighter monitoring requires attention to local behavior, including how assistants process hidden context invisible to users.
Organizations need to use stronger endpoint protection and stricter firewall rules, but this is just one layer and does not address the visibility gap.
The HashJack method illustrates a vulnerability unique to AI-assisted browsing, where legitimate websites can be weaponized without leaving conventional traces.
Being aware of this limitation is critical for organizations deploying AI tools, as traditional monitoring and defense measures cannot fully capture these threats.
How to stay safe
- Limit personal information shared online.
- Monitor financial accounts for unusual activity.
- Use unique and complex passwords for all accounts.
- Check URLs before logging into websites.
- Be wary of unsolicited messages or calls claiming to come from financial institutions.
- Deploy antivirus software to protect devices from malware.
- Enable firewalls to block unauthorized access.
- Use identity theft protection to monitor personal information.
- Recognize that sophisticated phishing campaigns and AI-powered attacks still pose risks.
- Effectiveness depends on consistent deployment across devices and networks.
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds. Be sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp also.




