- Chatgpt flaw on the server side allows attackers to steal data without any user interaction
- Shadowleak omits the traditional security of the end point
- Millions of commercial users could be exposed due to shadowleak exploits
Companies use more and more AI tools such as Chatgpt’s deep research agent to analyze emails, CRM data and internal reports for strategic decision making, experts have warned.
These platforms offer automation and efficiency, but also introduce new security challenges, particularly when it comes to confidential commercial information.
Radware recently revealed a zero click defect in the deep Chatgpt research agent, called “Shadowleak”, but unlike traditional vulnerabilities, this failures exfiltrates the undercover confidential data.
Shadowleak: An exploit of the click zero server side
It allows the attackers to completely completely confidential confidential data of the Operai servers, without requiring any user interaction.
“This is the click zero attack,” said David Aviv, director of radio technology.
“User’s action is not required, nor a visible signal, and there is no way for the victims to know that their data has been compromised. Everything happens completely behind the scene through actions of autonomous agents in OpenAi cloud servers.”
Shadowleak also operates independently of the final points or networks, which makes the detection extremely difficult for business security equipment.
The researchers showed that simply sending an email with hidden instructions could trigger the deep research agent to filter information autonomously.
Pascal Genens, director of intelligence of cyber threats at Radware, explained that “companies that adopt AI cannot only trust the safeguards incorporated only to prevent abuse.
“The workflows driven by AI can still be manipulated not yet anticipated, and these attack vectors often avoid the visibility and detection capacities of traditional security solutions.”
Vulnerability represents the first exfiltration of zero click data on the server side, without leaving almost any evidence from the perspective of the companies.
With ChatGPT informing more than 5 million commercial users who pay, the potential exposure scale is substantial.
Human supervision and strict access controls remain critical when confidential data are connected to AI autonomous agents.
Therefore, organizations that adopt AI must address these tools with caution, continuously evaluate security gaps and combine technology with informed operational practices.
How to stay safe
- Implement the cybersecurity defenses in layers to protect against multiple types of attacks simultaneously.
- Regularly monitor the workflows driven by AI to detect unusual activities or potential data leaks.
- Implement the best antivirus solutions in all systems to protect against traditional malware attacks.
- Maintain robust ransomware protection to safeguard the confidential information of lateral motion threats.
- Comply with strict access controls and user permissions for AI tools that interact with confidential data.
- Ensure human supervision when self -employed agents access or process confidential information.
- Implement the registration and audit of AI agent activity to identify early anomalies.
- Integrate additional AI tools for the detection of automated anomalies and safety alerts.
- Educate employees on threats related to AI and the risks of the workflows of autonomous agents.
- Combine software defenses, best operational practices and continuous surveillance to reduce exposure.