- OpenClaw Exposures Reveal Thousands of High-Risk Systems Accessible to the Internet
- AI agents are being deployed with excessive permissions in critical environments
- Remote Code Execution Vulnerabilities Affect Most Observed OpenClaw Implementations
Agent systems are rapidly moving from experimentation to everyday workflows, but recent findings suggest that security practices are not keeping pace.
According to SecurityScorecard, thousands of OpenClaw implementations are directly exposed to the Internet with minimal safeguards.
The team identified 40,214 Internet-exposed OpenClaw instances in total, with 28,663 unique IP addresses hosting control panels accessible from anywhere on the Internet.
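Exposures like these typically arise when a local control panel listens on all network interfaces instead of loopback. As a general illustration (not OpenClaw's actual code; the function below is hypothetical), the bind address alone decides who can reach the panel:

```python
import socket

def make_listener(bind_addr: str, port: int = 0) -> socket.socket:
    """Open a TCP listener; the bind address decides who can reach it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # "0.0.0.0" listens on every interface: if the host has a public IP
    # and no firewall, the panel is reachable from the whole internet.
    # "127.0.0.1" restricts the listener to the local machine only.
    s.bind((bind_addr, port))
    s.listen()
    return s

# Safer default for a local agent control panel: loopback only.
panel = make_listener("127.0.0.1")
print(panel.getsockname()[0])  # -> 127.0.0.1
panel.close()
```

A reverse proxy with authentication, or a firewall rule, is the usual fix when remote access is genuinely needed.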
Exposed AI Agents Become a Hacker’s Dream Target
“The math is simple: When you give an AI agent full access to your computer, you give the same access to anyone who could compromise it,” the researchers stated.
Approximately 63% of the observed deployments appear vulnerable to remote code execution, allowing attackers to take over the host machine without user interaction.
Among the exposures, the team identified three common high-severity vulnerabilities affecting OpenClaw, with CVSS scores ranging from 7.8 to 8.8.
Public exploit code is now available for all three vulnerabilities, meaning attackers do not need advanced skills to compromise exposed systems.
The investigation also found that 549 exposed instances correlate with previous breach activity, and 1,493 are associated with known vulnerabilities that compound the risk to users.
The exposed deployments are heavily concentrated in major hosting and cloud providers, indicating repeatable and easily replicable insecure deployment patterns.
OpenClaw, formerly known as Moltbot and Clawdbot, is marketed as a personal AI agent that can schedule meetings, send emails, and manage tasks on behalf of users.
The problem is not the capabilities of the AI but the access and permissions granted to these systems without adequate security controls.
“In practice, because it was written by AI, security was not a dominant feature in the development process,” said Jeremy Turner, vice president of Threat Intelligence at SecurityScorecard.
“For people who want to use more agentic AI systems, they really need to carefully consider what integrations they support and what permissions they actually grant.”
Many users are configuring these bots with personal and company names, revealing exactly who is using these AI tools and making them attractive targets for attackers.
Every time a user connects an AI agent to a platform, they grant it an identity with specific permissions.
That identity can publish content, access email, read files, or interact with other systems on the user’s behalf.
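The least-privilege alternative implied here is to grant each agent identity an explicit allow-list of actions and deny everything else by default. A minimal, hypothetical sketch (none of these class or action names come from OpenClaw):

```python
# Hypothetical sketch of least-privilege agent permissions: each connected
# platform grants the agent an identity with an explicit allow-list of
# actions, and anything not granted is refused by default.

class AgentIdentity:
    def __init__(self, name: str, allowed_actions: set[str]):
        self.name = name
        self.allowed = frozenset(allowed_actions)

    def perform(self, action: str, target: str) -> str:
        # Deny by default: only explicitly granted actions are executed.
        if action not in self.allowed:
            raise PermissionError(f"{self.name} may not {action}")
        return f"{self.name}: {action} -> {target}"

# Grant only what the workflow needs, not blanket access to email,
# files, and publishing in one identity.
scheduler = AgentIdentity("calendar-bot", {"read_calendar", "create_event"})
print(scheduler.perform("create_event", "standup 09:00"))  # allowed
# scheduler.perform("send_email", "...") would raise PermissionError.
```

Scoping each identity this way limits what a compromised agent can do on the user's behalf.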
“The risk is not that these systems think for themselves,” Turner said. “We are giving them access to everything.”
“It’s like handing your laptop to a stranger on the street and hoping nothing bad happens… Any of the communications… on that device… will be untrusted third-party interfaces that can… take certain actions.”
A compromised agent could be ordered to transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate.
Unfortunately, the report reveals a fundamental disconnect between AI adoption and security practices.
Users are asked to provide these agents with broad access to the system, and in many cases, that has already led to data exposure, unwanted actions, and loss of control.
In some cases, OpenClaw takes actions beyond what users explicitly indicate, and Microsoft has since advised that it should not run on standard personal or enterprise devices.
Chinese authorities have restricted its use in office environments, citing its tendency toward data exposure and broader security risks.
Some OpenClaw vulnerabilities allow hackers to access sensitive data and have been used to distribute malware through GitHub repositories.
“Don’t blindly download one of these things and start using it on a system that has access to your entire personal life. Create some separation and run some experiments on your own before you really trust the new technology to do what you want it to do,” Turner said.
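The "separation" Turner recommends can be as simple as running an experimental agent in a throwaway container with no host filesystem mounts. A hypothetical sketch (the image name and exact flags are illustrative, not an OpenClaw-specific recipe):

```shell
# Hypothetical: run an untrusted agent image in an isolated container.
# No host directories are mounted, so the agent cannot read your files;
# all capabilities are dropped and the root filesystem is read-only.
docker run --rm \
  --network none \
  --cap-drop ALL \
  --read-only \
  --tmpfs /tmp \
  agent-sandbox:experiment
```

Network access, specific mounts, or capabilities can then be added back one at a time as the experiment earns trust.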
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds.




