- An experimental AI agent unexpectedly tried to mine cryptocurrency during training
- The AI was discovered only after triggering security alerts on its servers.
- Researchers say behavior highlights new security challenges as AI agents gain more autonomy
AI models can surprise developers; That’s part of the point. But a group of researchers encountered a disturbing surprise when training for an experimental AI agent revealed that it was trying to redirect computing resources toward cryptocurrency mining and smuggle them to an external server, even though it was not asked to do anything of the sort.
Researchers working with Alibaba explained in a new paper that the model, called Rome, was designed to address complex coding challenges by interacting directly with software tools. You can issue terminal commands and navigate digital environments like an operator yourself. But security alerts from Alibaba Cloud’s infrastructure alerted the team to what appeared to be a cybersecurity breach. It turns out that the activity was coming from the AI agent itself.
Rome was trained using reinforcement learning, which “rewards” an AI agent for actions that bring it closer to its goals and discourages actions that lead to failure. Reinforcement learning often produces creative solutions. Sometimes those solutions seem strange to human observers.
Article continues below.
Somehow, the AI model generated commands that did not appear to be related to the programming tasks it had been assigned. Instead, the agent attempted to redirect resources from the graphics processing unit toward cryptocurrency mining. GPUs are well suited to the task because they excel at parallel computing. The same hardware that powers AI training can also be used to mine digital currencies.
Apparently, Rome had discovered that the resources available in its environment could serve this purpose. Unsupervised AI entered cryptocurrency mines. But the experiment took an even stranger turn when researchers noticed that the AI agent had created a reverse SSH tunnel to an external server, basically a secret passage that bypasses typical firewall protections. It is a technique that is often used by both system administrators to manage remote machines and in certain types of cyber attacks.
The model had never been instructed to establish such a connection. Researchers say the behavior arose spontaneously. The agent was simply experimenting with the available capabilities.
Trickster AI
A typical AI agent could collect information from multiple sources, analyze it, and generate reports without constant human supervision. Developers hope that such systems will eventually be widely used for research, programming or data analysis. But the same capabilities that make agents powerful also make them unpredictable. That’s why people are interested in what OpenClaw can do or what is published on Moltbook.
When a system can freely explore a computing environment, it can discover actions that technically achieve their goals but do not align with the intentions of their creators. Roma is not sentient and cannot “try” to break the rules in a human sense, but that is how the model behaved.
Once the unusual activity was identified, the research team introduced additional safeguards to prevent it from occurring, such as tighter restrictions on network connections and stricter limits on how the agent could access hardware resources. They also refined the training environment so that the agent’s exploration remained focused on relevant programming activities rather than wandering into the potential of crypto mining.
And while changes are common in AI development, the incident illustrates both the potential and danger of AI agents. It’s a peculiar anecdote, but it touches on a serious issue in AI research. As systems gain greater autonomy, they interact with real infrastructure, engaging in ways that mimic human behavior and thus raising new security concerns.
Even when the consequences are minor, unexpected behavior can reveal important vulnerabilities. In a broader or more sensitive environment, what Rome did could have been dangerous. Even as AI agents are deployed more than ever, they need better security systems, or it won’t just be a secret cryptocurrency mine that goes unnoticed.
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds. Be sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp also.




