Anthropic is testing the most powerful AI model it has ever built, and the world wasn’t supposed to know about it yet.
A data breach reported by Fortune on Thursday revealed that the AI lab behind Claude has trained a new model called “Mythos,” which it internally describes as “by far the most powerful AI model we’ve ever developed.”
The model’s existence came to light in a draft blog post left in an unsecured, publicly searchable data cache, along with nearly 3,000 other unpublished assets, according to cybersecurity researchers who reviewed the material.
Anthropic confirmed the model’s existence after Fortune’s investigation, calling it “a step change” in AI performance and “the most capable we’ve built to date.” The company said it is being tested by “early access customers” and acknowledged that “human error” in its content management system caused the leak.
The draft blog post introduced a new model tier called “Capybara,” described as larger and more capable than Anthropic’s existing Opus models, previously the company’s most powerful.
“Compared to our previous best model, Claude Opus 4.6, Capybara scores dramatically higher in software coding, academic reasoning, and cybersecurity tests, among others,” the draft said.
It is the cybersecurity dimension that matters most for the crypto industry. The draft blog post said the model “poses unprecedented cybersecurity risks,” a framing with direct implications for blockchain security, smart contract auditing, and the escalating arms race between attackers and defenders in DeFi.
Just this week, Ripple announced an AI-powered security review for the XRP Ledger after an AI-assisted red team discovered more than 10 vulnerabilities in its 13-year-old codebase. Ethereum launched a dedicated post-quantum security center backed by eight years of research.
And the Resolv stablecoin lost its peg after an attacker exploited a minting contract that lacked oracle controls and relied on single-key access control, the type of infrastructure flaw that more capable AI tools could identify before an attacker does, or exploit faster than defenders can respond.
For the AI token market, the leak raises a different question. Bittensor, a decentralized AI network, recently launched Covenant-72B, a model that competes with Meta’s Llama 2 70B; the release sparked a 90% rally in TAO and lifted the network’s subnet tokens to a combined market cap of $1.47 billion.
A “step change” from a centralized lab like Anthropic resets the benchmark that decentralized AI projects must match. The competitive gap between what a well-funded corporate lab can build and what a permissionless network can produce has just widened.
Anthropic said it is “being deliberate” about launching the model given its capabilities. The draft blog noted that the model is expensive to run and is not yet ready for general availability. The company removed public access to the data cache after being contacted by Fortune.
The leak itself is its own warning. A company building what it describes as an AI model with unprecedented cybersecurity capabilities left the announcement of that model in an unsecured, publicly searchable data cache due to human error. The irony needs no explanation.