The origin story of one of the world’s largest AI policy war chests begins with a dog coin, a closet in Canada, and a 78-digit number.
In 2021, the creators of Shiba Inu sent a massive tranche of SHIB tokens to Vitalik Buterin’s wallet, unsolicited. The idea was simple: put “Vitalik holds half our supply” in the marketing materials and ride the association to become the next Dogecoin. The tokens surged in value, reaching a paper value of over $1 billion.
Buterin wanted out. He sold what he could for ETH and donated $50 million to GiveWell, but he was still sitting on a mountain of SHIB.
There are often posts mentioning that I donated a large amount of funds to @FLI_org years ago and connecting me to various political actions they take. I thought I would make clear on the record both the nature of my connection to them and the similarities and differences…
— vitalik.eth (@VitalikButerin) March 13, 2026
He divided the rest in half. One half went to CryptoRelief, which used part of it to fund medical infrastructure in India and part to support Balvi, Buterin’s own research initiative.
The other half went to the Future of Life Institute, an organization focused on the existential risks of artificial intelligence, biotechnology and nuclear weapons. The FLI had presented him with a roadmap covering all the main risk categories, as well as “pro-peace and pro-epistemic initiatives.”
Buterin expected FLI to extract only $10 million to $25 million, given SHIB’s thin liquidity. Instead, the organization managed to liquidate approximately $500 million. CryptoRelief achieved a similar exit from its half, he said.
A meme coin that no one took seriously had just created a billion-dollar philanthropic event, and half of it went to an organization that would soon change its entire strategy.
That twist is why Buterin posted on Friday. He said the FLI experienced “an internal shift whereby they began to focus on cultural and political action as a primary method, quite different from the original approach.”
FLI’s justification, according to Buterin, is that AGI is advancing rapidly and the organization needs to act aggressively to counter the lobbying budgets of the big AI companies.
“My concern is that large-scale coordinated political action with large pools of money is something that can easily lead to undesirable results, cause negative reactions, and solve problems in an authoritarian and fragile way, even if it was not originally intended that way,” he wrote.
He pointed to FLI’s biosafety approach as an example. The organization’s main strategy has been to build safeguards into AI models and biosynthesis devices so that they refuse to generate dangerous outputs.
Buterin called this “very fragile,” noting that leaks, tweaks and other workarounds make such restrictions easy to circumvent. He warned that the logical endpoint of that approach leads to “banning open source AI” and then “supporting a good AI company to establish global dominance and not allowing anyone else to reach the same level.”
“Approaches like this VERY EASILY backfire: they turn the rest of the world into your enemy,” he wrote.
He also pointed out a structural problem with regulation-first strategies. When governments restrict a dangerous technology, national security agencies are inevitably exempted, and those same agencies are often a source of risk themselves. He cited leaks from government laboratory programs as an example.
However, Buterin said he has been “encouraged” by some of the FLI’s recent work, specifically a “pro-human AI statement” that he said “unites conservatives, progressives and libertarians, the United States, Europe and China.” He also noted that the FLI has been investigating ways to prevent the concentration of AI power.
But the central message was clear. A donation Buterin never planned, of tokens he never wanted, funded an organization that strayed from the focus he believed in and is now deploying hundreds of millions of dollars in ways that make him uncomfortable. He shared his concerns with FLI on “several occasions” before going public.
FLI did not immediately respond to CoinDesk’s request for comment on the size of the donation or Buterin’s AI-safety concerns.