The blockchain trilemma reared its head once again at Consensus Hong Kong in February, putting Charles Hoskinson, the founder of Cardano, somewhat on the defensive: he had to assure attendees that hyperscalers like Google Cloud and Microsoft Azure are not a risk to decentralization.
He argued that large blockchain projects need hyperscalers, and that a single point of failure is not a concern, because:
- Advanced cryptography neutralizes risk
- Multi-party computation distributes key material
- Confidential computing protects data in use
The argument was based on the idea that “if the cloud can’t see the data, the cloud can’t control the system,” and was left there due to time constraints.
But there is a counterargument to Hoskinson’s case for hyperscalers that deserves more attention.
MPC and confidential computing reduce exposure
This was the strategic cornerstone of Hoskinson’s argument: that technologies such as multi-party computation (MPC) and confidential computing ensure that infrastructure providers have no access to the underlying data.
They are powerful tools, but they do not dissolve the underlying risk.
MPC distributes key material among multiple parties so that no participant can piece together a secret. That significantly reduces the risk of a single node being compromised. However, the security surface expands in other directions. The coordination layer, communication channels and governance of participating nodes become critical.
Instead of relying on a single key holder, the system now relies on a distributed set of actors behaving correctly and the protocol being implemented correctly. The single point of failure doesn’t go away. In fact, it simply becomes a distributed trust surface.
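To make the trade-off concrete, here is a minimal sketch of additive secret sharing, one of the simplest building blocks behind MPC-style key distribution. It is an illustration only, not any particular vendor’s scheme; the field modulus and the five-party setup are arbitrary choices for the example.

```python
# Minimal additive secret sharing: a key is split into n shares so that
# all n shares together reconstruct it, while any subset of n-1 shares is
# statistically independent of the key. Illustrative sketch only.
import secrets

PRIME = 2**255 - 19  # arbitrary large prime modulus for the example

def split_secret(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the complete set of shares recovers the secret."""
    return sum(shares) % PRIME

key = secrets.randbelow(PRIME)
shares = split_secret(key, n_parties=5)
assert reconstruct(shares) == key   # all five parties together: recovery works
partial = reconstruct(shares[:4])   # any four shares: a uniformly random value,
assert partial != key               # equal to the key only with negligible probability
```

Note what the sketch does not remove: the parties still have to coordinate to use the key, so compromise shifts from one key holder to the protocol, channels, and governance that connect them, which is exactly the distributed trust surface described above.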
Confidential computing, particularly trusted execution environments, introduces a different trade-off. Data is encrypted at runtime, limiting exposure to the hosting provider.
But trusted execution environments (TEEs) rest on hardware assumptions. They depend on microarchitectural isolation, firmware integrity, and correct implementation. Academic literature has repeatedly shown that architectural and side-channel vulnerabilities continue to emerge in enclave technologies. The security boundary is narrower than in the traditional cloud, but it is not absolute.
More importantly, both MPC and TEEs often operate on top of hyperscaler infrastructure. The physical hardware, the virtualization layer, and the supply chain remain concentrated. If an infrastructure provider controls access to machines, bandwidth, or geographic regions, it retains operational influence. Cryptography may prevent data inspection, but it does not prevent performance restrictions, shutdowns, or policy interventions.
Advanced cryptographic tools make targeted attacks more difficult, but still do not eliminate the risk of failure at the infrastructure level. They simply replace a visible concentration with a more complex one.
The argument that “no L1 can handle global computation”
Hoskinson noted that hyperscalers are necessary because no Layer 1 can handle the computational demands of global systems, referencing the trillions of dollars that have helped build such data centers.
Of course, Layer 1 networks were not built to run AI training cycles, high-frequency trading engines, or business analytics pipelines. They exist to maintain consensus, verify state transitions, and provide long-lasting data availability.
Hoskinson is right about what Layer 1 is for. But global systems mainly need results that anyone can verify, even if the computation happens somewhere else.
In modern cryptographic infrastructure, heavy computation increasingly happens off-chain. What matters is that the results can be proven and verified on-chain. This is the basis of rollups, zero-knowledge systems, and verifiable compute networks.
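The asymmetry is easy to see in code. The sketch below uses a brute-force hash search as a stand-in for expensive off-chain work; real verifiable-compute systems use succinct proofs (SNARKs/STARKs) rather than hash puzzles, so only the cost asymmetry, not the mechanism, is representative.

```python
# Compute/verify asymmetry: producing a valid result takes many steps,
# checking it takes one. A hash puzzle stands in for heavy off-chain work.
import hashlib

DIFFICULTY = "0000"  # acceptance rule the verifier enforces (illustrative)

def heavy_offchain_search(payload: bytes) -> int:
    """Expensive: try nonces until the hash satisfies the rule."""
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(DIFFICULTY):
            return nonce
        nonce += 1

def cheap_onchain_verify(payload: bytes, nonce: int) -> bool:
    """Cheap: a single hash, regardless of how much work the prover did."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith(DIFFICULTY)

payload = b"state transition #42"
nonce = heavy_offchain_search(payload)       # tens of thousands of hashes off-chain
assert cheap_onchain_verify(payload, nonce)  # one hash to verify on-chain
```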
Focusing on whether an L1 can execute global computation misses the central question of who controls the execution and the storage infrastructure behind the verification.
If the computation is performed off-chain but relies on centralized infrastructure, the system inherits centralized failure modes. Consensus remains decentralized in theory, but the path to producing valid state transitions is concentrated in practice.
The issue should be dependency on the infrastructure layer, not computational capacity within Layer 1.
Cryptographic neutrality is not the same as infrastructure neutrality
Cryptographic neutrality is a powerful idea and something Hoskinson used in his argument. It means that the rules cannot be changed arbitrarily, that hidden backdoors cannot be introduced, and that the protocol remains fair.
But cryptography still runs on hardware.
That physical layer determines who can participate, who can afford it, and who ends up excluded, because performance and latency are ultimately limited by the actual machines and infrastructure they run on. If hardware production, distribution, and hosting remain centralized, participation remains economically limited even when the protocol itself is mathematically neutral.
In compute-heavy systems, hardware is decisive. It determines the cost structure, who can scale, and how resilient the system is under censorship pressure. A neutral protocol running on concentrated infrastructure is neutral in theory but constrained in practice.
The priority should shift towards cryptography combined with diversified hardware ownership.
Without infrastructure diversity, neutrality becomes brittle under pressure. If a small set of providers can limit workloads, restrict regions, or impose compliance barriers, the system inherits their influence. Fairness in rules alone does not guarantee fairness in participation.
Specialization trumps generalization in computing markets
Competing with AWS is often framed as a question of scale, but that framing is misleading.
Hyperscalers optimize for flexibility. Their infrastructure is designed to serve thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tools, and elasticity guarantees: these features are strengths for general-purpose computing, but they are also layers of cost.
Zero-knowledge proving and verifiable computation are deterministic, compute-dense, memory-bandwidth-bound, and pipeline-sensitive. In other words, they reward specialization.
A purpose-built proving network competes on proofs per dollar, proofs per watt, and proofs per unit of latency. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, efficiency compounds. Removing unnecessary abstraction layers reduces overhead. Sustained performance on persistent clusters outperforms elastic scaling for narrow, constant workloads.
In computing markets, specialization consistently outperforms generalization for constant, high-volume tasks. AWS optimizes for optionality. A dedicated proving network optimizes for one class of work.
The economic structure also differs. Hyperscaler pricing is built around enterprise margins and wide demand variability. A network aligned around protocol incentives can amortize hardware differently and tune performance around sustained utilization rather than short-term rental models.
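A back-of-the-envelope calculation shows how that plays out. Every number below is a hypothetical placeholder chosen only to illustrate how utilization and specialization enter the cost-per-proof arithmetic; it does not describe any real provider or network.

```python
# Hypothetical cost-per-proof comparison. All figures are illustrative.
def cost_per_proof(hourly_cost: float, proofs_per_hour: float, utilization: float) -> float:
    """Effective cost per proof once idle time is amortized into the bill."""
    return hourly_cost / (proofs_per_hour * utilization)

# General-purpose cloud instance: rented elastically, priced for optionality.
general = cost_per_proof(hourly_cost=12.0, proofs_per_hour=300, utilization=0.45)

# Purpose-built cluster: amortized hardware, tuned pipeline, sustained load.
dedicated = cost_per_proof(hourly_cost=7.0, proofs_per_hour=500, utilization=0.90)

print(f"general-purpose: ${general:.4f} per proof")   # about $0.089
print(f"dedicated:       ${dedicated:.4f} per proof") # about $0.016
```

Under these made-up numbers the dedicated cluster is several times cheaper per proof, and the gap comes as much from sustained utilization as from raw hardware speed.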
The competition focuses on structural efficiency for a defined workload.
Use hyperscalers, but don’t depend on them
Hyperscalers are not the enemy. They are efficient, reliable and globally distributed infrastructure providers. The problem is dependency.
A resilient architecture uses major vendors for burst capacity, geographic redundancy, and edge distribution, but does not anchor core functions to a single vendor or a small group of vendors.
Settlement, final verification, and availability of critical artifacts must remain intact even if a cloud region fails, a vendor exits a market, or political restrictions tighten.
This is where decentralized storage and compute infrastructure become a viable alternative. Proof artifacts, historical records, and verification inputs should not be removable at a provider’s discretion. Instead, they should live on infrastructure that is economically aligned with the protocol and structurally difficult to disable.
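One way to express that requirement is to treat artifact storage as provider-redundant by construction: an artifact counts as durable only while a quorum of independent backends can still serve it. The sketch below is a toy model; the backend names are placeholders, not real services or APIs.

```python
# Toy model of provider-redundant artifact storage: replicate a content-
# addressed artifact across independent backends and require a quorum for
# availability. Backend names are placeholders, not real services.
from hashlib import sha256

class Backend:
    """Stand-in for any storage target (cloud bucket, decentralized network, ...)."""
    def __init__(self, name: str):
        self.name = name
        self._blobs = {}
        self.online = True

    def put(self, key: str, blob: bytes) -> None:
        self._blobs[key] = blob

    def get(self, key: str):
        return self._blobs.get(key) if self.online else None

def store_artifact(blob: bytes, backends: list) -> str:
    """Content-address the artifact and replicate it to every backend."""
    key = sha256(blob).hexdigest()
    for backend in backends:
        backend.put(key, blob)
    return key

def is_available(key: str, backends: list, quorum: int) -> bool:
    """The artifact survives as long as at least `quorum` backends serve it."""
    return sum(1 for backend in backends if backend.get(key) is not None) >= quorum

backends = [Backend("cloud-region-a"), Backend("cloud-region-b"),
            Backend("decentralized-store-1"), Backend("decentralized-store-2")]
key = store_artifact(b"proof artifact bytes", backends)

backends[0].online = False                    # one provider exits or is restricted
assert is_available(key, backends, quorum=2)  # the artifact is still retrievable
```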
Hyperscalers should be used as an optional accelerator rather than something fundamental to the product. The cloud can still be useful for spikes and bursts, but the system’s ability to produce proofs, and to persist what verification depends on, should not be controlled by a single vendor.
In such a system, if a hyperscaler disappeared tomorrow, the network would merely slow down, because the most important parts are owned and operated by a broader set of participants rather than rented from a single big-brand choke point.
This is how the decentralized spirit of crypto is strengthened.




