- Elon Musk's AI plans call for the equivalent of 50 million H100 GPUs within five years
- xAI's training target equals 50 ExaFLOPS, but that does not mean 50 million literal GPUs
- Achieving 50 ExaFLOPS with H100s would require as much power as 35 nuclear plants
Elon Musk has shared a bold new milestone for xAI: deploying the equivalent of 50 million H100-class GPUs by 2030.
Framed as a measure of AI training performance, the claim refers to compute capacity, not a literal unit count.
Even so, and despite continuous advances in AI accelerator hardware, the goal implies extraordinary infrastructure commitments, especially in power and capital.
A massive jump in compute scale, with fewer GPUs than it sounds
In a post on X, Musk declared: "The xAI goal is 50 million in units of H100 equivalent-AI compute (but much better power-efficiency) online within 5 years."
Each NVIDIA H100 AI GPU can deliver about 1,000 TFLOPS in FP16 or BF16, the common formats for AI training, so reaching 50 ExaFLOPS at that baseline would theoretically require 50 million H100s, although newer architectures such as Blackwell and Rubin dramatically improve performance per chip.
According to performance projections, only about 650,000 GPUs based on the future Feynman Ultra architecture might be needed to hit the target.
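The figures above can be sanity-checked with simple arithmetic. This sketch uses the article's own baseline (roughly 1,000 FP16/BF16 TFLOPS per H100) and derives the per-chip speedup that the 650,000-GPU Feynman Ultra projection implies; the constants are the article's round numbers, not official NVIDIA specifications.

```python
# Back-of-envelope check of the GPU counts cited in the article.
# Assumptions: H100 ~= 1,000 dense FP16/BF16 TFLOPS (the article's baseline);
# the Feynman Ultra figure is a projection, so we only derive the implied speedup.

TARGET_H100_EQUIVALENTS = 50_000_000   # Musk's stated goal
H100_TFLOPS = 1_000                    # per GPU, FP16/BF16

# Total training throughput implied by the goal, in TFLOPS
target_tflops = TARGET_H100_EQUIVALENTS * H100_TFLOPS

# GPUs needed if the cluster were built purely from H100s
h100_count = target_tflops / H100_TFLOPS        # 50 million, by construction

# Per-chip speedup implied if only ~650,000 future GPUs suffice
feynman_count = 650_000
implied_speedup = TARGET_H100_EQUIVALENTS / feynman_count   # ~77x per chip

print(f"{h100_count:,.0f} H100s, or ~{implied_speedup:.0f}x faster chips")
```

In other words, the 650,000-GPU projection quietly assumes chips roughly 77 times faster than an H100 at AI training precision.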
The company has already begun scaling aggressively: its current Colossus 1 cluster runs 200,000 H100 and H200 GPUs, plus 30,000 Blackwell-based GB200 chips.
A new cluster, Colossus 2, is slated to come online with more than 1 million GPUs, combining 550,000 GB200 and GB300 nodes.
This places xAI among the fastest adopters of cutting-edge AI training hardware and frontier-scale models.
The company probably benchmarked against the H100 rather than the newer H200 because the former remains a well-understood reference point in the AI community, widely benchmarked and used in major deployments.
Its consistent FP16 and BF16 performance makes it a clear unit of measure for long-term planning.
But perhaps the most pressing problem is power. A 50-ExaFLOPS AI cluster built on H100 GPUs would draw about 35 GW, the output of roughly 35 nuclear power plants.
Even with the most efficient projected GPUs, such as Feynman Ultra, a 50-ExaFLOPS cluster might still require up to 4.685 GW of power.
That is more than triple the power draw of xAI's upcoming Colossus 2. Even with efficiency gains, the scale of the power supply remains a key uncertainty.
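The "35 nuclear plants" comparison follows from per-GPU power draw. A minimal sketch, assuming the H100's roughly 700 W TDP (SXM part) and a round 1 GW of output per nuclear plant; neither constant comes from the article, and the estimate covers only the GPUs themselves, not cooling, networking, or facility overhead:

```python
# Rough power math behind the "35 nuclear plants" figure.
# Assumptions: ~700 W TDP per H100 (SXM), ~1 GW output per nuclear plant.
# GPU draw only -- real datacenters add cooling and networking overhead (PUE > 1).

H100_TDP_W = 700
GPU_COUNT = 50_000_000
PLANT_OUTPUT_GW = 1.0

cluster_gw = GPU_COUNT * H100_TDP_W / 1e9   # watts -> gigawatts
plants = cluster_gw / PLANT_OUTPUT_GW

print(f"{cluster_gw:.1f} GW, ~{plants:.0f} plants")  # 35.0 GW, ~35 plants
```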
Cost will also be a problem. At current prices, a single NVIDIA H100 costs more than $25,000.
Deploying 650,000 next-generation GPUs could instead amount to tens of billions of dollars in hardware alone, not counting interconnect, cooling, facilities, and power infrastructure.
Ultimately, Musk's plan for xAI is technically plausible but financially and logistically daunting.
Via Tom's Hardware