- Distributed micro data centers convert unused electricity into functional AI computing
- Network targets 400,000 GPUs installed at 1,000 modular sites worldwide
- Power-first deployment avoids delays caused by slow grid connection approvals
AI infrastructure is hitting a hard limit that has little to do with chips and a lot to do with power. New data centers are often ready to build, but wait years for permission to connect to already strained power grids.
That delay has created interest in building data centers where electricity is available rather than expanding the grid to reach them.
French AI infrastructure company Antimatter is deploying a network of 1,000 modular micro data centers located directly next to energy sources in the US, Europe and GCC regions.
1 GW of capacity secured through grid connection
These smaller facilities consume electricity that existing grid connections cannot deliver to customers, running AI workloads on site rather than waiting years for new transmission lines to be built.
Each unit fits inside container-style modules that house up to 400 GPUs and can be deployed in approximately five months.
Traditional hyperscale builds typically require more than two years before reaching similar readiness.
Wind, solar, hydroelectric and biogas facilities are prime targets because many already generate electricity that cannot always be delivered to customers when transmission capacity is limited.
Locating data centers next to those sites allows energy that would otherwise be curtailed to be used for processing.
Antimatter says it has secured more than 1 GW of capacity through grid connection agreements and reserved locations, and that more than 160 MW are already operating in Texas and Oregon.
Ten units across eight sites form the initial footprint, with hundreds more facilities in development.
The first major construction phase focuses on 100 deployments scheduled for 2027, supporting more than 40,000 GPUs and around 3.6 exaFLOPS of computing capacity.
Longer-term plans extend to 1,000 sites by the end of 2030, delivering more than 400,000 GPUs and approximately 36 exaFLOPS in dozens of countries.
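As a rough sanity check on those figures, the implied per-GPU throughput can be derived by dividing fleet-level exaFLOPS by GPU count. The sketch below is illustrative only; the function name is our own, and the article does not specify which precision (FP8, FP16, etc.) the exaFLOPS figures assume.

```python
def per_gpu_tflops(exaflops: float, gpus: int) -> float:
    """Convert a fleet-level exaFLOPS figure to TFLOPS per GPU."""
    return exaflops * 1e18 / gpus / 1e12

# 2027 phase: 3.6 exaFLOPS across 40,000 GPUs -> ~90 TFLOPS per GPU
phase_2027 = per_gpu_tflops(3.6, 40_000)

# 2030 target: 36 exaFLOPS across 400,000 GPUs -> same ~90 TFLOPS per GPU
phase_2030 = per_gpu_tflops(36.0, 400_000)

print(phase_2027, phase_2030)
```

Both milestones work out to the same per-GPU figure, which suggests the 2030 target is a straight 10x scale-up of the 2027 deployment rather than an assumed hardware refresh.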
“In the age of AI, intelligence is not the bottleneck, but energy,” said David Gurlé, co-founder and CEO of Antimatter.
“The infrastructure built for the first era of cloud and AI was designed around centralized scale. But the era of inference requires a different model: more distributed, faster to deploy, and sovereign by design. That’s the infrastructure Antimatter is building.”
Much of the demand comes from inference workloads, where trained models run constantly within co-pilots, automated services, and real-time decision systems.
Smaller distributed facilities linked through shared software allow those systems to function as a network while keeping processing physically closer to users.