- Microsoft’s Nebius deal gives it access to more than 100,000 Nvidia chips without building more infrastructure
- Neocloud providers like CoreWeave and Lambda now power Microsoft’s expanding AI backbone
- Hundreds of thousands of Nvidia GPUs will soon fill Microsoft’s Wisconsin site
Microsoft’s increasing dependence on third-party data center operators has entered a new phase following a $19.4 billion deal with Nebius.
Nebius is one of several “neocloud” providers that Microsoft has backed with a combined $33 billion in investments, and the deal gives Microsoft access to more than 100,000 of Nvidia’s newest GB300 chips.
Microsoft has made billions renting computing power to its customers and aims to increase that figure to justify its growing budget for AI data centers.
Compute Leasing to Drive AI Ambitions
The deal is part of Microsoft’s broader effort to boost its AI capabilities and expand the computing power behind its growing ecosystem of AI tools without having to commit all of its own infrastructure.
The arrangement shows how Microsoft is managing massive AI demand: renting capacity from neocloud operators while reserving its own facilities for paying customers.
The company’s internal data center infrastructure, already one of the largest in the world, has been positioned as a commercial service.
The deal with Nebius gives Microsoft temporary access to Nvidia’s latest GB300 NVL72 server racks, each containing 72 high-end B300 GPUs.
Estimates put the cost of a fully equipped rack at around $3 million, suggesting that Nebius’ share of the deal could exceed $4 billion in hardware alone.
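That $4 billion estimate can be sanity-checked with simple back-of-envelope arithmetic, assuming roughly 100,000 GPUs, 72 GPUs per NVL72 rack, and about $3 million per fully equipped rack (the figures cited above; actual deal terms are not public):

```python
import math

# Figures cited in the article (rough estimates, not disclosed deal terms)
total_gpus = 100_000        # GB300 chips Microsoft gains access to
gpus_per_rack = 72          # one GB300 NVL72 rack holds 72 B300 GPUs
cost_per_rack = 3_000_000   # estimated cost of a fully equipped rack, USD

racks_needed = math.ceil(total_gpus / gpus_per_rack)
hardware_cost = racks_needed * cost_per_rack

print(f"Racks needed:  {racks_needed:,}")           # → 1,389
print(f"Hardware cost: ${hardware_cost / 1e9:.2f}B") # → $4.17B
```

Roughly 1,389 racks at $3 million each comes to about $4.2 billion, consistent with the “could exceed $4 billion in hardware alone” estimate.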
For Microsoft, it’s a shortcut to vast computing resources without having to wait for its next installations to go live.
Microsoft’s partnerships with neocloud providers such as CoreWeave, Nscale, and Lambda show a shift toward distributing AI workloads across smaller, specialized computing networks.
These companies act as middlemen, renting their GPU pools to giants like OpenAI and now Microsoft.
Meanwhile, Microsoft is also investing heavily in its physical footprint.
The company’s upcoming 315-acre data center complex in Mount Pleasant, Wisconsin, is expected to house hundreds of thousands of Nvidia GPUs.
It will also include enough fiber optic cable to “circle the Earth 4.5 times.”
Designed with a self-sustaining energy supply, it signals an attempt to reduce dependence on external suppliers in the long term.
The rapid construction of GPU-powered data centers has already begun to strain local energy systems.
Wholesale energy prices near major AI facilities have reportedly risen 267% in five years, raising concerns among U.S. residents and regulators.
Environmental impacts are also drawing attention, with new projects in Tennessee and Wyoming linked to increased emissions and energy use.
Nvidia’s $100 billion investment in OpenAI has intensified questions about market concentration and antitrust risks.
Microsoft’s deep ties to both Nvidia and OpenAI now put it at the center of the same debate.
The quest for computational scale continues to blur the line between partnership and dominance in the AI ecosystem.
Via Tom's Hardware