- Microsoft’s Maia 200 chip is designed for inference-heavy AI workloads
- The company will continue buying Nvidia and AMD chips despite launching its own hardware
- Supply constraints and high demand make advanced computing a scarce resource
Microsoft has begun deploying its first internally designed AI chip, Maia 200, inside select data centers, a step in its long-term effort to control more of its infrastructure.
Despite this move, Microsoft’s CEO has made it clear that the company has no intention of moving away from third-party chip makers.
Satya Nadella recently stated that Nvidia and AMD will remain part of Microsoft’s purchasing strategy even after Maia 200 enters production.
Microsoft’s AI chip is designed to complement, not replace, third-party options
“We have a great partnership with Nvidia, with AMD. They are innovating. We are innovating,” Nadella said.
“I think a lot of people just talk about who’s ahead. Just remember, you have to be ahead forever. Just because we can integrate vertically doesn’t mean we only integrate vertically.”
Maia 200 is an inference-focused processor that Microsoft describes as built specifically to run large AI models efficiently rather than training them from scratch.
The chip is intended to handle sustained workloads that depend heavily on memory bandwidth and on rapid data movement between compute units and SSD-backed storage.
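To see why memory bandwidth, rather than raw compute, tends to cap inference performance, consider a rough back-of-envelope calculation. Every figure below is an illustrative assumption, not a published Maia 200 specification:

```python
# Back-of-envelope: why inference throughput is often memory-bandwidth-bound.
# All numbers here are assumptions for illustration, not Maia 200 specs.

model_params = 70e9      # parameters in a hypothetical large model
bytes_per_param = 1      # 8-bit quantized weights (assumed)
memory_bandwidth = 3e12  # bytes/second of on-package memory bandwidth (assumed)

# During single-stream decoding, generating each token requires streaming
# roughly all of the model's weights through the compute units once, so
# memory bandwidth, not raw FLOPS, sets the throughput ceiling.
weight_bytes = model_params * bytes_per_param
tokens_per_second = memory_bandwidth / weight_bytes

print(f"~{tokens_per_second:.0f} tokens/sec upper bound per accelerator")
# ~43 tokens/sec. Batching and caching change the picture, but this
# bandwidth ceiling is why inference-focused chips prioritize memory systems.
```

The exact figures vary by model and chip, but the ratio is the point: a chip that doubles memory bandwidth can roughly double single-stream inference throughput without adding any compute.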
Microsoft has shared performance comparisons that claim advantages over rival in-house chips from other cloud providers, although independent validation remains limited.
According to Microsoft leadership, its Superintelligence team will receive first access to Maia 200 hardware.
This group, led by Mustafa Suleyman, develops Microsoft’s most advanced internal models.
While Maia 200 will also support OpenAI workloads running on Azure, internal compute demand remains intense.
Suleyman has said publicly that even within Microsoft, access to the latest hardware is treated as a scarce resource, a scarcity that helps explain why the company continues to rely on third-party suppliers.
Training and running large-scale models requires enormous compute density, persistent memory performance, and reliable scaling across data centers.
Currently, no single chip design satisfies all of these requirements under real-world conditions, so Microsoft continues to diversify its hardware sources rather than go all-in on one architecture.
Nvidia’s supply constraints, rising costs, and long delivery times have pushed companies toward in-house chip development.
These efforts have not eliminated dependence on external suppliers. Instead, they add another layer to an already complex hardware ecosystem.
AI tools running at scale quickly expose weaknesses, whether in memory management, thermal limits, or interconnect bottlenecks.
Owning part of the hardware roadmap gives Microsoft more flexibility, but it doesn’t eliminate structural limitations that affect the entire industry.
In simple terms, the custom chip is designed to relieve supply pressure rather than eliminate it, especially as computing demand continues to grow faster than supply.
Via TechCrunch