- Arm enters silicon production with a CPU designed for large-scale AI workloads
- New AGI CPU doubles rack performance compared to traditional x86 systems
- Meta and OpenAI adopt Arm chip for next-generation infrastructure
Arm has moved into production silicon for the first time, introducing what it calls the “next evolution of the Arm computing platform”: the AGI CPU.
The company says the CPU is designed specifically for AI data centers and supports agentic AI workloads, meaning continuously running agents capable of reasoning, planning, and acting.
The processor features up to 136 Neoverse V3 cores per CPU, with 6GB/s of memory bandwidth per core and sub-100ns latency, enabling higher workload density and better system efficiency.
Performance and capacity
The Arm AGI CPU promises deterministic performance under sustained load with a TDP of 300 watts and one dedicated core per program thread.
The processor supports air-cooled 1U server chassis with up to 8,160 cores per rack and liquid-cooled deployments reaching up to 45,000 cores per rack.
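The quoted per-core and per-rack figures can be cross-checked with a little arithmetic. The snippet below uses only the numbers stated above; the per-rack CPU counts it derives are an inference from those figures, not something Arm has stated directly.

```python
# Sanity check on the quoted density figures.
# Inputs are the numbers from the article; derived CPU-per-rack
# counts are inferred, not stated by Arm.
CORES_PER_CPU = 136          # Neoverse V3 cores per AGI CPU
AIR_COOLED_RACK = 8_160      # cores per rack, air-cooled 1U chassis
LIQUID_COOLED_RACK = 45_000  # cores per rack, liquid-cooled

# 8,160 / 136 divides evenly: 60 CPUs per air-cooled rack.
print(AIR_COOLED_RACK // CORES_PER_CPU)   # 60

# 45,000 / 136 is not a whole number (~330.9), which suggests the
# liquid-cooled figure is a rounded marketing number.
print(LIQUID_COOLED_RACK / CORES_PER_CPU)
```

The clean 60-CPU result for the air-cooled configuration suggests those two figures were derived from each other, while the liquid-cooled total appears to be rounded.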
Compared to x86 CPUs, the Arm AGI CPU can provide more than twice the performance per rack, supporting larger AI workloads while remaining power efficient.
These capabilities aim to improve compute density, accelerator utilization, and overall infrastructure efficiency.
Meta is the lead partner and co-developer of the Arm AGI CPU, integrating it with its Meta Training and Inference Accelerator (MTIA) to optimize data center performance.
Early commercial adoption also includes companies such as OpenAI, Cerebras, Cloudflare, Positron, Rebellions, SAP, and SK Telecom.
Arm is collaborating with OEMs and ODMs such as Lenovo, Supermicro, Quanta Computer, and ASRock Rack to deliver initial systems, with broader availability expected in the second half of 2026.
More than 50 industry leaders across hyperscale, cloud, semiconductor, memory, networking, software and system design sectors support the CPU launch.
“Over the last decade, we’ve partnered closely with Arm to build Graviton here at AWS, and it’s been a remarkable success: most of the compute capacity AWS added to our fleet in 2025 was powered by Graviton,” said James Hamilton, senior vice president and distinguished engineer at Amazon.
“This collaboration has been excellent for both companies and Graviton continues to offer better price/performance ratio to our customers.”
Industry partners also pointed out the broader infrastructure implications of the new CPU.
“The new Arm AGI CPU will further unlock the Arm ecosystem for a wide range of customers, creating new opportunities for everyone…” said Charlie Kawwas, president of Semiconductor Solutions Group, Broadcom Inc.
“As Broadcom builds the world’s most capable hyperscaler XPU and networking solutions…our partnership with Arm has allowed us to move forward with unmatched intent and speed.”
The Arm AGI CPU is intended to serve as the foundation for agentic AI workloads, allowing organizations to deploy AI tools at scale while maintaining high efficiency.
The processor supports large-scale deployment of AI applications, including accelerator management, control plane processing, and hosting cloud- or enterprise-based tasks and APIs.
That said, the success of the Arm AGI CPU will depend on data center adoption, integration with existing accelerators and memory, and proven performance improvements over alternatives.