
- Arm-based Neoverse CPUs will be able to communicate directly with Nvidia GPUs via NVLink Fusion
- NVLink Fusion moves data more efficiently than standard PCIe connections in AI-focused servers
- Hyperscalers such as Microsoft, Amazon, and Google can pair custom Arm CPUs with Nvidia GPUs
Nvidia has announced that Arm-based Neoverse CPUs will now be able to integrate with its NVLink Fusion technology.
This integration allows Arm licensees to design processors capable of communicating directly with Nvidia GPUs.
Previously, NVLink connections were largely limited to Nvidia’s own CPUs or to servers using Intel and AMD processors. With this change, hyperscalers like Microsoft, Amazon, and Google can pair custom Arm CPUs with Nvidia GPUs in their AI workstations and servers.
NVLink Expansion Beyond Proprietary CPUs
The development also allows Arm-based chips to move data more efficiently than they could over standard PCIe connections.
Arm confirmed that its custom Neoverse designs will include a protocol enabling seamless data transfer with Nvidia GPUs.
Arm licensees can create CPU SoCs that natively connect to Nvidia accelerators by integrating NVLink IP directly.
Customers who adopt these CPUs will be able to deploy systems that combine multiple GPUs with a single CPU for AI workloads.
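As a rough illustration of the kind of topology described above, where several Nvidia GPUs sit behind a single host CPU over a fast interconnect, the sketch below uses the standard CUDA runtime API to enumerate GPUs and test peer-to-peer access between them. This is a generic, hedged example of how software discovers interconnect-linked accelerators; it is not specific to NVLink Fusion or Arm Neoverse hosts, and only uses well-established CUDA runtime calls (cudaGetDeviceCount, cudaDeviceCanAccessPeer, cudaDeviceEnablePeerAccess).

```cpp
// Minimal sketch: enumerate GPUs and check peer-to-peer (P2P) access,
// the mechanism CUDA applications use to discover GPUs linked by a fast
// interconnect such as NVLink. Illustrative only; nothing here is
// specific to NVLink Fusion or to Arm Neoverse host CPUs.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Detected %d CUDA device(s)\n", deviceCount);

    // Check every ordered pair of GPUs for peer access capability.
    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   src, dst, canAccess ? "yes" : "no");
            if (canAccess) {
                // Enable direct GPU-to-GPU transfers for this pair.
                cudaSetDevice(src);
                cudaDeviceEnablePeerAccess(dst, 0);
            }
        }
    }
    return 0;
}
```

Assuming the CUDA toolkit is installed, a file like this could be built with `nvcc p2p_check.cu -o p2p_check` (the filename is illustrative); on GPUs linked by NVLink, the peer-access checks would typically report yes for connected pairs.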
The announcement was made at the Supercomputing ’25 conference and reflects collaboration between CPU and GPU developers.
Nvidia’s Grace Blackwell platform currently combines multiple GPUs with an Arm-based CPU, while other server configurations rely on Intel or AMD CPUs.
Microsoft, Amazon, and Google are deploying Arm-based CPUs to gain more control over their infrastructure and reduce operating costs.
Arm itself does not make CPUs; it licenses its instruction set architecture and sells ready-made core designs that speed up development of Arm-based processors.
NVLink Fusion support on Arm chips allows these processors to work with Nvidia GPUs without requiring an Nvidia CPU.
The expanded ecosystem also matters for sovereign AI projects, where governments or cloud providers may want Arm CPUs for control-plane tasks.
NVLink allows these systems to use Nvidia GPUs while maintaining custom CPU configurations.
SoftBank, which previously held shares in Nvidia, backs OpenAI’s Stargate project, which plans to use both Arm and Nvidia chips.
As a result, the NVLink Fusion integration provides options to pair Arm CPUs with market-leading GPU accelerators across multiple environments.
From a technical perspective, the NVLink expansion increases the number of CPUs that can be used in Nvidia-focused AI systems.
It also allows future Arm-based designs to compete directly with Nvidia’s Grace and Vera processors, as well as Intel Xeon CPUs, in configurations where GPUs are the primary computational units.
The development may reduce the attractiveness of alternative interconnects or competing AI accelerators, but chip development cycles could affect the timing of adoption.