- Nvidia integrates Samsung Foundry to expand NVLink Fusion for custom AI silicon
- NVLink Fusion allows CPUs, GPUs and accelerators to communicate seamlessly
- Intel and Fujitsu can now build CPUs that connect directly to Nvidia GPUs
Nvidia is stepping up its efforts to become indispensable in the AI landscape by expanding its NVLink Fusion ecosystem.
Following a recent collaboration with Intel, which allows x86 CPUs to connect directly to Nvidia platforms, the company has enlisted Samsung Foundry to help design and manufacture custom CPUs and XPUs.
The move, announced during the 2025 Open Compute Project (OCP) Global Summit in San Jose, shows Nvidia’s ambition to extend its control to the entire AI computing hardware stack.
Integrating new players into NVLink Fusion
Ian Buck, vice president of HPC and hyperscale at Nvidia, explained that NVLink Fusion is an IP and chiplet solution designed to seamlessly integrate CPUs, GPUs and accelerators into the MGX and OCP infrastructure.
It enables direct, high-speed communication between processors within rack-scale systems, with the goal of eliminating traditional performance bottlenecks between computing components.
During the summit, Nvidia revealed several ecosystem partners, including Intel and Fujitsu, both now capable of building CPUs that communicate directly with Nvidia GPUs via NVLink Fusion.
Samsung Foundry joins this list, offering complete expertise from design to manufacturing of custom silicon, an addition that strengthens Nvidia’s reach in semiconductor manufacturing.
The collaboration between Nvidia and Samsung reflects a growing shift in the AI hardware market.
As AI workloads expand and competition intensifies, Nvidia’s custom CPU and XPU designs aim to ensure its technologies remain critical for next-generation data centers.
According to TechPowerUp, Nvidia’s strategy comes with strict restrictions.
Custom chips developed under NVLink Fusion must be connected to Nvidia products, and Nvidia retains control over communication drivers, PHY layers, and NVLink Switch licenses.
This gives Nvidia considerable influence in the ecosystem, although it also raises concerns about openness and interoperability.
This increasingly tight integration comes as rivals such as OpenAI, Google, AWS, Meta and Broadcom develop in-house chips to reduce dependence on Nvidia hardware.
Nvidia is weaving itself deeper into the fabric of AI infrastructure by making its technologies inevitable rather than optional.
Through NVLink Fusion and the addition of Samsung Foundry to its custom silicon ecosystem, the company is expanding its influence across the entire hardware stack, from chips to data center architectures.
This reflects a broader trend among its competitors. Broadcom is diving deeper into AI with custom accelerators for hyperscalers.
OpenAI is also reported to be designing its own in-house chips to reduce reliance on Nvidia GPUs.
Together, these developments mark a new phase of AI hardware competition, where control over the full path from silicon to software determines who leads the industry.
Nvidia’s partnership with Samsung appears to be aimed at countering this by accelerating the release of custom solutions that can be quickly deployed at scale.
By incorporating its intellectual property into broader infrastructure designs, Nvidia positions itself as an essential part of modern AI factories, rather than simply a GPU supplier.
Despite Nvidia’s contributions to the OCP open hardware initiative, its NVLink Fusion ecosystem maintains strict limits that favor its architecture.
While this can ensure performance benefits and ecosystem consistency, it could also increase concerns about vendor lock-in.