- Meta’s ORW design sets new direction for data center openness
- AMD pushes to open silicon to the rack, although industry neutrality remains uncertain
- Liquid cooling and standards-based Ethernet underscore the system’s focus on serviceability
At the recent Open Compute Project (OCP) 2025 Global Summit in San Jose, AMD introduced its “Helios” rack-scale platform, built on Meta’s newly contributed Open Rack Wide (ORW) standard.
The design was described as a double-width open frame that aims to improve the energy efficiency, cooling and serviceability of artificial intelligence systems.
AMD positions “Helios” as an important step toward open, interoperable data center infrastructure, but it remains to be seen to what extent this openness translates into practical industry-wide adoption.
Meta contributed the ORW specification to the OCP community and described it as the foundation for large-scale AI data centers.
The new form factor was developed to address the growing demand for standardized hardware architectures.
AMD’s “Helios” appears to serve as a test case for this concept, combining Meta’s open rack principles with AMD’s own hardware.
This collaboration signals a move away from proprietary systems, although dependence on major players like Meta raises questions about the neutrality and accessibility of the standard.
The “Helios” system is powered by AMD Instinct MI450 GPUs based on the CDNA architecture, along with EPYC CPUs and Pensando networking.
Each MI450 reportedly offers up to 432 GB of high-bandwidth memory and 19.6 TB/s of memory bandwidth, providing capacity for data-intensive AI workloads.
At full scale, a “Helios” rack equipped with 72 of these GPUs is expected to achieve 1.4 exaFLOPS at FP8 and 2.9 exaFLOPS at FP4 precision, backed by 31 TB of HBM4 memory and 1.4 PB/s of total bandwidth.
It also provides up to 260 TB/s of internal interconnect performance and 43 TB/s of Ethernet-based scaling capability.
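The rack-level figures quoted by AMD can be cross-checked against the per-GPU specifications with back-of-envelope arithmetic. The sketch below assumes decimal (SI) units, as vendors typically use for capacity and bandwidth figures; the exact rounding in AMD's marketing numbers is an assumption:

```python
# Cross-check: do the quoted per-rack totals follow from the
# per-GPU MI450 specs? (72 GPUs per "Helios" rack.)

GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 432      # HBM4 capacity per MI450, per AMD
BW_PER_GPU_TBS = 19.6     # memory bandwidth per MI450, per AMD

rack_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000   # GB -> TB
rack_bw_pbs = GPUS_PER_RACK * BW_PER_GPU_TBS / 1000   # TB/s -> PB/s

print(f"Rack HBM capacity:     {rack_hbm_tb:.1f} TB")    # ~31 TB, as quoted
print(f"Rack memory bandwidth: {rack_bw_pbs:.2f} PB/s")  # ~1.4 PB/s, as quoted
```

Both totals land on the announced figures (31.1 TB rounds to the quoted 31 TB of HBM4, and 1.41 PB/s to the quoted 1.4 PB/s), suggesting the rack numbers are straight multiples of the per-GPU specs rather than independently measured values.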
AMD estimates up to 17.9 times the performance of its previous generation and approximately 50% more memory capacity and bandwidth than Nvidia’s Vera Rubin system.
AMD claims this represents a leap in AI training and inference capability, but these are engineering estimates and not field results.
The open rack design incorporates OCP DC-MHS, UALink and Ultra Ethernet Consortium frameworks, allowing for both scalable and expandable deployment.
It also includes liquid cooling and standards-based Ethernet for added reliability.
The “Helios” design extends the company’s push for openness from the chip level to the rack level, and Oracle’s initial commitment to deploy 50,000 AMD GPUs suggests commercial interest.
However, broad ecosystem support will determine whether “Helios” becomes a shared industry standard or remains another branded interpretation of open infrastructure.
As AMD targets 2026 for volume launch, the extent to which competitors and partners adopt ORW will reveal whether openness in AI hardware can move beyond concept to genuine practice.
“Open collaboration is key to scaling AI efficiently,” said Forrest Norrod, executive vice president and general manager of AMD’s Data Center Solutions Group.
“With ‘Helios’, we are turning open standards into real deployable systems, combining AMD Instinct GPUs, EPYC CPUs and open frameworks to provide the industry with a flexible, high-performance platform designed for the next generation of AI workloads.”