- AMD aims for MI500 launch in 2027 while Nvidia prepares Vera-Rubin a year earlier
- CES 2026 shows growing gap between AMD and Nvidia AI accelerator schedules
- AMD expands AI portfolio while its next-gen hardware remains a year away
At CES 2026, AMD discussed its short- and long-term AI hardware plans, including a preview of the Instinct MI500 Series accelerators expected to arrive in 2027.
The company used the show to present an early look at Helios, a rack-scale platform built around Instinct MI455X GPUs and EPYC Venice CPUs. Helios is positioned as a reference design for very large-scale AI infrastructure rather than a shipping product.
AMD also introduced the Instinct MI440X, a new accelerator aimed at on-premise enterprise deployments, designed to fit into existing eight-GPU systems for training, tuning, and inference workloads.
Nvidia Vera-Rubin also on the way
However, what is most interesting to many industry observers is what comes next. AMD said the Instinct MI500 series is scheduled to launch in 2027 and will offer a significant jump in AI performance compared to the MI300X generation.
The MI500 is expected to use AMD’s CDNA 6 architecture, a 2nm process, and HBM4E memory.
AMD claims the design is on track to deliver up to a 1,000x increase in AI performance over the MI300X, although, with the launch still some way off, no detailed benchmarks were shared.
As exciting as it is, the timing is awkward for AMD, because Nvidia is preparing to launch its Vera-Rubin platform this year.
At CES 2026, Nvidia also detailed its replacement for the Grace-Blackwell rack-scale designs: the Vera-Rubin platform is built from six new chips designed to function as a single rack-scale system.
These include the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch.
In its NVL72 configuration, the system combines 72 Rubin GPUs and 36 Vera CPUs connected via NVSwitch and NVLink to operate as a shared memory system.
Nvidia says Vera-Rubin NVL72 systems reduce the inference cost per token for mixture-of-experts models by 10x and reduce the number of GPUs needed for training by 4x.
Rubin GPUs use eight HBM4 memory stacks and include a new Transformer Engine with hardware-compatible adaptive compression, which aims to improve efficiency during inference and training without impacting model accuracy.
Rubin-based systems will be available through partners in the second half of 2026, including rack-scale NVL72 systems and smaller HGX NVL8 configurations, with planned deployments at cloud providers, AI infrastructure operators, and system vendors.
By the time AMD’s Instinct MI500 series arrives in 2027, Nvidia’s Vera-Rubin platform is expected to be available through partners and in use at scale.
TechRadar will cover this year's CES extensively and will bring you all the important announcements as they happen. Go to our CES 2026 News page for the latest stories and our hands-on verdicts on everything from wireless TVs and foldable screens to new phones, laptops, smart home devices and the latest in artificial intelligence. You can also ask us a question about the show on our CES 2026 Live Q&A and we will do our best to answer it.
And don’t forget to follow us on TikTok and WhatsApp for the latest from CES!