- HBM is critical to the AI revolution, enabling ultra-fast data transfer close to the GPU.
- Scaling HBM performance is difficult within JEDEC's standard protocols.
- Marvell and leading memory makers are developing a custom HBM architecture to move past those limits.
Marvell Technology has introduced a custom HBM computing architecture designed to increase the efficiency and performance of XPUs, a key component in the rapidly evolving cloud infrastructure landscape.
The new architecture, developed in collaboration with memory giants Micron, Samsung and SK Hynix, aims to address limitations in traditional memory integration by offering customized solutions for the needs of next-generation data centers.
The architecture focuses on improving the way XPUs, used in advanced artificial intelligence and cloud computing systems, handle memory. By optimizing the interfaces between AI computing silicon dies and high-bandwidth memory stacks, Marvell claims the technology reduces power consumption by up to 70% compared to standard HBM implementations.
Moving away from JEDEC
Additionally, its redesign is said to reduce silicon space requirements by up to 25%, allowing cloud operators to expand computing capacity or include more memory. This could potentially allow XPUs to support up to 33% more HBM stacks, greatly increasing memory density.
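One way to read those two figures together: if the redesigned interface frees up 25% of the silicon area and that reclaimed area is spent on additional memory stacks, the headroom works out to roughly 1 / 0.75 ≈ 1.33×. A back-of-the-envelope sketch (our illustration of how the numbers could relate, not Marvell's published math):

```python
# Illustrative consistency check, not from Marvell's materials:
# assume the "up to 25%" area saving is entirely reallocated to HBM stacks.
area_saved = 0.25                 # claimed reduction in interface silicon area
extra_stacks = 1 / (1 - area_saved) - 1
print(f"{extra_stacks:.0%}")      # -> 33%, in line with "up to 33% more HBM stacks"
```

The arithmetic only shows that the two headline numbers are mutually consistent under that assumption; the actual stack count also depends on package, power, and signal-routing limits.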
“Major cloud data center operators have scaled with customized infrastructure,” said Will Chu, senior vice president and general manager of the Custom, Compute and Storage Group at Marvell.
“We are very grateful to be working with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era.”
HBM plays a central role in XPUs, which use advanced packaging technology to integrate memory and processing power. However, traditional architectures limit scalability and energy efficiency.
Marvell’s new approach modifies HBM’s own stack and its integration, with the goal of delivering better performance with less power and lower costs, key considerations for hyperscalers continually seeking to manage increasing power demands in data centers.
ServeTheHome’s Patrick Kennedy, who reported the news live from Marvell Analyst Day 2024, noted that cHBM (custom HBM) is not a JEDEC solution and therefore will not be a standard HBM part available on the open market.
“Moving memory away from JEDEC standards and toward customization for hyperscalers is a monumental move in the industry,” he writes. “This shows that Marvell has big wins in hyperscale XPU, as this type of customization in memory space does not happen on small orders.”
The collaboration with leading memory manufacturers reflects a broader industry trend toward highly customized hardware.
“Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era,” said Raj Narasimhan, senior vice president and general manager of Micron’s Compute and Networking Business Unit.
“Strategic collaborations focused on power efficiency, like the one we have with Marvell, will build on Micron’s industry-leading HBM power specifications and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance needed to scale AI.”