- SK Hynix completes development of HBM4 and prepares the technology for mass production
- Nvidia's Rubin AI processors are expected to feature the new SK Hynix HBM4 chips
- HBM4 doubles I/O connections and improves energy efficiency by more than 40 percent
South Korean memory giant SK Hynix says it has completed development work on HBM4 and is now preparing the technology for mass production.
This puts it ahead of rivals Samsung and Micron, which are still developing their own versions.
The chips are expected to appear in Nvidia's next-generation Rubin AI processors and will play an important role in powering future artificial intelligence workloads.
Beyond the limitations of AI infrastructure
Joohwan Cho, Head of HBM Development at SK Hynix, said: “HBM4 development will be a new milestone for the industry. By providing the product that meets customer needs in performance, energy efficiency and reliability in a timely manner, the company will meet time to market and maintain a competitive position.”
High bandwidth memory, or HBM, stacks DRAM layers vertically to accelerate data transfer.
Demand for high-bandwidth memory has grown alongside AI workloads, and energy efficiency has become a pressing concern for data centers.
HBM4 doubles the number of I/O connections compared to the previous generation and improves energy efficiency by more than 40%.
The Korean memory maker says this translates into higher performance and lower energy use in data centers, with AI service performance gains of up to 69%.
HBM4 exceeds the JEDEC industry-standard operating speed of 8 Gbps, running at more than 10 Gbps.
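To put those figures together: doubling the I/O count and raising the per-pin speed multiply into per-stack bandwidth. A minimal back-of-the-envelope sketch, assuming HBM4's predecessor used a 1024-bit interface (so the doubled count is 2048 bits; the article itself does not give the interface width):

```python
def peak_bandwidth_tbps(io_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth of one stack in TB/s: pins x per-pin rate, bits -> bytes."""
    return io_bits * pin_speed_gbps / 8 / 1000  # /8 bits per byte, /1000 GB per TB

# Assumed 2048-bit interface (double a 1024-bit predecessor):
print(peak_bandwidth_tbps(2048, 8.0))   # JEDEC baseline 8 Gbps -> 2.048 TB/s
print(peak_bandwidth_tbps(2048, 10.0))  # SK Hynix's >10 Gbps -> 2.56 TB/s
```

On these assumptions, the speed headroom alone is worth roughly half a terabyte per second per stack over the JEDEC baseline.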
SK Hynix applied its advanced MR-MUF process to stack the chips, which improves heat dissipation and stability, along with its 1b-nm process technology to help minimize manufacturing risk.
Justin Kim, President and Head of AI Infra at SK Hynix, said: “We are introducing the world's first HBM4 mass production system. HBM4, a symbolic turning point beyond the limitations of AI infrastructure, will be a pivotal product for overcoming technological challenges. We will grow into a full-stack AI memory provider by supplying memory products of the best quality and diverse performance required for the AI era in a timely manner.”