- HBM4 chips set to power Tesla’s advanced AI ambitions
- Tesla's Dojo supercomputer to integrate high-performance HBM4 chips
- Samsung and SK Hynix compete for Tesla AI memory chip orders
With the high-bandwidth memory (HBM) market projected to reach $33 billion by 2027, competition between Samsung and SK Hynix is intensifying.
Tesla is fanning the flames: the company has reportedly approached Samsung and SK Hynix, two of South Korea's largest memory chip makers, for samples of their next-generation HBM4 chips.
Now, a report from the Korea Economic Daily claims that Tesla plans to evaluate these samples for possible integration into its custom Dojo supercomputer, a critical system designed to power the company's artificial intelligence ambitions, including its autonomous vehicle technology.
Tesla’s ambitious plans in AI and HBM4
The Dojo supercomputer, powered by Tesla's proprietary D1 AI chip, trains the neural networks behind the company's Full Self-Driving (FSD) feature. The latest request suggests that Tesla is preparing to replace its older HBM2e chips with the more advanced HBM4, which offers significant improvements in speed, energy efficiency, and overall performance. The company is also expected to use HBM4 chips in its AI data centers and future autonomous vehicles.
Samsung and SK Hynix, long-time rivals in the memory chip market, are preparing prototype HBM4 chips for Tesla. These companies are also aggressively developing custom HBM4 solutions for major US technology companies such as Microsoft, Meta and Google.
According to industry sources, SK Hynix remains the current leader in the high-bandwidth memory (HBM) market, supplying HBM3e chips to NVIDIA and maintaining a significant market share. However, Samsung is quickly closing the gap, forming partnerships with companies like Taiwan Semiconductor Manufacturing Company (TSMC) to produce key components for its HBM4 chips.
SK Hynix appears to have pulled ahead with its HBM4 chip. The company claims its solution offers 1.4 times the bandwidth of HBM3e while consuming 30% less power. With bandwidth expected to exceed 1.65 terabytes per second (TB/s) and low power consumption, HBM4 chips deliver the performance and efficiency needed to train the massive AI models running on Tesla's Dojo supercomputer.
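Those two figures can be cross-checked with simple arithmetic: a 1.4x gain landing at 1.65 TB/s implies a baseline close to today's HBM3e stacks. The sketch below works through that check; the per-pin rate and 1024-bit bus width used for comparison are common HBM3e figures assumed here, not numbers from the report.

```python
# Back-of-the-envelope check of the reported HBM4 claims.
hbm4_bandwidth_tbps = 1.65   # per-stack bandwidth claimed for HBM4 (TB/s)
bandwidth_gain = 1.4         # SK Hynix's claimed gain over HBM3e

# If both claims hold, the implied HBM3e baseline is:
implied_hbm3e_tbps = hbm4_bandwidth_tbps / bandwidth_gain
print(f"Implied HBM3e bandwidth: {implied_hbm3e_tbps:.2f} TB/s")  # ~1.18 TB/s

# Assumed HBM3e configuration for comparison (not from the article):
# ~9.2 Gbps per pin across a 1024-bit interface.
assumed_hbm3e_tbps = 9.2 * 1024 / 8 / 1000
print(f"Assumed HBM3e bandwidth: {assumed_hbm3e_tbps:.2f} TB/s")  # ~1.18 TB/s
```

The two numbers line up, so the claimed 1.4x gain and the 1.65 TB/s figure are at least internally consistent.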
The new HBM4 chips are also expected to place a logic die at the base of the memory stack, functioning as a control unit for the memory dies above it. This base-die design enables faster data processing and better power efficiency, making HBM4 well suited to Tesla's AI-driven applications.
Both companies are expected to accelerate their HBM4 development timelines, with SK Hynix aiming to deliver the chips to customers by the end of 2025. Samsung, for its part, is boosting its production plans with its advanced 4-nanometer (nm) foundry process, which could help it gain a competitive advantage in the global HBM market.
Via TrendForce