- Nvidia discarded Socamm 1 after repeated failures to meet expectations
- Socamm 2 promises faster transfer speeds of up to 9,600 MT/s
- LPDDR6 adoption discussions signal scalability beyond the current Socamm 2 modules
Nvidia has abandoned its earlier effort to bring Socamm 1 to market after repeated technical problems and is now focusing entirely on Socamm 2, according to reports.
The first version was positioned as a low-power, high-capacity memory alternative for AI servers, but delays and design setbacks prevented it from gaining traction.
An industry source told ET News (originally in Korean), “Nvidia originally planned to introduce Socamm 1 within the year, but technical problems stalled the project twice, preventing any full-scale orders.”
A change in performance and design objectives
This restart means that Samsung Electronics, SK Hynix, and Micron all start from the same position with the second-generation design.
Socamm 2 keeps the removable module form factor with 694 I/O ports, but raises the transfer speed to 9,600 MT/s from the previous 8,533 MT/s.
In practice, this translates into system bandwidth rising from about 14.3 TB/s to approximately 16 TB/s on the Blackwell Ultra GB300 NVL72, a platform already tied to best-GPU discussions in data center contexts.
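That roughly 16 TB/s figure is consistent with simply scaling the reported 14.3 TB/s by the ratio of the two transfer rates; the short Python sketch below assumes system bandwidth scales linearly with the per-module data rate, with all other platform factors held constant.

```python
# Back-of-the-envelope check (assumption: system bandwidth scales linearly
# with per-module transfer rate, everything else held constant).
socamm1_rate_mts = 8533      # reported Socamm 1 transfer rate, MT/s
socamm2_rate_mts = 9600      # reported Socamm 2 transfer rate, MT/s
gb300_bw_tbs = 14.3          # reported GB300 NVL72 system bandwidth with Socamm 1, TB/s

scaled_bw = gb300_bw_tbs * (socamm2_rate_mts / socamm1_rate_mts)
print(f"Estimated Socamm 2 system bandwidth: {scaled_bw:.1f} TB/s")  # ~16.1 TB/s
```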
Nvidia's reliance on LPDDR5X continues for now, although discussions around LPDDR6 adoption suggest the format is being designed with long-term scalability in mind.
Even with these upgrades, the module is still claimed to consume less power than standard DRAM-based RDIMMs, a claim that will need validation under real server workloads.
The first generation of Socamm modules was manufactured only by Micron, creating a single point of dependency that raised questions about supply stability.
Socamm 2 expands the supplier base, with Samsung and SK Hynix preparing samples together with Micron.
This broader participation could make production more stable and pricing more competitive, although it remains uncertain how quickly mass production will begin.
Samsung and SK Hynix have indicated that they are “preparing for mass production of Socamm in the third quarter.”
However, industry estimates suggest that Socamm 2 will not be available in volume until the beginning of next year.
One of the most notable differences between the two generations lies in standardization.
Socamm 1 was developed outside Jedec, which limited its adoption to Nvidia platforms.
Socamm 2, however, could attract Jedec participation, which would make it easier for other companies to adopt similar modules in their systems.
If that happens, Socamm 2 could evolve into a new industry format, offering compact, high-bandwidth memory options beyond Nvidia's own ecosystem.
For creative professionals, such developments could ultimately influence what counts as the best GPU for video editing, especially where memory performance directly affects high-resolution image handling.
However, analysts remain cautious, noting that while this path holds promise, the timing of its arrival, particularly as LPDDR6 development accelerates, could dilute its long-term impact.
As the computing performance of AI semiconductors improves, demand for memory that can resolve data bottlenecks keeps rising.
Whether Socamm 2 becomes the solution to this problem or simply another option in a crowded field will depend on execution, standardization, and the speed of LPDDR6 deployment.