- Nvidia is pushing SSD manufacturers toward roughly 100 million storage IOPS
- But Silicon Motion's CEO says the industry lacks the memory technology to meet AI's demands
- An entirely new memory type may be needed to unlock ultra-fast storage
As GPUs grow faster and memory bandwidth scales into the terabytes per second, storage has become the next major bottleneck in AI computing.
Nvidia is looking to boost storage performance to match the demands of AI models, setting an ambitious target for random block reads.
“At this time, they are pointing to 100 million IOPS, which is huge,” Wallace C. Kuo, CEO of Silicon Motion, told Tom's Hardware.
Today's fastest PCIe 5.0 SSDs reach around 14.5 GB/s and 2 to 3 million IOPS in workloads involving 4K and 512-byte random reads.
While larger blocks favor bandwidth, AI inference generally pulls small bits of scattered data. That makes 512B random reads more relevant, and much harder to accelerate.
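The scale of the gap is easy to see with some back-of-the-envelope arithmetic (the figures below use the IOPS and block sizes quoted above; the helper function is purely illustrative):

```python
def effective_bandwidth_gbs(iops: float, block_bytes: int) -> float:
    """Bandwidth (GB/s) implied by a given IOPS rate at a given block size."""
    return iops * block_bytes / 1e9

# A top PCIe 5.0 SSD today: ~3 million IOPS at 4 KiB random reads
print(effective_bandwidth_gbs(3e6, 4096))   # ~12.3 GB/s, near the drive's sequential limit

# Nvidia's reported target: 100 million IOPS at 512-byte random reads
print(effective_bandwidth_gbs(100e6, 512))  # ~51.2 GB/s delivered as tiny scattered reads
```

In raw bytes the target is only about 4x today's drives, but sustaining it at 512B granularity means servicing roughly 30 to 50 times more individual requests per second, which is why controllers and NAND, not the bus, are the limiting factor.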
Kioxia is already preparing an “AI SSD” based on XL-Flash that is expected to exceed 10 million IOPS. It could launch alongside Nvidia's next-generation Vera Rubin platform next year. But scaling beyond that may require more than faster controllers or NAND tweaks.
“I think they are looking for a change in the media,” Kuo said. “Optane was supposed to be the ideal solution, but it has already left the market. Kioxia is trying to bring XL-NAND and improve its performance. SanDisk is trying to introduce High Bandwidth Flash, but honestly, I don't really believe in it.”
Power, cost, and latency all pose challenges. “The industry really needs something fundamentally new,” Kuo added. “Otherwise, it will be very difficult to reach 100 million IOPS and remain profitable.”
Micron, SanDisk, and others are racing to invent new forms of non-volatile memory.
Whether any of them will arrive in time for Nvidia's next hardware wave is the big unknown.