- The GPU-style PCIe card delivers nearly 10 PFLOPs of FP4 compute and 2 GB of SRAM
- SRAM is typically used in small quantities as processor cache (levels L1 to L3)
- It also uses LPDDR5 instead of HBM, which is far more expensive.
Silicon Valley startup D-Matrix, which is backed by Microsoft, has developed a chiplet-based solution designed for fast, small-batch LLM inference in enterprise environments. Its architecture takes an all-digital in-memory computing approach, using modified SRAM cells for higher speed and energy efficiency.
Corsair, D-Matrix's current product, is described as the “first compute platform of its kind” and packs two D-Matrix ASICs on a full-height, full-length PCIe card, with four chiplets per ASIC. It reaches a total of 9.6 PFLOPs of FP4 compute with 2 GB of SRAM-based performance memory. Unlike traditional designs that depend on expensive HBM, Corsair uses LPDDR5 capacity memory, with up to 256 GB per card to handle larger models or batched inference workloads.
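To put those memory figures in context, a short back-of-the-envelope sketch (not D-Matrix tooling; the capacities are the card specs quoted above, and the byte arithmetic is a generic illustration) shows how many 4-bit model parameters each tier could hold:

```python
# Illustrative arithmetic only: how many FP4 (4-bit) parameters fit
# in Corsair's stated 2 GB SRAM and 256 GB LPDDR5 capacities.

def params_that_fit(capacity_bytes: int, bits_per_param: int = 4) -> float:
    """Number of parameters storable at the given precision."""
    return capacity_bytes * 8 / bits_per_param

SRAM_BYTES = 2 * 1024**3      # 2 GB on-card SRAM
LPDDR5_BYTES = 256 * 1024**3  # up to 256 GB LPDDR5 per card

# FP4 weights occupy half a byte each, so param count is 2x the byte count.
print(f"SRAM holds ~{params_that_fit(SRAM_BYTES) / 1e9:.1f}B FP4 params")
print(f"LPDDR5 holds ~{params_that_fit(LPDDR5_BYTES) / 1e9:.1f}B FP4 params")
```

By this rough count, the SRAM alone covers only a few billion weights, while the LPDDR5 tier leaves room for models in the hundreds of billions of parameters, which is consistent with the article's "larger models" framing.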
D-Matrix says Corsair delivers 10x better interactive performance, 3x better energy efficiency, and 3x better cost performance than GPU alternatives such as the enormously popular NVIDIA H100.
An act of faith
Sree Ganesan, D-Matrix product manager, told EE Times: “Current solutions mostly run into the memory wall with existing architectures. They have to add much more compute and consume much more power, which is an unsustainable path. Yes, we can do better with more compute FLOPs and more memory, but D-Matrix has focused on memory bandwidth and on innovating at the memory-compute barrier.”
The D-Matrix approach removes this bottleneck by performing computation directly inside memory.
“We have built a digital in-memory compute core where the multiply-accumulate happens in memory and a very high bandwidth can be exploited: we are talking about 150 terabytes per second,” Ganesan explained. “This, in combination with a series of other innovations, allows us to solve the challenge of the memory wall.”
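The memory-wall argument above can be made concrete with a hedged roofline-style estimate: in batch-1 LLM decoding, each generated token must stream essentially all model weights through the compute units, so throughput is bounded by memory bandwidth divided by model size. The 150 TB/s figure is Ganesan's stated in-memory bandwidth; the model size used here is a hypothetical example, not a D-Matrix benchmark.

```python
# Rough upper bound on decode throughput when weight reads dominate:
# tokens/s ceiling = memory bandwidth / bytes of weights read per token.

def max_tokens_per_sec(bandwidth_bytes_s: float, model_bytes: float) -> float:
    """Bandwidth-bound ceiling on tokens per second (weight streaming only)."""
    return bandwidth_bytes_s / model_bytes

IN_MEM_BW = 150e12          # 150 TB/s claimed in-memory bandwidth
MODEL_7B_FP4 = 7e9 * 0.5    # hypothetical 7B-param model at 4 bits = 3.5 GB

ceiling = max_tokens_per_sec(IN_MEM_BW, MODEL_7B_FP4)
print(f"~{ceiling:,.0f} tokens/s bandwidth ceiling")
```

The same formula explains why HBM-bound GPUs (a few TB/s) hit a far lower ceiling: raising bandwidth, not FLOPs, is what moves this bound.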
CEO Sid Sheth told EE Times the company was founded in 2019 after feedback from hyperscalers suggested that inference was the future. “It was an act of faith, because inference alone was not perceived as that big an opportunity in 2019,” he said. “Of course, all that changed after 2022 and ChatGPT. We also bet on transformer [networks] quite early in the company.”
Corsair will enter mass production in the second quarter of 2025, and D-Matrix is already planning its next-generation ASIC, Raptor, which will integrate 3D-stacked DRAM to support reasoning workloads and larger memory capacities.