- StorageReview’s physical server computed 314 trillion digits of pi without distributed cloud infrastructure
- The entire calculation ran continuously for 110 days without interruption
- Power usage was drastically lower than in previous cluster-based pi records
A new benchmark in large-scale numerical computing has been set with the calculation of 314 trillion digits of pi in a single local system.
The run was carried out by StorageReview, surpassing earlier cloud-based efforts, including Google Cloud’s 100 trillion digit calculation completed in 2022.
Unlike hyperscale approaches that relied on massive distributed resources, this record was achieved on a single physical server with a tightly controlled hardware and software configuration.
Runtime and system stability
The calculation was performed continuously for 110 days, which is significantly shorter than the roughly 225 days required by the previous large-scale record, even though that earlier effort produced fewer digits.
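For a sense of throughput, the headline figures reduce to simple arithmetic. The sketch below is purely illustrative and assumes the roughly 225-day run refers to the 300 trillion digit cluster record mentioned later in this piece.

```python
# Rough throughput check based on figures quoted in the article.
# Assumption: the ~225-day prior run is the 300-trillion-digit record
# referenced further down; treat the result as an estimate only.

current_digits = 314e12          # digits computed in this run
current_days = 110               # reported runtime

prior_digits = 300e12            # assumed size of the ~225-day record
prior_days = 225

current_rate = current_digits / current_days
prior_rate = prior_digits / prior_days

print(f"This run:  {current_rate / 1e12:.2f} trillion digits/day")
print(f"Prior run: {prior_rate / 1e12:.2f} trillion digits/day")
print(f"Speedup:   {current_rate / prior_rate:.1f}x")
```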
The uninterrupted execution was attributed to the stability of the operating system and limited background activity.
It also depended on a balanced NUMA topology and careful tuning of memory and storage to match the behavior of the y-cruncher application.
The workload was treated less like a demonstration and more like an extended stress test of production systems.
At the center of the effort was a Dell PowerEdge R7725 system equipped with two AMD EPYC 9965 processors, providing 384 CPU cores, along with 1.5TB of DDR5 memory.
Storage consisted of forty 61.44TB Micron 6550 ION NVMe drives, delivering roughly 2.5PB of raw capacity.
Thirty-four of those drives were mapped to y-cruncher’s scratch space in a JBOD layout, while the remaining six formed a software RAID volume to protect the final result.
This setup prioritized performance and energy efficiency over data resiliency during the computation.
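As a quick sanity check on the storage figures, the capacity split described above works out as follows. The drive count and per-drive capacity are taken from the report; the rest is arithmetic.

```python
# Capacity arithmetic for the reported drive layout (illustrative only).
drive_capacity_tb = 61.44        # per-drive capacity, decimal terabytes
total_drives = 40
scratch_drives = 34              # JBOD scratch space for y-cruncher
raid_drives = total_drives - scratch_drives  # software RAID for the result

raw_pb = total_drives * drive_capacity_tb / 1000
scratch_pb = scratch_drives * drive_capacity_tb / 1000

print(f"Raw capacity:     {raw_pb:.2f} PB across {total_drives} drives")
print(f"Scratch capacity: {scratch_pb:.2f} PB across {scratch_drives} drives")
print(f"RAID-protected:   {raid_drives} drives for the final output")
```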
The numerical workload generated substantial disk activity, including approximately 132 PB of logical reads and 112 PB of logical writes over the course of the run.
The maximum logical disk usage reached approximately 1.43 PiB, while the largest checkpoint exceeded 774 TiB.
SSD wear metrics showed approximately 7.3 PB written per drive, for a total of approximately 249 PB written across the swap devices.
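The wear figures can be cross-checked the same way, assuming the roughly 7.3 PB per-drive figure applies to the 34 swap drives.

```python
# Cross-check of the reported SSD wear figures (illustrative only).
writes_per_drive_pb = 7.3        # reported writes per drive
swap_drives = 34                 # drives assigned to y-cruncher scratch space

total_swap_writes_pb = writes_per_drive_pb * swap_drives
print(f"Total writes across swap devices: ~{total_swap_writes_pb:.0f} PB")
# ~248 PB, consistent with the ~249 PB total quoted above
```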
Internal benchmarks showed that sequential read and write performance more than doubled compared to the previous 202 trillion digit platform.
For this configuration, power consumption was reported at around 1,600 watts, with total energy usage of approximately 4,305 kWh, or 13.70 kWh per trillion calculated digits.
This figure is much lower than estimates from the previous 300-trillion-digit, cluster-based record, which reportedly consumed more than 33,000 kWh.
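The efficiency comparison comes down to straightforward arithmetic on the quoted numbers; the sketch below is illustrative only.

```python
# Energy-per-digit arithmetic for the two records (illustrative only).
this_run_kwh = 4305
this_run_digits_trillion = 314

prior_run_kwh = 33000            # reported lower bound for the cluster record
prior_run_digits_trillion = 300

print(f"This run:  {this_run_kwh / this_run_digits_trillion:.2f} kWh per trillion digits")
print(f"Prior run: {prior_run_kwh / prior_run_digits_trillion:.2f} kWh per trillion digits (at least)")

# Sanity check: 4,305 kWh over 110 days implies an average draw of about
# 1.6 kW, in line with the ~1,600 W figure reported for the system.
avg_power_kw = this_run_kwh / (110 * 24)
print(f"Implied average draw: {avg_power_kw:.2f} kW")
```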
The result suggests that, for certain workloads, carefully tuned servers and workstations can outperform cloud infrastructure in efficiency.
However, that assessment applies strictly to this class of computing and does not automatically extend to all scientific or commercial use cases.