- El Capitan is a classified US government system that processes data related to the US nuclear arsenal
- Patrick Kennedy of ServeTheHome was invited to the launch at LLNL in California
- The CEOs of AMD and HPE were also part of the ceremony
In November 2024, the AMD-powered El Capitan officially became the world’s fastest supercomputer, delivering 2.7 exaflops peak performance and 1.7 exaflops sustained performance.
Built by HPE for the National Nuclear Security Administration (NNSA) at Lawrence Livermore National Laboratory (LLNL) to run nuclear weapons simulations, it is powered by AMD Instinct MI300A APUs. It dethroned the previous leader, Frontier, pushing it into second place among the most powerful supercomputers in the world.
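As a back-of-the-envelope check on those figures, the sustained-to-peak ratio works out to roughly 63%. A minimal sketch, using the article's rounded numbers (the official HPL results are slightly higher-precision):

```python
# Rough efficiency check using the article's rounded figures:
# 2.7 exaflops peak, 1.7 exaflops sustained.
peak_eflops = 2.7
sustained_eflops = 1.7

# Fraction of theoretical peak actually delivered on the benchmark run.
efficiency = sustained_eflops / peak_eflops
print(f"Sustained/peak efficiency: {efficiency:.0%}")  # roughly 63%
```

Ratios in this range are typical for large HPL runs, where interconnect and memory overheads keep machines well below theoretical peak.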
Patrick Kennedy of ServeTheHome was recently invited to the launch event at LLNL in California, which also included the CEOs of AMD and HPE. He was allowed to bring his phone to capture “some shots before El Capitan arrives on his classified mission.”
It’s not the biggest
During the tour, Kennedy noted, “Each rack has 128 fully liquid-cooled compute blades. This system was very quiet, with more noise coming from storage and other systems on the floor.”
He added, “On the other side of the racks, we have the HPE Slingshot interconnect wired with DACs and optics.”
El Capitan’s Slingshot interconnect side is, as you’d expect, liquid-cooled, and the switch trays take up only the bottom half of the space. LLNL explained to Kennedy that its codes do not require a full fill, leaving the top half for the “Rabbit,” a liquid-cooled drive tray that houses 18 NVMe SSDs.
Looking inside the system, Kennedy saw “a CPU that looks like an AMD EPYC 7003 Milan part, which seems about right given the generation of the AMD MI300A. Unlike the APU, the Rabbit’s CPU had DIMMs and what looks like liquid-cooled DDR4 memory. Like standard blades, everything is liquid cooled, so there are no fans in the system.”
El Capitan is less than half the size of the xAI Colossus cluster as of September, when Elon Musk’s supercomputer was equipped with “just” 100,000 Nvidia H100 GPUs (plans are underway to expand it to a million). Still, Kennedy notes that “systems like this are still huge and done on a fraction of the budget of a 100,000+ GPU system.”