- A new feature could, almost miraculously, cut data center energy consumption by up to 30%
- IRQ (interrupt request) suspension dynamically changes how much energy the CPU uses, and is handled through the operating system
- Hyperscalers are likely to be the big winners, and it will be interesting to see how AI factors in
According to reports, data centers account for between 2% and 4% of total worldwide electricity consumption, something hyperscalers are keen to reduce wherever possible.
Potential solutions include implementing next-generation architectures such as hyperconverged infrastructure (HCI) and using advanced cooling techniques.
Professor Martin Karsten of the Cheriton School of Computer Science at the University of Waterloo in Ontario, Canada, has a cheaper and easier solution. He says data center energy consumption could be cut by up to 30% simply by changing a few lines of Linux code.
Small change, great impact
Working with Joe Damato at Fastly, Professor Karsten developed a small, non-intrusive kernel change of just 30 lines of code that uses IRQ (interrupt request) suspension to reduce unnecessary CPU interrupts and improve the processing of Linux network traffic. The change has now been merged into the Linux kernel, shipping in version 6.13.
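For readers curious how an application actually opts in, the following is a minimal sketch, not the kernel patch itself. It assumes headers that expose the EPIOCSPARAMS ioctl and struct epoll_params (added in Linux 6.9), and a NIC whose NAPI instance has a nonzero irq-suspend-timeout configured (the per-NAPI setting added in 6.13). The values used here, 200 microseconds of busy polling and a 64-packet budget, are illustrative assumptions, not recommendations from the research.

```c
/*
 * Minimal sketch (not the kernel patch itself): put an epoll instance into
 * preferred busy-poll mode so that, on Linux 6.13+ with irq-suspend-timeout
 * configured on the NIC's NAPI instance, the device IRQ stays suspended
 * while the application keeps finding packets to process.
 *
 * Assumes headers that expose struct epoll_params and EPIOCSPARAMS
 * (Linux 6.9+ uapi / a recent glibc).
 */
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>   /* epoll_create1(), EPIOCSPARAMS, struct epoll_params */
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) {
        perror("epoll_create1");
        return 1;
    }

    /* Illustrative values only: busy-poll the NAPI context for up to 200 us
     * per epoll_wait() call, handle at most 64 packets per poll, and prefer
     * busy polling over interrupt-driven delivery. */
    struct epoll_params params;
    memset(&params, 0, sizeof(params));
    params.busy_poll_usecs  = 200;
    params.busy_poll_budget = 64;
    params.prefer_busy_poll = 1;

    if (ioctl(epfd, EPIOCSPARAMS, &params) < 0) {
        /* On kernels older than 6.9 this fails and the epoll loop simply
         * stays interrupt-driven. */
        perror("ioctl(EPIOCSPARAMS)");
    }

    /* ... add sockets with epoll_ctl() and run the usual epoll_wait()
     * event loop here ... */

    close(epfd);
    return 0;
}
```

Per the kernel's networking documentation, the companion setting, irq-suspend-timeout, is configured per NAPI instance through the netdev netlink API and acts as a safety bound on how long interrupts stay suspended if the application stops polling.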
This code change, which reportedly improves the efficiency of Linux networking and increases throughput by up to 45% without increasing latency, is based on a research paper titled “Kernel vs. User-Level Networking: Don't Throw Out the Stack with the Interrupts” that Professor Karsten wrote with his former student Peter Cai in 2023.
“We didn't add anything,” Professor Karsten said of the code change. “We just rearranged what is done when, which leads to much better use of the data center's CPU caches. It's like rearranging the pipeline in a manufacturing plant so that you don't have people running around all the time.”
The professor believes this small tweak could have a big impact. “All these big companies, Amazon, Google, Meta, use Linux in some capacity, but they are very picky about how they decide to use it. If they choose to adopt our method in their data centers, it could save gigawatt-hours of energy worldwide. Almost every service request that happens on the Internet could be positively affected by this.”
Aoife Foley, an IEEE Senior Member and professor at the School of Mechanical and Aerospace Engineering at Queen’s University Belfast, welcomes the potential savings, but notes that it will take far more than changing a few lines of code to address the broader energy challenges.
“There is still a long way to go,” she says. “These facilities place enormous demands on electricity, putting pressure on power grids and adding to the challenge of the energy transition, especially in smaller countries. Although it is impossible to calculate precisely, the ICT sector as a whole is estimated to account for roughly 1.4 percent of global CO₂ emissions.”
Yandex recently released an open-source tool called Perforator, which takes a similar approach to Professor Karsten’s research, helping companies optimize their code, reduce server load and, ultimately, cut energy and hardware costs.
Sergey Skvortsov, who leads the team behind Perforator, told us: “This latest research confirms what we have long believed: code optimization is one of the most effective ways to reduce data center energy consumption. Perforator helps companies identify and fix inefficient code, reducing CPU usage by up to 20% and cutting infrastructure costs without sacrificing performance. With data centers consuming up to 4% of global electricity, tools like Perforator can play a crucial role in making technology infrastructure more sustainable.”