- AI teams still favor Nvidia, but rivals like Google, AMD and Intel are gaining share
- The survey reveals that budget limits, energy demands and cloud dependence are shaping AI hardware decisions
- GPU scarcity pushes workloads to the cloud, while efficiency and testing continue to be overlooked
AI hardware spending is beginning to evolve as teams weigh performance, cost and scalability, new research claims.
Liquid Web's latest hardware study surveyed 252 AI professionals and found that, while Nvidia remains comfortably the most-used hardware supplier, its rivals are steadily gaining ground.
Almost a third of respondents reported using alternatives such as Google TPUs, AMD GPUs or Intel chips for at least some part of their workloads.
The pitfalls of skipping due diligence
The sample size is admittedly small, so it does not capture the full scale of global adoption, but the results show a clear shift in how teams are beginning to think about infrastructure.
A single team can deploy hundreds of GPUs, so even limited adoption of non-Nvidia options can make a big difference in the hardware footprint.
Nvidia is still preferred by more than two-thirds (68%) of the teams surveyed, and many buyers do not rigorously compare alternatives before deciding.
About 28% of respondents admitted skipping structured evaluations, and in some cases that lack of testing led to mismatched infrastructure and underpowered configurations.
“Our research shows that skipping due diligence leads to delayed or canceled initiatives, a costly mistake in a fast-moving industry,” said Ryan MacDonald, CTO of Liquid Web.
Familiarity and past experience are among the strongest drivers of GPU choice. Forty-three percent of participants cited these factors, compared with 35% who prioritized cost and 37% who looked at performance tests.
Budget constraints also weigh heavily, with 42% of teams scaling back projects and 14% canceling them entirely because of hardware shortages or costs.
Hybrid and cloud-based solutions are becoming standard. More than half of respondents said they use both on-premises and cloud systems, and many expect to increase cloud spending as the year progresses.
Dedicated hosted GPUs are seen by some as a way to avoid the performance losses that come with shared or fractionalized hardware.
Energy use remains a challenge. While 45% acknowledged efficiency as important, only 13% actively optimize for it. Many also reported setbacks with power, cooling and supply chains.
While Nvidia continues to dominate the market, it is clear that the competition is closing the gap. Teams are discovering that balancing cost, efficiency and reliability is almost as important as raw performance when building AI infrastructure.