- OpenAI adds Google TPUs to reduce its dependence on NVIDIA GPUs
- TPU adoption highlights OpenAI's push to diversify its computing options
- Google Cloud wins OpenAI as a customer despite competitive dynamics
According to reports, OpenAI has begun using Google's Tensor Processing Units (TPUs) to power ChatGPT and other products.
A report from PakGazette, which cites a source familiar with the move, notes that this is the first significant shift away from NVIDIA hardware, which until now has formed the backbone of OpenAI's computing stack.
Google rents out TPUs through its cloud platform, adding OpenAI to a growing list of external customers that includes Apple, Anthropic, and Safe Superintelligence.
Not abandoning NVIDIA
While the rented chips are not Google's most advanced TPU models, the agreement reflects OpenAI's efforts to reduce inference costs and diversify beyond NVIDIA and Microsoft Azure.
The decision comes as inference workloads grow alongside use of ChatGPT, which now serves more than 100 million daily active users.
This demand accounts for a substantial share of OpenAI's estimated $40 billion annual computing budget.
Google's "Trillium" v6e TPUs are built for steady-state inference and offer high throughput at lower operating costs than high-end GPUs.
Although Google declined to comment and OpenAI did not immediately respond to PakGazette, the agreement suggests a deepening of OpenAI's infrastructure options.
OpenAI continues to depend on Microsoft-backed Azure for most of its deployment (Microsoft is, after all, the company's largest investor), but supply constraints and price pressures around GPUs have exposed the risks of relying on a single supplier.
Bringing Google into the mix not only improves OpenAI's ability to scale inference, but also aligns with a broader industry trend toward mixing hardware sources for flexibility and pricing leverage.
There is no suggestion that OpenAI is considering abandoning NVIDIA entirely, but adding Google TPUs gives it more control over cost and availability.
It remains to be seen how deeply OpenAI can integrate this hardware into its stack, especially given its long-standing dependence on the CUDA software ecosystem and NVIDIA tooling.
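One reason the CUDA lock-in concern is softer than it once was is that compiler-backed frameworks abstract the accelerator away. As a minimal illustration (not a description of OpenAI's actual stack), the JAX sketch below runs the same code unchanged on a TPU, GPU, or CPU; XLA compiles it for whichever backend is detected:

```python
# Minimal JAX sketch: identical code targets TPU, GPU, or CPU.
# No CUDA-specific calls appear anywhere; XLA handles the backend.
import jax
import jax.numpy as jnp


@jax.jit  # compiled by XLA for the detected accelerator
def matmul_relu(x, w):
    """A typical inference building block: matmul followed by ReLU."""
    return jnp.maximum(x @ w, 0.0)


x = jnp.ones((4, 8))
w = jnp.ones((8, 2))
out = matmul_relu(x, w)

print(jax.default_backend())  # "tpu", "gpu", or "cpu" depending on hardware
print(out.shape)              # (4, 8) @ (8, 2) -> (4, 2)
```

The portability comes from the compiler layer, not the model code, which is why renting a different vendor's chips need not mean rewriting the serving stack.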