
- New partnership gives OpenAI access to hundreds of thousands of Nvidia GPUs on AWS
- AWS to bundle GB200 and GB300 GPUs for low-latency AI performance
- The agreement lets OpenAI keep expanding its compute capacity through 2027
The AI industry is advancing faster than any other technology in history, and its demand for computing power is immense.
To meet this demand, OpenAI and Amazon Web Services (AWS) have entered into a multi-year partnership that could reshape the way AI tools are built and deployed.
The collaboration, valued at $38 billion, gives OpenAI access to AWS’s vast infrastructure to run and scale its most advanced AI workloads.
Building a foundation for massive computing power
The deal gives OpenAI immediate access to AWS compute systems powered by Nvidia GPUs and Amazon EC2 UltraServers.
These systems are designed to deliver high performance and low latency for demanding AI operations, including ChatGPT model training and inference.
“Scaling frontier AI requires massive, reliable computing,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad computing ecosystem that will power this next era and bring advanced AI to everyone.”
AWS says the new architecture will cluster GPUs such as Nvidia's GB200 and GB300 within interconnected systems, allowing workloads to run efficiently and with low latency across the cluster.
The infrastructure is expected to be fully deployed before the end of 2026, with room to expand further into 2027.
“As OpenAI continues to push the boundaries of what’s possible, AWS’ best-in-class infrastructure will serve as the backbone for its AI ambitions,” said Matt Garman, CEO of AWS. “The breadth and immediate availability of optimized computing shows why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
AWS infrastructure, already known for its scalability in cloud hosting and web hosting, is expected to play a central role in the success of the partnership.
Data centers handling OpenAI workloads will use tightly connected clusters capable of managing hundreds of thousands of processing units.
Everyday users should notice faster, more responsive AI tools as a more robust infrastructure comes online behind ChatGPT and similar services.
Developers and businesses could gain easier and more direct access to OpenAI models through AWS, making it easier to integrate AI into applications and data systems.
However, scaling to tens of millions of CPUs raises both technical possibilities and logistical questions about cost, sustainability, and long-term efficiency.
This rapid expansion of computing resources could drive up energy usage and the cost of maintaining such vast systems.
Additionally, concentrating AI development on major cloud providers could increase concerns about dependency, control, and reduced competition.
OpenAI and AWS have been working together for some time. Earlier this year, OpenAI made its open-weight models available through Amazon Bedrock, allowing AWS users to integrate them into their existing systems.
The availability of these models on a major cloud hosting platform meant that more developers could experiment with generative AI tools for data analysis, coding, and automation.
Companies like Peloton, Thomson Reuters, and Verana Health are already using OpenAI models within the AWS environment to improve their business workflows.
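For developers, that Bedrock route amounts to a single API call. The sketch below, using the AWS SDK for Python (boto3) and Bedrock's Converse API, shows roughly what such an integration might look like; the model ID and the prompt are placeholder assumptions, since exact model identifiers vary by version and region.

```python
import boto3

# Assumption: placeholder ID for one of OpenAI's open-weight models on
# Bedrock; check the Bedrock model catalog in your region for the real one.
MODEL_ID = "openai.gpt-oss-120b-1:0"

# Bedrock runtime client; the region must be one where the model is offered.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Send a single-turn chat request through Bedrock's unified Converse API.
response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key risks in this deployment plan."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The generated reply sits inside the first content block of the output message.
print(response["output"]["message"]["content"][0]["text"])
```

Because Converse exposes the same request and response shape across Bedrock-hosted models, teams already running on AWS can swap models with minimal code changes, which is part of the appeal of this kind of integration.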