OpenAI and Amazon signed a $38 billion deal that will let OpenAI run its artificial intelligence systems on Amazon’s data centers across the United States. The deal gives OpenAI access to hundreds of thousands of Nvidia’s specialized AI chips via Amazon Web Services (AWS), ensuring it has the computing power required to support its AI tools, including ChatGPT.
Immediate Utilization and Expansion Plans
According to Amazon, OpenAI will begin using AWS computing capacity immediately, with the goal of fully deploying it by the end of 2026. The partnership also includes plans to expand the infrastructure further into 2027 and beyond. The buildout responds directly to the rapid growth and high energy demands of developing and maintaining advanced AI systems.
OpenAI’s Infrastructure Investment and Revenue Outlook
OpenAI has made over $1 trillion in financial commitments to build AI infrastructure, through collaborations with partners such as Oracle, SoftBank, and key semiconductor manufacturers including Nvidia, AMD, and Broadcom. Despite concerns over whether it can generate enough profit to cover these expenses, OpenAI CEO Sam Altman expressed confidence in the company’s revenue trajectory, pointing to steep increases and describing the spending as a forward-looking bet on continued growth.
Amazon’s Role in the AI Ecosystem
Amazon is already a major player in AI infrastructure, serving as the primary cloud provider for Anthropic, an AI startup focused on safer chatbots for tasks like summarization and coding.
The company has invested $4 billion in Anthropic, which will move most of its operations to AWS and use Amazon’s chips to train future models.
The $38 billion OpenAI deal underscores the massive cloud demands behind AI progress: it strengthens Amazon’s position as a core provider of AI infrastructure and supports OpenAI’s push to scale and advance its technologies.