OpenAI is adjusting its compute priorities to handle the sharp increase in demand following the launch of GPT-5. In a message to users, CEO Sam Altman said the company’s immediate focus is ensuring that paying ChatGPT subscribers receive more total usage than they did before GPT-5’s release.
In a post on X, Altman outlined a phased plan for managing capacity. The second priority is fulfilling existing commitments to API customers; current infrastructure can support roughly 30% growth in API traffic from present levels. Once those needs are met, the company will turn to improving the free ChatGPT tier, which has faced usage restrictions during peak times, and then resume onboarding new API customers.
To address long-term capacity challenges, Altman confirmed that OpenAI is “doubling our compute fleet over the next five months,” describing it as a significant boost intended to ease constraints.
The company’s approach highlights the balance between serving enterprise API clients, subscription users, and the free user base, all while scaling infrastructure for a rapidly growing AI product. GPT-5 adoption has surged since launch, pushing OpenAI’s systems to their limits and making the coming months a critical test of the plan’s effectiveness.