📌 Key Takeaways
- The companies signed a letter of intent to deploy at least 10 GW of systems for OpenAI’s next-gen infrastructure.
- Up to $100B in planned Nvidia investment, released progressively as each gigawatt is deployed.
- First 1 GW targeted for H2 2026 on the Vera Rubin platform.
- OpenAI names Nvidia a preferred strategic compute and networking partner.
- The plan complements existing programs such as Stargate and national data center initiatives.
OpenAI And Nvidia Sign LOI For 10 GW Of AI Systems
OpenAI and Nvidia announced a strategic partnership to build and run at least 10 gigawatts of systems for training and serving the next generation of models. The agreement is framed as the compute backbone for OpenAI’s push toward more capable AI.
The plan pairs capacity and capital. Nvidia intends to invest up to $100 billion in OpenAI, released in stages as each gigawatt comes online, while OpenAI commits to deploying the systems as part of its AI factory roadmap.
“Nvidia and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT.”
— Jensen Huang, Founder and CEO of Nvidia
Together, NVIDIA and OpenAI are expanding the frontier of AI — transforming nearly every industry and unlocking use cases once unimaginable.
“There’s no partner but NVIDIA that can do this at this kind of scale, at this kind of speed,” said OpenAI CEO Sam Altman.
— NVIDIA Newsroom (@nvidianewsroom), September 22, 2025
Timeline And Platform: Vera Rubin In 2026
The first 1 GW phase is targeted for the second half of 2026, using Nvidia’s Vera Rubin platform. The companies say rollouts will proceed progressively, tied to the readiness of data center sites and power.
OpenAI also highlights a broader buildout that includes sovereign and regional projects under its Stargate program, which already spans deployments in multiple countries.
What 10 GW Enables For Models And Products
A 10 GW footprint implies millions of GPUs across multiple sites, enough to train larger multi-modal systems and lower latency for global serving. For users, the expected effect is faster iteration on features, more capable agents, and headroom for enterprise workloads.
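The “millions of GPUs” figure can be sanity-checked with a rough power budget. The per-GPU wattage below is an illustrative assumption (GPU plus host CPUs, networking, and cooling overhead), not a number from the announcement:

```python
# Back-of-envelope check of the "millions of GPUs" claim.
# WATTS_PER_GPU_ALL_IN is an assumed all-in facility figure per GPU
# (accelerator + host + networking + cooling), not a disclosed spec.
CAPACITY_WATTS = 10 * 10**9        # 10 GW total footprint
WATTS_PER_GPU_ALL_IN = 1_800       # illustrative assumption

gpus = CAPACITY_WATTS // WATTS_PER_GPU_ALL_IN
print(f"~{gpus / 1e6:.1f} million GPUs")  # ~5.6 million GPUs
```

Varying the assumed per-GPU draw between roughly 1 and 2 kW keeps the estimate in the single-digit millions, consistent with the claim above.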
“Compute infrastructure will be the basis for the economy of the future, and we will utilize what we are building with Nvidia to create new AI breakthroughs and deliver them at scale.”
— Sam Altman, Co-founder and CEO of OpenAI
How The Investment Structure Works
Nvidia’s planned commitment of up to $100 billion is progressive, tied to deployment milestones as each gigawatt comes online. The approach aligns capital with site readiness, power availability, and hardware delivery schedules. Both sides expect to finalize definitive terms after the LOI stage.
This structure also gives OpenAI a clear procurement lane for compute and networking, while preserving flexibility to coordinate with existing partners in cloud, power, and siting.
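As a simple illustration of the staged structure, assuming equal tranches per gigawatt (a simplification for intuition only; the actual schedule is still to be negotiated past the LOI):

```python
# Illustrative staged-release model: up to $100B across 10 GW,
# assuming equal tranches per gigawatt deployed. The equal-tranche
# split is an assumption, not a disclosed term.
TOTAL_INVESTMENT_B = 100
TOTAL_GW = 10
TRANCHE_B = TOTAL_INVESTMENT_B / TOTAL_GW  # $10B per gigawatt

def released_after(deployed_gw: int) -> float:
    """Cumulative capital released once `deployed_gw` gigawatts are online."""
    return TRANCHE_B * min(deployed_gw, TOTAL_GW)

print(released_after(1))   # after the first H2 2026 gigawatt -> 10.0
print(released_after(10))  # full buildout -> 100.0
```

Under this sketch, the first Vera Rubin phase in H2 2026 would unlock only a tenth of the headline figure, which is why the companies describe the investment as released progressively.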
Where This Fits: AI Factories, Countries, And Energy
OpenAI’s infrastructure program includes Stargate sites in multiple regions, designed to combine compute, power, and connectivity in a repeatable template.
The Nvidia partnership is positioned as a preferred path for the hardware and interconnect layers within that template.
For governments and utilities, the scale signals long-term power agreements, grid planning, and cooling innovations, since capacity is delivered in gigawatt blocks rather than one-off clusters.
“We have utilized Nvidia’s platform to create AI systems that hundreds of millions of people use every day, and we are excited to deploy 10 gigawatts of compute together.”
— Greg Brockman, Co-founder and President of OpenAI
What Builders Should Do Now
- Plan for Vera Rubin-era hardware profiles and interconnect assumptions in long-range roadmaps.
- Expect global capacity staging, which can affect where training and inference land by region.
- Align data locality and sovereign requirements with emerging Stargate sites.
- Prepare for agentic workloads that benefit from lower latency and larger context windows.
Conclusion
The OpenAI–Nvidia partnership formalizes a path to 10 GW of capacity, with up to $100B in planned investment that unlocks staged buildouts from 2026 onward.
For the ecosystem, it means more headroom for research and products, and a clearer template for how AI factories will be financed and delivered.
The next signals to watch are site announcements, power contracts, and the first Vera Rubin clusters moving into service.
📈 Latest AI News
23rd September 2025
- Use Nano Banana Free On WhatsApp With Perplexity
- DeepSeek V3.1-Terminus Now Available on Hugging Face
- Llama Joins Approved AI Tools List For Federal Use
- How OpenAI Could Bring ChatGPT To Wearables
- NVIDIA and Abu Dhabi Launch AI and Robotics Lab
For the latest AI news, visit our site.