OpenAI and Partners Expand Texas Stargate AI Data Center to 5 GW

Expansion Announcement and Strategic Deal
On July 23, 2025, OpenAI confirmed a new 4.5 gigawatt expansion of its Stargate AI infrastructure platform in partnership with Oracle. This latest agreement is reported to be part of a $30 billion per year deal between the two companies, bringing total Stargate capacity under development to over 5 GW. When fully powered, this footprint will draw power equivalent to that of roughly 4.4 million average American homes.
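The homes-equivalent figure can be sanity-checked with back-of-envelope arithmetic. The average household consumption below (~10,500 kWh/year, a commonly cited EIA-style figure) is an assumption; the article does not state its source.

```python
# Back-of-envelope check of the "4.4 million homes" claim.
AVG_HOME_KWH_PER_YEAR = 10_500      # assumed average annual US household use
HOURS_PER_YEAR = 8_760

avg_home_draw_kw = AVG_HOME_KWH_PER_YEAR / HOURS_PER_YEAR   # ~1.2 kW average draw
stargate_gw = 5.0
homes_equivalent = stargate_gw * 1e6 / avg_home_draw_kw     # GW -> kW -> homes

print(f"Average household draw: {avg_home_draw_kw:.2f} kW")
print(f"Homes equivalent of 5 GW: {homes_equivalent / 1e6:.1f} million")
```

Under these assumptions the result lands near 4.2 million homes, broadly consistent with the article's 4.4 million figure; the exact number depends on the household-consumption average used.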
Site Selection: Abilene, Texas
The data center campus is located near Abilene, Texas, a regional hub with established fiber, high-voltage feeders, and proximity to Dyess Air Force Base. Abilene’s three universities supply a skilled workforce, while the existing transmission lines and substations minimize grid upgrades. The choice reflects a broader trend of siting large-scale AI facilities in regions with resilient grids, favorable land, and utility incentives.
Technical Specifications and Infrastructure
- Compute Hardware: Nvidia GB200 GPUs mounted on Oracle Cloud Infrastructure racks, delivering over 1 exaflop of FP16 compute per rack.
- Power Delivery: Dual 345 kV incoming lines with N+1 transformer redundancy, supported by modular Uninterruptible Power Supplies (UPS) and diesel generators for Tier III+ availability.
- Cooling System: Direct-to-chip liquid cooling for GPU clusters, supplemented by evaporative-cooled chillers and hot-aisle containment.
- Networking: 400 Gbps leaf-spine fabric, with sub-millisecond intra-rack latency, linking to Oracle’s global backbone and peering hubs in Dallas and Phoenix.
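The specs above invite a rough sizing exercise: how many GPU racks could a 4.5 GW campus actually host? The per-rack draw (~120 kW for a liquid-cooled GB200-class rack) and the PUE are assumptions for illustration, not figures from the announcement.

```python
# Rough sizing sketch for a 4.5 GW AI campus. Per-rack power and PUE
# are illustrative assumptions, not announced figures.
CAMPUS_POWER_GW = 4.5
ASSUMED_PUE = 1.2          # power usage effectiveness (total power / IT load)
ASSUMED_RACK_KW = 120.0    # assumed draw per liquid-cooled GPU rack

it_load_kw = CAMPUS_POWER_GW * 1e6 / ASSUMED_PUE   # portion reaching IT equipment
racks = it_load_kw / ASSUMED_RACK_KW

print(f"IT load: {it_load_kw / 1e6:.2f} GW -> ~{racks:,.0f} racks")
```

With these assumptions the expansion alone could power on the order of 30,000 racks; the real count depends on the actual rack power envelope and cooling overhead.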
Advanced Cooling and Power Management Systems
OpenAI’s data center employs a two-phase liquid cooling loop that extracts heat directly from GPU heat spreaders. A secondary glycol loop transfers waste heat to an air-cooled condenser system. Intelligent power management uses AI-driven load balancing to throttle non-critical workloads during peak grid demand, reducing the carbon footprint and operational cost.
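The AI-driven load balancing described above can be illustrated with a minimal threshold-based sketch: when total draw exceeds a grid cap, non-critical workloads are shed first. The class and job names here are illustrative, not OpenAI's actual scheduler.

```python
# Minimal sketch of demand-response load shedding: throttle non-critical
# workloads until total draw fits under a grid cap. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    draw_mw: float
    critical: bool   # critical jobs (e.g. production inference) are never shed

def shed_load(workloads, grid_cap_mw):
    """Return (remaining draw, throttled job names) after shedding."""
    total = sum(w.draw_mw for w in workloads)
    throttled = []
    # Shed the largest non-critical consumers first.
    for w in sorted(workloads, key=lambda w: w.draw_mw, reverse=True):
        if total <= grid_cap_mw:
            break
        if not w.critical:
            total -= w.draw_mw
            throttled.append(w.name)
    return total, throttled

jobs = [
    Workload("frontier-training", 900.0, critical=False),
    Workload("batch-eval", 150.0, critical=False),
    Workload("prod-inference", 400.0, critical=True),
]
total, throttled = shed_load(jobs, grid_cap_mw=1_000.0)
print(total, throttled)   # 550.0 ['frontier-training']
```

A production system would throttle gradually (e.g. GPU power caps) rather than pausing whole jobs, but the priority ordering is the same idea.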
Networking and Latency Considerations
To support large-scale model training and real-time inference, Stargate uses a high-throughput leaf-spine architecture with RoCEv2 (RDMA over Converged Ethernet). This design targets host-to-host latencies under 500 ns and near-wire-speed bandwidth. Oracle’s edge cache nodes in major metro areas will further optimize model serving performance for consumer and enterprise applications.
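To see why per-link bandwidth matters at training scale, consider a rough transfer-time estimate for exchanging the FP16 gradients of a 70-billion-parameter model over a single 400 Gbps link. The model size and bytes-per-parameter are illustrative assumptions.

```python
# Rough estimate: time to move FP16 gradients of a 70B-parameter model
# across one 400 Gbps link. Model size is an illustrative assumption.
PARAMS = 70e9
BYTES_PER_PARAM = 2        # FP16
LINK_GBPS = 400            # per the article's leaf-spine fabric

bits = PARAMS * BYTES_PER_PARAM * 8
seconds = bits / (LINK_GBPS * 1e9)
print(f"{seconds:.1f} s per full gradient exchange on one link")
```

At these assumptions a naive single-link exchange takes about 2.8 seconds, which is why training clusters rely on many parallel links and bandwidth-efficient collectives (e.g. ring or tree all-reduce) rather than point-to-point transfers.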
Environmental Impact and Sustainability Efforts
Aligning with OpenAI’s pledge to achieve net-zero emissions by 2030, the Abilene campus integrates on-site solar canopies covering 30 acres and participates in regional wind farm PPAs. Bi-directional EV charging stations offset peak loads, and waste heat is reused to warm adjacent administrative buildings.
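A quick scale check puts the 30-acre solar canopies in context. The panel density (~5 acres per MW, a common utility-scale rule of thumb) and the 25% capacity factor for West Texas are assumptions, not announced figures.

```python
# Rough scale check on the 30-acre solar canopies. Density and capacity
# factor are rule-of-thumb assumptions, not announced figures.
ACRES = 30
ACRES_PER_MW = 5.0        # assumed utility-scale panel density
CAPACITY_FACTOR = 0.25    # assumed for West Texas

nameplate_mw = ACRES / ACRES_PER_MW            # ~6 MW nameplate
avg_output_mw = nameplate_mw * CAPACITY_FACTOR # ~1.5 MW average
print(f"~{nameplate_mw:.0f} MW nameplate, ~{avg_output_mw:.1f} MW average")
```

Under these assumptions the canopies yield only a few megawatts against a multi-gigawatt campus, suggesting they chiefly offset auxiliary loads while the wind PPAs carry the bulk of the renewable supply.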
Economic and Workforce Implications
OpenAI estimates that the 4.5 GW build-out will create over 2,500 direct construction jobs and 800 full-time operations roles. Indirectly, component manufacturing, logistics, and facility services are projected to sustain an additional 4,000 jobs across Texas.
Financial Backing and Market Confidence
This expansion builds on the $500 billion funding commitment announced at the White House in January 2025. Despite early skepticism from industry critics concerning OpenAI’s capital needs, the involvement of Oracle, SoftBank, CoreWeave, and other investors has solidified confidence in the project’s feasibility.
Expert Perspectives
“Stargate represents a paradigm shift in AI infrastructure scale and efficiency,” says Dr. Anita Kapoor, CTO at DataCenter Dynamics. “The integration of direct liquid cooling and AI-based power management sets a new industry benchmark.”
“Sub-millisecond latency over such a large footprint is a remarkable engineering feat,” notes Raj Patel, lead network architect at CloudOptimize. “It paves the way for distributed training of billion-parameter models in near real time.”
Future Outlook
With site work already under way and initial Nvidia GB200 rack deliveries in progress, OpenAI plans to commence full-scale training activities by Q1 2026. As Stargate enters its next phases, it will support frontier research in large language models, multimodal AI, and autonomous systems—cementing OpenAI’s infrastructure leadership.
Additional Resources
- OpenAI Stargate Whitepaper
- Oracle Cloud Infrastructure AI Solutions Overview
- Nvidia GB200 GPU Architecture Brief