“This is one of the largest infrastructure buildouts in the history of technology.” — Goldman Sachs Research, 2024
Hyperscaler CapEx Trajectory
- 2022-2024 (cumulative): $477B
- 2025 alone: $371B (+44% YoY)
- 2025-2027 (cumulative): $1.15T (Goldman Sachs estimate)
+141% increase: 2025-27 total vs. 2022-24 total
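A quick back-of-the-envelope check of that +141% figure, using only the totals quoted above (a minimal Python sketch; the inputs are the article's estimates, not audited figures):

```python
# Growth of the 2025-27 cumulative estimate over the 2022-24 cumulative total.
capex_2022_24 = 477e9    # cumulative 2022-2024, $
capex_2025_27 = 1.15e12  # cumulative 2025-2027, Goldman Sachs estimate, $

growth = (capex_2025_27 - capex_2022_24) / capex_2022_24
print(f"2025-27 vs 2022-24: {growth:+.0%}")  # -> +141%
```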
2025 CapEx by Company
- Amazon AWS: $125B — Largest single commitment; 2.5M Trainium chips plus other custom silicon
- Microsoft Azure: $80B — OpenAI partnership, Stargate project, 6 GW+ power
- Google Cloud: $75B — TPU v7 Ironwood, Gemini infrastructure
- Meta: $60-70B — Llama training, 100% AMD for Llama 4, largest AI training build
Combined 2025: $340-350B (Big 4) + Others = $371B Total
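A short sketch reconciling the per-company figures with the $371B total. Meta is taken at the midpoint of its $60-70B range, and the "others" figure is implied by subtraction rather than reported:

```python
# Rough reconciliation of company-level 2025 CapEx with the stated total (all $B).
big4 = {"Amazon AWS": 125, "Microsoft Azure": 80, "Google Cloud": 75, "Meta": 65}
big4_total = sum(big4.values())   # ~345, i.e. the $340-350B band
others = 371 - big4_total         # ~26 implied for the remaining hyperscalers
print(f"Big 4: ${big4_total}B, implied others: ~${others}B, 2025 total: $371B")
```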
What They’re Building
- GPU Clusters: 100K+ chip deployments, GB200 NVL72 racks
- Custom Silicon: TPU, Trainium, Inferentia — reducing NVIDIA dependency
- Power Infrastructure: Nuclear, solar, grid deals, multi-GW commitments
Why Spend Now Despite Uncertainty?
- Fear of falling behind: AI leadership requires infrastructure today
- Capacity takes years: 2027 capacity must be ordered in 2024-2025
- Winner-take-most: Scale advantages compound in AI
The Infrastructure Gap
- 2025 CapEx: $371B
- 2025 AI Revenue: ~$25B
- ~7% revenue-to-CapEx ratio — the gap that must close
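The ratio follows directly from the two figures above (a minimal check using the article's own numbers):

```python
# 2025 AI revenue as a share of 2025 hyperscaler CapEx (all $B, as quoted above).
capex_2025_bn = 371
ai_revenue_2025_bn = 25  # approximate
ratio = ai_revenue_2025_bn / capex_2025_bn
print(f"2025 AI revenue covers {ratio:.1%} of 2025 CapEx")  # -> ~6.7%, i.e. ~7%
```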
Cascade Effect
Hyperscaler CapEx → GPU demand → Supply constraints → Premium pricing → More CapEx required
This is part of a comprehensive analysis; read the full version on The Business Engineer.