Meta is quietly assembling a fully integrated AI stack that mirrors Apple’s approach to mobile computing.
## The Six Layers
Layer 1 — Physical Infrastructure: Data centers, power systems, and cooling. The Louisiana Hyperion facility alone will be “roughly the size of lower Manhattan.”
Layer 2 — Custom Silicon: MTIA (Meta Training and Inference Accelerator) chips, now in their third generation. The Rivos acquisition brings chip design talent in-house. Potential savings: $10 billion annually versus third-party GPUs.
Layer 3 — Networking Architecture: Disaggregated Scheduled Fabric with 51 Tbps switches. Two 24K-GPU H100 clusters built to compare network approaches.
Layer 4 — Software Infrastructure: PyTorch (Meta-created), Triton, and custom inference engines optimized for MTIA.
Layer 5 — AI Models: Llama family, with “personal superintelligence” as the stated goal.
Layer 6 — Applications: Meta AI assistant, integrated across all Meta surfaces.
## The Numbers Behind the Bet
- $72B: FY2025 Capital Expenditures
- $600B: U.S. Infrastructure Commitment through 2028
- 1.3M+: GPUs in fleet by end of 2024
- 6.6 GW: Nuclear power secured (Vistra, TerraPower, Oklo deals)
- Tens to Hundreds of Gigawatts: Long-term capacity target
This is the largest corporate infrastructure commitment in history. A company that controls the full stack can optimize across layers in ways that competitors relying on third-party components cannot match.
This is part of a comprehensive analysis; read the full version on The Business Engineer.