The Problem: Nvidia Dependency
Nvidia (H100/B200) controls roughly 80% of the AI accelerator market. Everyone depends on its GPUs: Meta, Google, Microsoft, Amazon, OpenAI, xAI, Anthropic…
The Risks
- Supply constraints = strategic vulnerability
- Pricing power remains with Nvidia
- Competing for the same limited allocation
- Dependency = existential risk in the AI race
Meta’s Solution: Build Your Own
MTIA (Meta Training and Inference Accelerator)
- Custom inference silicon
- Optimized for Meta’s specific workloads
- Focus: inference, not training
- Better perf/watt for recommendations
- Multiple generations in production
Training still on Nvidia (for now)
Rivos Acquisition (~$1B reported)
- RISC-V expertise (open architecture)
- ~80 chip engineers
- Custom silicon design capability
- Path beyond x86/ARM licensing
Accelerating the timeline
The Phased Approach
| Phase | Training | Inference | Status |
|---|---|---|---|
| NOW | Nvidia GPUs | MTIA + Nvidia | Still heavily dependent |
| 2025-2027 | Nvidia GPUs | Mostly MTIA | Reducing dependency |
| 2028+ | Custom + Nvidia | Full MTIA | Optionality achieved |
| END STATE | Nvidia = option, not requirement | Nvidia = option, not requirement | Strategic freedom |
The Strategic Logic
Meta doesn’t need to beat Nvidia. It needs optionality. Custom silicon for inference (where Meta’s workloads live) reduces dependency enough to negotiate from strength — and ensures survival regardless of GPU allocation politics.
This is part of a comprehensive analysis; read the full version on The Business Engineer.