Meta is accelerating development of MTIA (Meta Training and Inference Accelerator), now in its third generation, as part of a multi-year strategy to reduce dependency on NVIDIA GPUs.
## The NVIDIA Problem
NVIDIA controls ~80% of the AI chip market. Every major AI company — Meta, Google, Microsoft, Amazon, OpenAI, xAI, Anthropic — depends on the same constrained supply.
## Meta’s Silicon Strategy

### MTIA (3rd Generation)
- Custom inference silicon
- Optimized for Meta’s specific workloads
- Better performance/watt for recommendations
- Multiple generations now in production
### Rivos Acquisition (~$1B)
- RISC-V expertise (open architecture)
- ~80 chip engineers added
- Path beyond x86/ARM licensing
## The Phased Approach
| Phase | Training | Inference |
|---|---|---|
| Now | NVIDIA GPUs | MTIA + NVIDIA |
| 2025-2027 | NVIDIA GPUs | Mostly MTIA |
| 2028+ | Custom + NVIDIA | Full MTIA |
## Strategic Logic
Meta doesn’t need to beat NVIDIA; it needs optionality. Custom silicon for inference reduces dependency enough to let Meta negotiate from a position of strength — and ensures the business keeps running regardless of GPU allocation politics.
For a deeper strategic analysis, read *The Re-Engineering of Meta* on The Business Engineer.