Microsoft has officially launched Maia 200, its second-generation custom AI chip, marking a significant step toward reducing dependency on NVIDIA GPUs.
Maia 200 Specifications
- Performance: 10+ petaFLOPS at FP4
- Memory: 216GB HBM3e
- Process: 3nm manufacturing
- TCO Improvement: >30% reduction vs. current solutions
Multi-Vendor Strategy
CEO Satya Nadella emphasized Microsoft’s multi-vendor fleet approach: NVIDIA + AMD + Maia.
“We don’t want to be locked into any one thing… It’s not a one-generation game. You have to be ahead for all time to come.”
— Satya Nadella
Infrastructure Investment
Microsoft is investing more than $120 billion in FY26 capital expenditure, with Maia 200 central to its "tokens per watt per dollar" optimization strategy.
The chip will power Azure AI workloads across Microsoft’s 400+ datacenters in 70 regions worldwide.
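Microsoft has not published a formula for "tokens per watt per dollar," but the phrase suggests inference throughput normalized by both power draw and cost. The sketch below illustrates one plausible reading with entirely hypothetical numbers (the throughput, wattage, and hourly-cost figures are assumptions, not Maia 200 data); it also shows how a >30% TCO reduction at equal performance translates into a proportionally higher score.

```python
# Hedged sketch of a "tokens per watt per dollar" style metric.
# All numeric inputs below are illustrative assumptions, not
# published Maia 200 or NVIDIA figures.

def tokens_per_watt_per_dollar(tokens_per_sec: float,
                               power_watts: float,
                               cost_per_hour: float) -> float:
    """Inference throughput normalized by power draw and hourly cost."""
    return tokens_per_sec / (power_watts * cost_per_hour)

# Hypothetical comparison: same throughput and power, but ~30% lower
# hourly cost (mirroring the claimed >30% TCO reduction).
baseline = tokens_per_watt_per_dollar(50_000, 700, 10.0)
cheaper  = tokens_per_watt_per_dollar(50_000, 700, 7.0)

print(round(cheaper / baseline, 2))  # → 1.43
```

Under this reading, a 30% cost reduction alone lifts the metric by roughly 1/0.7 ≈ 1.43x, before any gains from the chip's FP4 throughput or power efficiency.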
Competitive Implications
While Maia 200 represents significant progress, Microsoft acknowledges that its custom silicon remains 3-5 years behind Google's TPUs. Vertical integration, however, lets Microsoft tune the chip specifically for its own enterprise AI workloads rather than for general-purpose performance.
For a deeper strategic analysis, read Microsoft In The AI Stack on The Business Engineer.