
The gap between proprietary frontier models and open-weight alternatives has collapsed. Open models now reach frontier performance within six months of closed releases — a timeline that seemed impossible just two years ago.
## The Inflection Point: DeepSeek R1
Everything changed in early 2025 when DeepSeek, a Chinese lab, released R1: an open reasoning model matching OpenAI's o1 capabilities at a fraction of the training cost.
Jensen Huang called it “the first open reasoning model that caught the world by surprise and activated this entire movement.”
The strategic question became clear: If reasoning — the capability that defined the frontier — can be open-sourced, what remains proprietary?
## The Numbers Tell the Story
- 80% of AI startups now build on open models
- 1 in 4 OpenRouter tokens served by open models
- 160M+ monthly downloads on HuggingFace, growing exponentially
- ~100 models released by NVIDIA throughout 2025 — more than any other organization
## Where the Moat Shifted
Model benchmarks stopped being a competitive moat. The advantage shifted to three areas:
- Proprietary training data — unique datasets competitors can’t access
- Infrastructure scale — compute and context at enterprise scale
- Product integration — embedded in workflows, not just available
## Strategic Implication
Companies still competing on model benchmarks are fighting yesterday’s war. Open models are now the default starting point. For startups, the “wrapper” critique intensified — but so did the opportunity for vertical specialization on top of open foundations.
This is part of a comprehensive analysis. Read the full version on The Business Engineer.
