
The frontier of AI is no longer purely a question of research breakthroughs. It is a contest of scale—capital, compute, and corporate control. The “Bitter Lesson” that Rich Sutton articulated in 2019—that general methods leveraging computation ultimately win—has entered its second phase. Today, the frontier is not just about compute cycles, but also about memory systems and search efficiency. The battlefield is corporate, not academic.
The Dual Requirement
AI competition now rests on two non-negotiable conditions:
- Algorithmic Efficiency – Breakthroughs like DeepSeek’s training efficiency gains can reduce compute requirements by an order of magnitude. Efficiency does not replace infrastructure; it multiplies what a given stock of infrastructure can deliver, stretching scarce resources further.
- Infrastructure Access – Even with efficiency gains, competing requires access to 100K+ GPUs as a baseline. No amount of cleverness can eliminate this requirement.
Together, these define the threshold of competition. Startups can innovate on efficiency, but without infrastructure contracts and capital commitments, they remain marginal. DeepSeek itself, despite world-class algorithmic gains, still needed $10B+ in infrastructure to scale. The lesson: you cannot math your way out of physical scarcity.
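To make the scarcity point concrete, here is a minimal back-of-the-envelope sketch. The unit cost and overhead multiplier are illustrative assumptions, not vendor figures; the point is the order of magnitude, not the exact number.

```python
# Rough capital floor at the "100K+ GPU" baseline.
# Unit cost and overhead are illustrative assumptions, not quotes.
gpus = 100_000
accelerator_cost = 30_000    # assumed $/GPU for a flagship-class accelerator
system_overhead = 1.5        # assumed multiplier for networking, power, facilities

capex = gpus * accelerator_cost * (1 + system_overhead)
print(f"~${capex / 1e9:.1f}B of capex before the first training run")
# -> ~$7.5B: a 10x training-efficiency gain changes what each GPU
#    delivers, not the entry price of owning 100K of them.
```

Under these assumptions the floor lands in the same band as the $10B+ figure above.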
The Four Horsemen
This reality centralizes power in the hands of the hyperscalers. Four companies—Microsoft, Google, Amazon, and Meta—control the majority of AI’s compute frontier. Their combined infrastructure commitments exceed $320B, an order of magnitude larger than any competitor.
Their dominance is not just capital; it is systemic. They secure:
- GPU Priority – Long-term contracts with Nvidia and power providers.
- Power Access – Exclusive data center sites tied to regional grids.
- Talent Pools – The world’s largest concentrations of AI research and engineering talent.
- RL Expertise – Hard-won experience running reinforcement learning at scale, improving models continuously rather than in one-off training events.
This collective infrastructure control ensures that the four remain both gatekeepers and beneficiaries of the AI boom.
The New Bottleneck: The Memory Wall
If GPUs defined the first wave of scarcity, memory is the next frontier. The so-called memory wall is the point at which models processing 128K–1M token contexts become constrained not by compute but by high-bandwidth memory (HBM), which is now scarcer than the GPUs it feeds.
Raw compute is abundant relative to the bandwidth needed to feed it: GPU cores increasingly sit idle waiting for data to move through memory. Nvidia’s fortress is built not just on CUDA software, but on its control over the memory architectures that make GPU clusters viable at scale. This is why its margins hold at 70–75% even as competition from AMD, Intel, and TPU initiatives grows.
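A minimal sketch of why long contexts hit memory before compute, assuming a hypothetical 70B-class model with grouped-query attention. The layer count, head count, and precision below are illustrative assumptions, not any specific model’s published configuration.

```python
# KV-cache sizing for a hypothetical 70B-class transformer.
# All architectural parameters below are illustrative assumptions.
N_LAYERS = 80       # assumed decoder layers
N_KV_HEADS = 8      # assumed KV heads (grouped-query attention)
HEAD_DIM = 128      # assumed per-head dimension
BYTES = 2           # fp16/bf16 element size

# Both K and V are cached per layer, per KV head, per token.
kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES

for context in (128_000, 1_000_000):
    gib = context * kv_bytes_per_token / 2**30
    print(f"{context:>9,} tokens -> {gib:6.1f} GiB of KV cache")
# ->   128,000 tokens ->   39.1 GiB
# -> 1,000,000 tokens ->  305.2 GiB
```

At roughly 39 GiB for a 128K context and over 300 GiB at 1M tokens, a single request can outgrow the HBM of a flagship accelerator long before its compute units saturate. That is the memory wall in one calculation.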
The real moat is not silicon, but the integration of memory + compute + CUDA into a closed ecosystem that hyperscalers cannot replicate independently.
The RL Scaling Imperative
Training a large model is only the beginning. True competition requires continuous scaling across four dimensions:
- Base Training – Still requires $100M+ one-off compute investments.
- RL Improvement – Ongoing reinforcement learning that costs $10M/month in continuous compute.
- Test-Time Compute – As models “think” longer, search and planning expand with user demand. Costs scale with adoption.
- Memory Systems – Persistent context and long-term memory amplify both costs and capabilities.
The outcome is a perpetual arms race. No company can freeze progress. Every additional user interaction demands more compute, more memory, and more optimization. The RL scaling imperative makes AI less a one-time product and more an ongoing infrastructure commitment.
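As a rough illustration of that shift, the sketch below combines the headline figures above with assumed per-user costs for the usage-driven terms; the $0.50 and $0.10 per-user monthly figures are hypothetical.

```python
# Toy annual cost model across the four scaling dimensions.
# Per-user costs for the usage-driven terms are assumptions.
def annual_costs(users: int) -> dict:
    return {
        "base_training":  100e6,              # $100M+ run, amortized per year
        "rl_improvement": 10e6 * 12,          # $10M/month continuous RL
        "test_time":      0.50 * 12 * users,  # assumed $0.50/user/month inference
        "memory_systems": 0.10 * 12 * users,  # assumed $0.10/user/month memory
    }

for users in (1_000_000, 10_000_000, 100_000_000):
    c = annual_costs(users)
    usage = c["test_time"] + c["memory_systems"]
    total = sum(c.values())
    print(f"{users / 1e6:4.0f}M users -> ${total / 1e9:4.2f}B/yr, "
          f"usage-driven share: {100 * usage / total:4.1f}%")
# ->  1M: ~$0.23B/yr (~3% usage-driven); 100M: ~$0.94B/yr (~77%)
```

Fixed training dominates at small scale, but under these assumptions usage-driven compute overtakes it well before 100M users, which is why adoption itself becomes an infrastructure commitment.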
The New Entry Bar
Historically, competing at the AI frontier was thought to require on the order of $100B. Algorithmic efficiency breakthroughs have lowered the bar by roughly an order of magnitude, not eliminated it: the new entry point is roughly $10B minimum in infrastructure, still insurmountable for 99% of players.
This means the competitive map shifts:
- Hyperscalers – Continue to dominate through scale and control of compute.
- Tier-2 Entrants – Sovereign funds, national AI champions, and defense-aligned institutions with deep capital pools.
- Startups – Even with algorithmic breakthroughs, they can only influence the frontier when partnered with hyperscalers or aligned with state backing.
The result is a bifurcated market: efficiency innovation at the edges, consolidation of power at the core.
Nvidia’s Fortress
Nvidia remains the fulcrum of this battlefield. With an estimated 70–95% share of AI training workloads and gross margins in the 70–75% range, it dominates both supply and profit pools.
Its real moat is not just hardware, but:
- CUDA lock-in – A software ecosystem built up over nearly two decades.
- Memory systems – Control over HBM integration.
- Scale contracts – Long-term commitments with hyperscalers.
Competitors like AMD, Intel, and even custom silicon from Google (TPU) or Amazon (Trainium) nibble at the edges but cannot escape Nvidia’s chokehold. Until the memory wall is broken by new architectures, Nvidia remains the indispensable supplier.
The Bitter Lesson 2.0
The updated bitter lesson is clear:
- Efficiency matters but does not liberate. Algorithmic breakthroughs amplify infrastructure; they do not replace it.
- Capital barriers persist. Even after efficiency gains, the entry cost is measured in billions.
- Corporate control dominates. The battlefield is not academia but hyperscaler balance sheets.
The cycle is self-reinforcing. Efficiency lowers marginal cost → demand rises → infrastructure expands → new bottlenecks emerge. The arms race never stops.
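One way to see why the cycle never settles is a simple price-elasticity sketch; the efficiency gain and elasticity below are assumed values chosen only to illustrate the direction of the effect.

```python
# If demand for tokens is price-elastic (elasticity > 1), a cost
# drop *raises* total infrastructure spend. Values are assumptions.
efficiency_gain = 10.0     # assumed 10x cheaper per token
elasticity = 1.5           # assumed price elasticity of token demand

cost_ratio = 1 / efficiency_gain            # new cost / old cost = 0.1
demand_ratio = cost_ratio ** -elasticity    # demand rises ~31.6x
spend_ratio = cost_ratio * demand_ratio     # total spend ~3.2x

print(f"cost x{cost_ratio:.2f}, demand x{demand_ratio:.1f}, spend x{spend_ratio:.1f}")
```

Under these assumptions a tenfold efficiency gain roughly triples total spend rather than shrinking it.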
Strategic Implications
- Oligopoly Control – Four corporations dictate the pace of AI through control of compute. Their interests shape the trajectory as much as research breakthroughs.
- National Leverage – States cannot compete directly without $10B+ infrastructure investments. Expect sovereign AI funds, subsidies, and defense partnerships to escalate.
- Startup Dilemma – Independent challengers face a paradox: algorithmic innovation without access to scale is irrelevant; access to scale without innovation is commoditized.
- Nvidia as Arbiter – As long as the memory wall holds, Nvidia remains the indispensable supplier. Every competitor must pass through its fortress.