The HBM Oligopoly — The Hard Power Behind AI

When people talk about AI power structures, they usually start with models or GPUs. But the real leverage sits one layer down — inside the memory stack. High-Bandwidth Memory is the narrowest point of the AI hourglass, and the industry has quietly consolidated into one of the most fragile oligopolies in modern technology. Only three companies in the world can manufacture HBM at scale, and ~88 percent of global supply sits inside South Korea.

This is the other half of the argument laid out in The AI Memory Chokepoint (https://businessengineer.ai/p/the-ai-memory-chokepoint): AI capability doesn’t scale with compute alone; it scales with memory bandwidth, memory capacity, and the geopolitics of who controls both.

1. SK Hynix — The Reluctant Kingmaker (53%)

SK Hynix is the clear global leader, holding 53 percent of the HBM market.
The company’s advantage compounds across three vectors:

  • First to HBM3 (the inflection point for H100-level GPUs)
  • Exclusive supplier for NVIDIA’s H100/H200
  • A 12–18 month lead in TSV yield, thermal dissipation, and stack height

In a market where memory is performance, SK Hynix effectively sets the upper bound on global AI throughput. NVIDIA’s entire training and inference roadmap is constrained by how much HBM SK Hynix can deliver.

You can build a trillion-parameter model, but you cannot run it if you can’t get enough HBM stacks.
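
To put rough numbers on that constraint, here is a back-of-the-envelope sketch. The 80 GB figure is an assumption based on an H100-class accelerator, and the math counts weights only, ignoring KV cache, activations, and optimizer state:

```python
# Back-of-the-envelope: how many accelerators just to HOLD a
# 1-trillion-parameter model in 16-bit precision?

PARAMS = 1_000_000_000_000       # 1T parameters
BYTES_PER_PARAM = 2              # FP16/BF16 weights
HBM_PER_GPU_GB = 80              # assumed H100-class HBM3 capacity

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9        # ~2,000 GB of weights
gpus_needed = weights_gb / HBM_PER_GPU_GB          # ~25 GPUs for weights alone

print(f"{weights_gb:,.0f} GB of weights -> at least {gpus_needed:.0f} GPUs, "
      "before any KV cache, activations, or batching headroom")
```

Every one of those GPUs carries multiple HBM stacks, which is why stack supply, not wafer supply, is the binding limit.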

2. Samsung — The Sleeping Giant (35%)

Samsung holds 35 percent, but this understates its potential power.
Samsung is:

  • the largest memory company in the world,
  • vertically integrated across foundry + packaging + memory,
  • and currently pushing hard to close the quality/performance gap on HBM3E and HBM4.

The structural advantage is obvious: if Samsung hits price/performance parity, they have leverage across the entire supply chain. Samsung doesn’t just sell memory; they sell the substrate, the packaging, and the logic.

They can compete on cost, volume, and integration simultaneously — something SK Hynix cannot match.

3. Micron — The Strategic Hedge (12%)

Micron is the only US-based HBM supplier, holding 12 percent of the market.
They play three distinct roles in the ecosystem:

  • Geopolitical diversification (Boise + Hiroshima fabs)
  • US CHIPS Act funding + Japan subsidies
  • A strategic buffer for Western hyperscalers who need non-Korean supply

Micron isn’t leading on performance, but they don’t need to.
In a market this constrained, simply being the alternative creates geopolitical value.


4. The Market Reality

  • 2024 Market Size: $16.2B
  • 2030 Projection: $79.6B
  • CAGR: 58 percent
  • New Fab Lead Time: 2–3 years
  • Market Condition: structurally undersupplied

HBM is on track to become one of the fastest-growing semiconductor categories ever.
Not simply because demand is rising, but because demand is effectively unbounded.

AI scaling is memory-bound.
There is no upper limit on how much HBM the ecosystem can absorb.
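
A minimal sketch of what memory-bound means in practice, using illustrative assumptions rather than figures from this piece: a 70B-parameter model in 8-bit weights, roughly 3.35 TB/s of HBM3 bandwidth per GPU (in the range of an H100-class part), batch size 1, KV-cache traffic ignored:

```python
# Decode throughput at batch size 1 is capped by how fast HBM can stream
# the weights, not by how many FLOPs the GPU can issue.

PARAMS = 70_000_000_000          # illustrative 70B-parameter model
BYTES_PER_PARAM = 1              # 8-bit quantized weights
HBM_BANDWIDTH_BPS = 3.35e12      # ~3.35 TB/s, roughly H100-class HBM3

bytes_per_token = PARAMS * BYTES_PER_PARAM         # full weight read per token
max_tokens_per_sec = HBM_BANDWIDTH_BPS / bytes_per_token

print(f"Bandwidth ceiling: ~{max_tokens_per_sec:.0f} tokens/s per GPU, "
      "regardless of available compute")
```

Faster or larger HBM raises that ceiling directly; more FLOPs alone do not.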


5. Why the Oligopoly Exists

HBM is a self-reinforcing oligopoly for five reasons:

  1. No Substitutes
    DDR cannot deliver the bandwidth. There is no alternative memory architecture today.
  2. Extreme Barriers to Entry
    $10B+ capex, decades of DRAM IP, TSV patents, packaging expertise.
    HBM is not a market a new entrant can brute-force.
  3. Geopolitical Concentration
    SK Hynix (Icheon) + Samsung (Hwaseong) = 88% of supply within ~200km of the Korean DMZ.
  4. Pricing Power
    HBM sells at a 10× premium to DDR, and demand is price-insensitive.
  5. AI Throughput = HBM Throughput
    Model capacity, context windows, and inference speed all scale with HBM, not compute.
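
To make point 5 concrete, here is a sketch of how context windows translate into HBM consumption through the KV cache. The layer and head counts are illustrative assumptions, roughly a 70B-class transformer with grouped-query attention, with keys and values stored in FP16:

```python
# Point 5 in numbers: every token in the context window pins K/V tensors in HBM.

N_LAYERS = 80                    # illustrative 70B-class transformer
N_KV_HEADS = 8                   # grouped-query attention
HEAD_DIM = 128
BYTES = 2                        # FP16 keys and values
CONTEXT_TOKENS = 128_000         # context window length

kv_bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES   # K and V
kv_cache_gb = kv_bytes_per_token * CONTEXT_TOKENS / 1e9

print(f"{kv_bytes_per_token / 1024:.0f} KiB per token -> "
      f"{kv_cache_gb:.1f} GB of HBM for one 128K-token sequence, "
      "on top of the model weights")
```

Longer contexts and bigger batches multiply that footprint, which is why HBM capacity, not compute, sets the practical ceiling on both.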

This is why HBM isn’t just a component — it’s a power center in the AI economy.


6. The Geographic Risk — The Fragility No One Talks About

South Korea: 88%

  • SK Hynix: 53%
  • Samsung: 35%

USA/Japan: 12%

  • Micron

Every frontier model, every GPU cluster, every hyperscaler roadmap depends on a supply chain located next to the most militarily volatile border on Earth.

This is not a theoretical risk.
It’s a systemic one.


The Oligopoly Insight

“The AI race isn’t won by who has the best algorithms — it’s won by who can secure enough HBM.”

Three companies.
Two countries.
One chokepoint.

The geometry of AI power has shifted from compute dominance to memory dominance, and the bottleneck is now political, industrial, and geographic — not technical.

For the full structural implications — including the hourglass constraint, the 8-layer stack, and the scaling equation — see the broader analysis in The AI Memory Chokepoint:
https://businessengineer.ai/p/the-ai-memory-chokepoint
