SANTA CLARA, February 26, 2026 — Jensen Huang has outlined a vision for NVIDIA that dwarfs even the company’s current $130.5 billion revenue trajectory: a total installed base of $3-4 trillion in AI compute infrastructure, built out over the coming decade. Three converging demand curves — token economics, agentic AI, and physical AI — form the foundation of what Huang describes as “structurally infinite” demand for compute.
The first curve is token economics. Every AI interaction — from a ChatGPT query to a code completion to an image generation — consumes tokens, and every token processed consumes compute. As AI models grow more capable and are deployed across more applications, total token consumption grows exponentially. Training new models requires massive compute bursts; running inference at scale requires sustained compute capacity. The result is a demand curve with no natural ceiling: better models drive more usage, which drives demand for more compute to build even better models.
The second curve is the agentic AI multiplier. When AI shifts from responsive tools (a human asks, the model answers) to autonomous agents (the agent acts independently, calling other agents, querying databases, making decisions), compute consumption multiplies. A single human using ChatGPT might generate 50 queries per day. An autonomous agent performing the same job function might generate 50,000 API calls per day. As enterprises deploy agents across sales, support, engineering, and operations, the multiplicative effect on compute demand is staggering.
The third curve — and the one Jensen Huang has emphasized most in recent presentations — is physical AI. Autonomous vehicles, humanoid robots, industrial automation, and digital twins all require AI models that understand and interact with the physical world. NVIDIA’s Omniverse platform is designed to be the simulation and training environment for these applications. Physical AI represents a market potentially larger than digital AI, because the physical world generates orders of magnitude more data and requires real-time inference at the edge.
A new Business Engineer analysis models the path to Huang’s $3-4 trillion target and finds it may actually be conservative. If data center GPU spending grows at 30% annually (below current rates), agentic AI adds a 2-3x demand multiplier, sovereign nations continue building national AI infrastructure, and physical AI begins scaling in 2028-2029, the math reaches $3 trillion by 2030 without requiring aggressive assumptions.
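The compounding math is easy to sanity-check. The sketch below is illustrative only — the starting spend figure, the exact multiplier, and the physical-AI increment are assumptions for the sake of the arithmetic, not numbers from the Business Engineer analysis — but it shows how a sub-30%-growth baseline, a 2x agentic multiplier, and a modest physical-AI buildout from 2028 compound past $3 trillion by 2030:

```python
# Illustrative back-of-envelope model of the $3-4T installed-base math.
# Assumed inputs (not from the analysis): $130B baseline 2025 spend,
# a flat 2x agentic multiplier, and $100B/yr of physical-AI spend from 2028.

BASE_SPEND_2025 = 130e9    # assumed annual AI data-center spend, USD
GROWTH = 0.30              # 30% annual growth (below current rates)
AGENT_MULTIPLIER = 2.0     # low end of the 2-3x agentic demand multiplier
PHYSICAL_AI_START = 2028   # physical AI begins scaling
PHYSICAL_AI_SPEND = 100e9  # assumed incremental physical-AI spend per year

installed_base = 0.0
spend = BASE_SPEND_2025
for year in range(2025, 2031):
    annual = spend * AGENT_MULTIPLIER   # agentic demand scales the baseline
    if year >= PHYSICAL_AI_START:
        annual += PHYSICAL_AI_SPEND     # incremental physical-AI buildout
    installed_base += annual            # cumulative installed base
    spend *= 1 + GROWTH

print(f"Cumulative installed base by 2030: ${installed_base/1e12:.2f}T")
```

Under these assumptions the cumulative total lands around $3.6 trillion — comfortably inside the $3-4 trillion range without any single aggressive input.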
The critical question for investors isn’t whether $3-4 trillion of AI infrastructure will be built — the demand signals increasingly suggest it will. The question is what share NVIDIA captures, and whether its current 70-80% market share in AI accelerators is sustainable as competition from AMD, custom silicon, and potential architectural shifts intensifies. Even a declining share of an exponentially growing market could sustain NVIDIA’s growth for years.
Read the full analysis: NVIDIA & The State of AI on Business Engineer.