
Amazon is attempting something only Google has tried before: full vertical integration across the entire AI stack. From custom silicon at the bottom to consumer applications at the top, Amazon wants to own—or at least control—every layer where value accrues in the AI economy.
The Q4 2025 earnings reveal how far this integration has progressed. Custom silicon hit $10 billion+ in annual revenue. Foundation models span proprietary (Nova), partnership (Claude), and marketplace (20+ models on Bedrock). Agent infrastructure reached production maturity. Enterprise applications crossed the billion-dollar threshold. Consumer distribution touches 300 million users monthly.
But vertical integration is not vertical excellence. Amazon leads at some layers, competes adequately at others, and lags significantly at a few. Understanding where Amazon is strong, where it’s sufficient, and where it’s vulnerable reveals the real strategic picture.
The Six-Layer AI Stack
Amazon’s AI strategy spans six distinct layers, each with different competitive dynamics:
Layer 1: Silicon and Compute Infrastructure (★★★★☆)
What It Is: The physical foundation—chips designed for AI workloads, data centers housing them, and the power infrastructure keeping them running.
Trainium and Graviton now generate over $10 billion in annual revenue, growing at triple-digit percentages. This isn’t a research project—it’s a scaled business larger than most enterprise software companies.
The roadmap demonstrates sustained commitment. Trainium2 has 1.4 million chips deployed, powering the majority of inference on Bedrock. Project Rainier clusters 500,000+ Trainium2 chips into the world’s largest AI training cluster. Graviton5 delivers 40% better price-performance than x86 alternatives, with adoption across 90%+ of AWS’s top 1,000 customers.
The $200 billion 2026 CapEx guidance—up 56% from 2025’s already record-breaking spend—flows primarily into this layer.
Verdict: Strong position, approaching leadership. Amazon trails Google’s TPU in research prestige but leads in commercial deployment and customer adoption.
Layer 2: Foundation Models (★★★☆☆)
What It Is: The large language models and multimodal systems that provide core intelligence.
Amazon pursues a multi-model strategy rather than betting everything on proprietary development. Bedrock now serves 100,000+ companies with access to 20+ models, including Amazon's Nova family, Anthropic's Claude, and now OpenAI's GPT-4/5.
The Nova family expanded significantly—Nova 2 Lite and Nova 2 Pro target frontier intelligence at competitive cost. But Nova doesn’t lead on benchmarks. When enterprises need the most advanced reasoning, they choose Claude or GPT-4, which run on Amazon’s infrastructure but belong to other companies.
Verdict: Adequate through partnerships, but proprietary models lag the frontier. The risk: Amazon becomes “best place to run someone else’s brain.”
Layer 3: Agent Infrastructure and Orchestration (★★★★★)
What It Is: The control plane that governs how agents operate—policy enforcement, memory management, evaluation frameworks, and orchestration.
AgentCore represents Amazon’s most differentiated offering. It answers the enterprise question that matters most: how do I deploy autonomous agents without losing control?
The components work together as an integrated governance system: Cedar language for fine-grained permissions, real-time evaluation monitoring, persistent memory across sessions, and framework-agnostic deployment supporting LangChain, CrewAI, AutoGen, and more.
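To make the governance idea concrete, here is a minimal, purely illustrative Python sketch of the pattern a control plane like AgentCore implements: every action an agent proposes passes through a declarative policy check before it executes, with deny-by-default semantics. All class, policy, and agent names here are hypothetical, not AgentCore's actual API.

```python
# Illustrative sketch of an agent governance gate: every proposed
# action is checked against explicit policies before it runs.
# Names are hypothetical, not AgentCore's real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    agent: str     # which agent the rule applies to
    action: str    # action it may perform, e.g. "read_ticket"
    resource: str  # resource prefix it may touch

class GovernanceGate:
    def __init__(self, policies):
        self.policies = policies

    def is_permitted(self, agent, action, resource):
        # Permit only if an explicit policy matches (default deny).
        return any(
            p.agent == agent
            and p.action == action
            and resource.startswith(p.resource)
            for p in self.policies
        )

gate = GovernanceGate([
    Policy("support-bot", "read_ticket", "tickets/"),
])

print(gate.is_permitted("support-bot", "read_ticket", "tickets/123"))    # True
print(gate.is_permitted("support-bot", "delete_ticket", "tickets/123"))  # False
```

The design choice worth noting is default deny: an agent can do nothing it has not been explicitly granted, which is the property enterprises are really buying when they ask how to deploy autonomous agents "without losing control."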
Verdict: Clear leadership in enterprise agent governance. No competitor matches AgentCore’s depth.
Layer 4: AI-Powered Tools and Agents (★★★★☆)
What It Is: Specific agents and AI-powered tools that perform defined tasks.
Amazon launched “frontier agents”—a new class designed for autonomous, extended operation. Kiro handles software development, the Security Agent embeds throughout the development lifecycle, the DevOps Agent handles incident response, and AWS Transform has analyzed 1.8 billion lines of mainframe code.
Verdict: Strong breadth, but trails Microsoft in the developer ecosystem. GitHub Copilot’s installed base and VS Code integration give Microsoft a distribution edge.
Layer 5: Enterprise Applications (★★★☆☆)
What It Is: Complete applications that solve business problems.
Amazon Connect reached $1 billion in annualized revenue, growing over 30% and handling 20 million daily interactions. It proves the unit economics of AI labor substitution at enterprise scale.
Verdict: Proven in contact centers, but lacks breadth. Microsoft’s M365 Copilot embeds AI into applications where hundreds of millions of knowledge workers spend their days.
Layer 6: Consumer Distribution and Data (★★★★☆)
What It Is: Consumer-facing products that generate usage data and distribution for AI capabilities.
Rufus reached 300 million+ users and drove nearly $12 billion in incremental annualized sales. The commerce data moat—purchase intent, price sensitivity, conversion patterns—has no equivalent. Alexa+ extends AI presence into the home with 500 million+ devices.
Verdict: Unique commerce moat, but narrower reach than the 3 billion Android devices in Google’s ecosystem.
This is part of a comprehensive analysis. Read the full analysis on The Business Engineer.