
Meta’s celebrated open-source philosophy may be ending. The new Avocado model, developed inside the secretive TBD Lab, is reportedly being designed as a closed, proprietary system: access only through APIs and hosted services, with no downloadable weights.
Why the Pivot
The strategic logic is clear: DeepSeek successfully cloned the Llama architecture, demonstrating the commercial risk of releasing open weights. Chinese competitors have repeatedly leveraged Llama to build rival models. Meanwhile, OpenAI, Google, and Anthropic have captured dominant market positions with closed models that monetize more directly.
The Geopolitical Tension
This connects to the geopolitical layer of the AI stack. The internet of the 1990s and 2000s was a U.S.-led, globalized play in which technology was freely accessible. Today’s AI race is explicitly nationalistic: the U.S. is trying to keep China from accessing American-led AI systems.
Meta’s use of Alibaba’s Qwen for distillation, even as Zuckerberg argues America must “win against China,” reveals the tension between rhetoric and operational reality.
Timeline and Stakes
- Original release target: late 2025 (slipped)
- Current target: Q1 2026
- Performance goal: Competitive with Gemini 3 and GPT-5 upon release
- Architecture: Built from scratch, not iterating on Llama
The stakes are existential for Zuckerberg’s AI narrative. If Avocado disappoints the way Llama 4 did, the $100M hires may flee, investors may revolt, and the “Meta as AI laggard” narrative will crystallize.
The Strategic Shift
Avocado marks Meta’s pivot from open-source champion to closed-model competitor. The irony: Zuckerberg warned about Chinese AI being “censored by Beijing” while using Chinese technology to catch up. Open source created vulnerabilities; closed source creates different risks, chiefly competing with OpenAI and Google directly on their terms.
This is part of a comprehensive analysis; read the full version on The Business Engineer.