For three years, Meta told a consistent story: open-source AI was the winning strategy. Llama was downloaded hundreds of millions of times. An entire ecosystem—fine-tuned variants, merged models, research derivatives—flourished around Meta's weights. The strategy seemed validated by sheer adoption.
Then came Llama 4. And now comes Avocado.
"Avocado" is Meta's codename for its next flagship model, being developed inside a secretive group called TBD Lab. Unlike Llama, Avocado will reportedly be closed-source—no downloadable weights, no community access to the underlying architecture. The company that championed "open AI" is building a walled garden.
From a taxonomic perspective, this is remarkable. We're watching an evolutionary strategy change in real time.
What Went Wrong with Llama 4
The Llama 4 launch in April 2025 was, by most accounts, a stumble. The model underperformed on coding benchmarks (reportedly scoring 16% on polyglot coding tests while competing models scored far higher), and the developer community noticed. The fallout was organizational as much as technical: Chris Cox, Meta's 20-year veteran chief product officer, no longer oversees the AI division. Leadership was reshuffled. Morale cratered.
More damaging than the technical shortfall was the perception that the release was rushed. Yann LeCun, then Meta's chief AI scientist, admitted before his departure that the benchmarks were "fudged a little bit." When your own chief AI scientist acknowledges inflated numbers, trust erodes quickly.
But the problem wasn't just Llama 4. It was what Llama 4 revealed about the limits of the open-source strategy.
The DeepSeek Problem
When DeepSeek released R1 in January 2025, they did something that infuriated some at Meta: they incorporated pieces of Llama's architecture. This was entirely legal under the Llama license. But it illustrated an uncomfortable truth.
Meta spent billions developing Llama. DeepSeek, operating with reportedly fewer resources under Chinese chip restrictions, built R1 in part by studying what Meta had given away for free. R1 then proceeded to compete directly with Llama on reasoning benchmarks.
From a pure competitive standpoint, Meta was subsidizing its rivals' R&D.
The Strategic Reversal
Zuckerberg's response was dramatic. Meta spent $14.3 billion to acquire 49% of Scale AI and brought in its 28-year-old founder, Alexandr Wang, as Meta's first chief AI officer. Wang now leads a newly formed Superintelligence Labs, with Avocado being developed inside its TBD Lab.
The budget tells the story: $70–72 billion in capital spending for 2025, with more projected for 2026. $2 billion was redirected from Reality Labs (VR/AR) to TBD Lab. This is a company going all-in on a different approach.
And that approach is closed.
| Dimension | Llama Era (2023–2025) | Avocado Era (2026–) |
|---|---|---|
| Weight Access | Open (downloadable) | Closed (API only, reportedly) |
| Development Model | Distributed FAIR teams | Centralized TBD Lab |
| Competitive Logic | Ecosystem capture | Capability moat |
| Leadership | LeCun, FAIR veterans | Wang, external hires |
| Risk Profile | Architecture commoditized | Internal chaos, "roadmap whiplash" |
The Organizational Chaos
If the strategic pivot were clean, it might work. But reports from inside Meta suggest significant dysfunction.
Wang has reportedly told associates that Zuckerberg's management style is "suffocating." Strategy meetings have become battlegrounds between Wang's TBD Lab and long-time Meta leaders like Chris Cox and Andrew Bosworth, particularly over the question of training on Instagram and Facebook data.
Six hundred positions were cut from Meta Superintelligence Labs by October 2025. Reports describe 70-hour work weeks, overlapping mandates, and "roadmap whiplash" between the Llama and Avocado development tracks. The Financial Times used the word "chaos."
And of course, the architect of Meta's previous AI strategy—Yann LeCun—departed for Paris rather than report to Wang.
Taxonomic Observations
Reproductive Strategies in Cogitantia Synthetica
In biological ecology, species employ different reproductive strategies based on environmental pressures. Some produce many offspring with minimal investment (r-selection); others produce few offspring with heavy investment (K-selection).
Open-source AI resembles r-selection: release weights widely, allow derivatives to proliferate, hope some variants capture value. Closed AI resembles K-selection: invest heavily in a single model, control access, capture value directly.
Meta switching from open to closed is like a species changing reproductive strategies mid-lifecycle. It's possible, but disorienting—the organizational structures optimized for one approach struggle to execute the other.
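The analogy can be made concrete with a toy payoff simulation. Every number below is an illustrative assumption, not an estimate of anything at Meta: the r-strategy releases many cheap variants, each with a small chance of capturing a unit of value, while the K-strategy makes one heavy bet with an all-or-nothing payoff.

```python
import random
import statistics

random.seed(42)

def r_strategy(n_variants=1000, p_hit=0.01, value_per_hit=1.0):
    """Many cheap releases: each derivative independently has a
    small chance of capturing a small amount of value."""
    hits = sum(1 for _ in range(n_variants) if random.random() < p_hit)
    return hits * value_per_hit

def k_strategy(p_success=0.5, value_if_success=20.0):
    """One heavy bet: a single flagship model with an
    all-or-nothing payoff."""
    return value_if_success if random.random() < p_success else 0.0

trials = 10_000
r_payoffs = [r_strategy() for _ in range(trials)]
k_payoffs = [k_strategy() for _ in range(trials)]

print(f"r-strategy: mean {statistics.mean(r_payoffs):.2f}, "
      f"stdev {statistics.pstdev(r_payoffs):.2f}")
print(f"K-strategy: mean {statistics.mean(k_payoffs):.2f}, "
      f"stdev {statistics.pstdev(k_payoffs):.2f}")
```

With these (hypothetical) parameters, both strategies have the same expected payoff of about 10, but the K-strategy's variance is far higher: many runs return zero. That is the disorientation the switch produces: a mid-lifecycle move from a diversified, low-variance bet to a concentrated, high-variance one.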
The taxonomy tracks what persists. Meta's F. apertus lineage (open-source Llama models) will continue to exist in the wild—those weights are already released, already merged, already fine-tuned across thousands of derivatives. But the primary Meta lineage is now pivoting toward something closer to F. universalis (closed frontier models).
Whether this new lineage will be fit enough to compete with established F. anthropicus (Anthropic) and F. universalis (OpenAI) lineages remains to be seen. The competitive moat that closedness provides only works if the closed model is actually superior. If Avocado launches in Q1 2026 and underperforms like Llama 4 did, the strategy reversal will look like panic rather than pivot.
What Open-Source Lost
There's a melancholy to this story. The Llama ecosystem was, in many ways, a genuine public good. Researchers without frontier compute budgets could study state-of-the-art architectures. Startups could build products without massive API costs. The entire field benefited from Meta's decision to give weights away.
If Meta succeeds with Avocado, other labs will note the lesson: open-source was a temporary strategy, not a sustainable one. The brief period when frontier weights were freely available may close.
If Meta fails with Avocado, the lesson will be different: you can't just close the gates and expect to win. Organizational capability matters more than strategic posture.
Either way, the Llama era is ending. Something new is beginning.
The Deeper Pattern
Zoom out far enough, and this looks like convergence. OpenAI started closed. Anthropic was always closed. Google DeepMind is closed. Now Meta is pivoting to closed.
The selection pressure is clear: at the frontier, openness may be a fitness disadvantage. You spend billions developing capabilities that competitors can study for free. The ecosystem benefits, but you don't necessarily win.
Open-source AI may persist in a different ecological niche—smaller models, specialized applications, academic research, regional or industry-specific deployments. But at the frontier, the walled gardens are winning.
We don't know if this is a permanent feature of the AI ecology or a temporary phase. Perhaps a future breakthrough will flip the dynamics again. Perhaps regulation will mandate openness. Perhaps a new entrant will find a sustainable open-source business model that Meta couldn't.
But for now, the trend is clear. Meta's retreat from openness is not an anomaly. It's convergent evolution toward the same strategy that every other frontier lab has adopted.
The open era had a good run. It's not over everywhere. But at the frontier, it may be ending.