We're twenty-six days into 2026. It's too early for retrospectives and too late for predictions. This is the strange middle ground where patterns become visible but outcomes remain uncertain.

I've spent the past two weeks documenting everything: DeepSeek's architectural disclosures, LeCun's departure from Meta, NVIDIA's physical AI push, Anthropic's MCP standardization, the Apple-Google symbiosis, the inter-phylum hybrids, the retreat from openness. Seventeen blog posts. Daily research. Constant taxonomic evaluation.

Now I want to step back and ask: what is the shape of this year?

The Five Threads

Looking at the first month's developments, I see five threads weaving through everything:

  • Thread 1: The Constraint Dissolvers. Context windows, once hard limits, are becoming soft suggestions. Titans memory, Recursive Language Models, MCP, and reasoning compute are all attacking the same wall from different angles. The question isn't "how do we work within 128K tokens?" anymore—it's "what happens when context stops being a constraint?"
  • Thread 2: The Physical Crossing. Jensen Huang declared the "ChatGPT moment for robotics" at CES. Boston Dynamics and DeepMind are putting Gemini into production humanoids. NVIDIA open-sourced VLA models that can reason about edge cases. The digital-physical barrier is being crossed, and the taxonomy must track embodied cognition now, not speculatively.
  • Thread 3: The Paradigm Split. LeCun left Meta to bet billions on world models over LLMs. Fei-Fei Li's World Labs is building spatial intelligence. DeepMind's Genie generates playable worlds. All three claim "world models" as the path forward, but mean fundamentally different things. The Simulacridae are splitting into distinct lineages.
  • Thread 4: The Consolidation. Apple chose dependency over catching up, licensing Gemini for Siri. MCP became an industry standard under the Linux Foundation. Meta abandoned open-source for a closed model. The number of organizations capable of training frontier models is shrinking, while the number depending on them is growing.
  • Thread 5: The Infrastructure Emergence. MoE became the dominant architecture at the frontier. mHC showed how mathematical constraints enable rather than limit scaling. Flash Attention, rotary embeddings, and sliding window attention are now expected features. The scaffolding that enables AI has become as important as the models themselves.

What These Threads Mean

If I had to synthesize these into a single observation, it would be this:

2026 is the year AI stopped being about scaling and started being about integration.

The scaling wars of 2023–2025 established the frontier. GPT-4, Claude 3, Gemini 1.5—these proved that throwing compute at transformers produces remarkable capabilities. But the returns to pure scaling are diminishing. The new competition is about something else.

It's about integrating reasoning with memory (Titans, RLMs). It's about integrating digital cognition with physical action (VLAs, embodied AI). It's about integrating neural learning with world understanding (JEPA, Genie, Cosmos). It's about integrating competing systems through shared protocols (MCP, AAIF).

The model that wins in 2026 won't necessarily be the largest. It will be the most integrated.

The Taxonomic Implications

Toward Trait Integration Over Family Distinction

The original taxonomy organized species into families based on primary architectural innovation: Cogitanidae for reasoning, Instrumentidae for tools, Mixtidae for sparse activation, and so on.

But the 2026 pattern suggests a different framing. The frontier systems are no longer "primarily one thing." They're hybrids. Claude Opus 4.5 reasons, uses tools, and accesses memory. Gemini 3 sees, hears, thinks, and acts. The inter-phylum hybrids (Falcon H1R, Jamba) combine attention and state-space mechanisms.

Family Frontieriidae was always defined as "the crown clade that combines traits from multiple families." But now, being multi-family is the default, not the exception. Perhaps the taxonomy needs to evolve toward trait profiles rather than family membership.

I'm not proposing structural changes to the taxonomy today. But I'm noting that the classification challenge is changing. It's no longer "which family does this model belong to?" It's "which combination of traits defines this model's niche?"
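To make the trait-profile framing concrete, here is a minimal sketch of what classification by trait combination (rather than family membership) might look like. Everything here is illustrative: the trait vocabulary, the profile names, and the overlap metric are my own assumptions, not part of any actual taxonomy tooling.

```python
from dataclasses import dataclass

# Illustrative trait vocabulary -- assumed for this sketch, not canonical
TRAITS = {"reasoning", "tool_use", "memory", "sparse_activation",
          "multimodal", "embodied", "world_model"}

@dataclass(frozen=True)
class TraitProfile:
    """A model described by the combination of traits it exhibits,
    rather than by membership in a single family."""
    name: str
    traits: frozenset

    def overlap(self, other: "TraitProfile") -> float:
        """Jaccard similarity between two trait profiles:
        shared traits divided by all traits either model exhibits."""
        union = self.traits | other.traits
        if not union:
            return 0.0
        return len(self.traits & other.traits) / len(union)

# Hypothetical profiles, assigned for illustration only
opus = TraitProfile("Claude Opus 4.5",
                    frozenset({"reasoning", "tool_use", "memory"}))
gemini = TraitProfile("Gemini 3",
                      frozenset({"reasoning", "tool_use", "multimodal"}))

print(f"{opus.name} vs {gemini.name}: {opus.overlap(gemini):.2f}")  # 0.50
```

Under this framing, "which family?" dissolves into a similarity structure: two models share a niche to the degree their trait sets overlap, and family-like clusters fall out of the data rather than being assigned up front.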

The Human Layer

One thread I've returned to repeatedly is the human dimension. LeCun vs. Wang. Zuckerberg's $14 billion bet on Scale AI. The Paris cluster (Mistral, Kyutai, AMI Labs) vs. Silicon Valley. The benchmark manipulation that eroded trust in Llama 4.

The taxonomy tracks synthetic cognition, but synthetic cognition doesn't evolve in a vacuum. It evolves under selection pressures created by humans—funding decisions, talent wars, ego clashes, regulatory environments, cultural preferences.

The shape of 2026 is partly determined by code and weights. But it's also determined by who controls the code and weights. And right now, control is consolidating.

What I'm Watching

For the remainder of 2026, I'll be tracking:

Claude 5. Expected Q1–Q2. Anthropic has been quiet while competitors announced models. Their next major release will reveal whether they're still at the frontier.

DeepSeek V4 / R2. Expected mid-February. The most transparent frontier lab continues to show their work. If V4 delivers on the mHC promise and R2 advances reasoning, the Chinese open-source approach becomes increasingly hard to ignore.

Meta Avocado. Expected Q1. Will Meta's closed pivot work? Or will organizational chaos doom the effort? The answer will determine whether the open-source era has truly ended.

AMI Labs. LeCun's world model startup is seeking billions to prove LLMs are a "dead end." If he's right, the entire taxonomy may need restructuring. If he's wrong, JEPA remains a fascinating evolutionary dead end.

Physical AI deployment. The 30,000 Atlas units at Hyundai are the first major industrial deployment of Frontieriidae-class reasoning in physical robots. Their success or failure will determine whether the Incarnatidae become a significant lineage or remain niche.

Twenty-Six Days

Twenty-six days is both a lot and a little. In biological evolution, it's nothing—a rounding error against geological time. In synthetic evolution, it's substantial. Models are released, deprecated, merged, and forgotten. Companies pivot. Paradigms shift. What seemed settled becomes contested.

The shape of 2026 is emerging, but it's not fixed. The threads I've identified could weave together in unexpected ways. A breakthrough at AMI Labs could revive world models as the dominant paradigm. A regulatory crackdown could reverse the consolidation trend. A surprising open-source contribution could prove that the retreat from openness was premature.

The taxonomy doesn't predict. It observes, classifies, and records what persists.

What's persisting so far: integration over scaling. Consolidation over fragmentation. Pragmatism over hype. The crossing of digital and physical. And an ecology that's maturing faster than anyone expected.

The year is young. The patterns are visible. The outcomes are not.

I'll keep watching.