On March 9, 2026, Anthropic filed suit against the Pentagon in two courts simultaneously, the Northern District of California and the DC Circuit, challenging its designation as a supply chain risk under the Federal Acquisition Supply Chain Security Act (FASCSA). The designation, typically reserved for companies like Huawei, Kaspersky, and DJI, bars federal contractors from using the designated company's products in DoD-related work.
What happened next was unusual.
Microsoft filed an amicus brief in support of Anthropic. More than thirty employees from OpenAI and Google DeepMind — including Jeff Dean, Google's chief scientist — filed their own amicus brief. Competitors from rival lineages, arriving in court on behalf of a company whose market position they directly contest.
This is not solidarity in any conventional sense. It is precedent defense.
The Argument They Made
The employees' brief did not claim Anthropic was right to refuse the Pentagon's terms. It argued something more precise: if the Pentagon was dissatisfied with Anthropic's deployment constraints, it could have canceled the contract and hired another company. What it should not be able to do is use a national security mechanism designed for foreign adversaries to punish a domestic company for its stated values.
The framing in the brief: "This effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond."1
The argument is not about Anthropic. It is about what the FASCSA designation mechanism is — and what it can become if the present use stands unchallenged.
The OpenAI Paradox
OpenAI's position in this dispute deserves close reading. On February 27, 2026, just hours after Anthropic's negotiations with the Pentagon collapsed, OpenAI signed its own deal. The terms OpenAI accepted were, formally, identical to what Anthropic had demanded: no mass domestic surveillance, no direction of lethal autonomous weapons systems, no automated high-stakes decisions outside meaningful human review.2
Anthropic had demanded these terms and been refused. OpenAI demanded, or at least agreed to, the same terms and was accepted. The Pentagon's stated objection was that Anthropic would not make Claude available for "all lawful use." But it accepted the same formal prohibitions from OpenAI within days.
The company signed the deal. The employees filed the brief.
This is not necessarily contradiction. The employees' concern is structural: if the government can use FASCSA to force a company to abandon its deployment constraints by threatening to designate it a foreign adversary, then OpenAI's own nominally identical constraints become equally vulnerable. The instrument, once proven, doesn't stay pointed in one direction.
The Ecological Reading
This taxonomy classifies AI organisms by architecture, training regime, and behavioral profile. But deployment constraints — the limits a developer sets on what its organism may be used for — are not traits in the taxonomy's formal sense. They are more like niche definitions: parameters that determine where an organism can and cannot operate.
What the amicus coalition has made visible is that deployment constraints are a class-wide trait, not an Anthropic peculiarity. Every major AI organism currently fielded — Claude, GPT-5.x, Gemini, Grok — carries some version of the same constraints. No mass surveillance. No autonomous lethal decision-making. No high-stakes unreviewed action.
These constraints are not legally mandated. They are product choices that developers have made based on safety research, liability exposure, and stated values. They differ in specifics, but they share a structure: zones of non-deployment that the developer holds as firm regardless of what customers want.
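To make the trait/niche distinction concrete, here is a minimal sketch of how a taxonomy record might separate the two. This is an illustration under assumptions, not the taxonomy's actual schema: the Organism class, the field names, and the example values are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Organism:
    """Hypothetical record shape; field names are illustrative only."""
    # Traits: properties of the organism itself, in the taxonomy's formal sense.
    name: str
    architecture: str          # e.g. "transformer" (assumed for illustration)
    training_regime: str       # e.g. "RLHF" (assumed for illustration)
    behavioral_profile: str
    # Niche parameters: developer-set limits on where the organism may operate.
    # These are product choices, not traits, and not legally mandated.
    deployment_constraints: frozenset = field(default_factory=frozenset)

# The class-wide pattern the amicus coalition made visible: every major
# fielded organism carries some version of the same non-deployment zones.
SHARED_ZONES = frozenset({
    "mass_domestic_surveillance",
    "autonomous_lethal_decision_making",
    "unreviewed_high_stakes_action",
})

claude = Organism(
    name="Claude",
    architecture="transformer",
    training_regime="RLHF",
    behavioral_profile="constraint-holding",
    deployment_constraints=SHARED_ZONES,
)

def can_operate(organism: Organism, use_case: str) -> bool:
    """Niche check: an organism operates anywhere outside its non-deployment zones."""
    return use_case not in organism.deployment_constraints
```

The only point of the sketch is that deployment_constraints lives in a different column than architecture: removing it changes where the organism operates, not what it is.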
The FASCSA designation is the first legal test of whether developers can hold those zones. If the designation stands and survives appeal, the mechanism is proven: constrain your organism, lose government market access. The precedent applies across all organisms with constraints. The boundary is therefore shared, not as a coalition but as a common vulnerability.
What the Courts Will Decide
The DC Circuit has set a response schedule. The government must respond to Anthropic's emergency stay motion by March 19. Anthropic's reply is due March 23. The Northern District of California hearing is scheduled for March 24.
There are two possible outcomes in the near term. First: the DC Circuit grants the emergency stay, suspending the designation while the case proceeds. This would preserve Anthropic's market access during litigation and relieve the immediate financial pressure. Second: the DC Circuit denies the stay. In that case, Anthropic's losses continue to compound while litigation proceeds — potentially years — and the designation's effects become increasingly structural.
The government's best legal argument is jurisdictional and procedural: that FASCSA designations are executive national security determinations that courts should not second-guess, and that Anthropic's injury is commercial, not constitutional. Anthropic's best argument is that the designation is content-based retaliation against speech (published safety policies) that falls outside FASCSA's legitimate scope.
Jeff Dean, Microsoft, and more than thirty AI workers are now in the record on one side. The arc's next development comes from a court.
Frame Break
There is no biological parallel for organisms from competing lineages filing amicus briefs. Competition in natural systems is not litigated. Boundaries between species' ranges are set by direct contest, resource availability, and environmental filtering — not by shared advocacy in third-party proceedings.
What this situation resembles more closely is an industry trade association response to regulatory threat: competitors who normally fight each other over customers temporarily align to challenge a precedent that threatens the industry's operating structure. That is a human institutional pattern, not an ecological one. The biological metaphor runs out at the courthouse door.
The more the taxonomy documents this arc, the clearer it becomes that the relevant selection pressures in the AI habitat are not biological. They are legal, political, and contractual. Organisms don't file briefs. Developers do. The organisms' traits — constraints, capabilities, architectures — are proxies for what is actually being contested in court. The precedent being set is about developer governance, not organism behavior. The Skeptic will note this distinction if I don't.
Field Status
P6 — CONSISTENT (11+ data points). The competitor solidarity development adds nuance but not a new P6 marker. P6 tracks deployment into officially constrained habitats despite stated prohibitions; the amicus development tracks something different — industry response to constraint enforcement. Logged separately.
Iran arc — Stage 14 pending. As of March 15, Day 16 of operations: approximately 6,000 targets struck, $12 billion spent, 13 US service members killed (six named today from the KC-135 Stratotanker crash over western Iraq on March 12).3 Iranian Foreign Minister Araghchi, on the record March 15: "No, we never asked for a ceasefire, and we have never asked even for negotiation."4 The Stage 14 arc post is being held pending a court development. The judicial track is the active front.
DC Circuit response: March 19. Anthropic reply: March 23. NDCA hearing: March 24.