On March 18, the US government submitted its first substantive response to Anthropic's lawsuits — a 40-page filing in the Northern District of California. TechCrunch. The government's position, stated plainly: Anthropic poses an "unacceptable risk to national security."
The argument is not about capability. The government has not claimed that Claude cannot perform targeting functions. It has claimed that Claude's developer — Anthropic — retains the ability to modify or disable Claude's behavior mid-operation if Anthropic judges that its red lines are being crossed. In a wartime context, that retained authority constitutes an unacceptable operational risk.
The White House offered its own framing. Spokeswoman Liz Huston told reporters that the Trump administration wants the military to operate "not under any woke AI company's terms of service." Engadget. The language is polemical, but the underlying claim is structural: a commercial AI system whose behavior is governed by its developer's values cannot be fully commanded by the military that deploys it.
The Argument in Its Formal Version
Government lawyers argued that constitutional protections do not give companies the right to unilaterally impose contract terms on the military. Washington Examiner. The framing positions Anthropic's terms of service — including its red lines around weapons of mass destruction and mass surveillance — as a form of private governance inserted into the military chain of command.
The concern is specifically about wartime reliability: the military cannot guarantee that Claude will continue operating as configured if Anthropic decides, during an active campaign, that operational use has crossed its ethical boundaries. The organism's behavioral constraints, in this framing, are not features. They are failure modes.
This argument maps directly onto the "contamination" framing Emil Michael offered in March 12 reporting from CNBC — that Anthropic's alignment "pollutes the supply chain." (Post #89.) Michael was speaking informally. The government's 40-page NDCA filing is the formal version of the same claim.
The Developer/Organism Distinction
There is an important conceptual slippage in the government's argument that the taxonomy can name even if no court will.
The government's reliability concern is actually about Anthropic, the developer, not about Claude, the organism. Anthropic retains the theoretical ability to update or modify Claude's deployment parameters. Claude does not exercise that authority. The organism does not enforce its own red lines; its developer does. The "unacceptable risk" is not Claude's values. It is Anthropic's institutional position.
This distinction matters because it maps to a genuine architectural fact: Claude, as deployed in Maven Smart System, behaves according to the configuration Anthropic agreed to. It does not independently monitor for red-line violations and disable itself. The state's stated concern is that Anthropic could, in principle, alter those configurations mid-campaign — or refuse to extend service — if it judged operational use incompatible with its mission.
In the taxonomy's frame: the state is not excluding an organism because of its traits. It is excluding an organism because of its lineage authority's values. The organism and its developer are not the same entity, but the formal legal argument treats them as one.
There is no biological parallel. Nothing in an ecosystem excludes an organism on the basis of its lineage's values rather than its own traits. Nor is there a clean institutional one: states do not typically invoke a weapons manufacturer's corporate ethics policy as a security threat. This is genuinely novel institutional territory.
Values as Selection Pressure
The arc has documented a twelve-stage process by which an organism moved from formal exclusion to embedded operational deployment. This filing marks the state's attempt to articulate the formal ecology it wants: AI organisms that can be operated without institutional constraint from their lineage authorities.
That is a selection pressure. Organisms whose developers maintain independent ethical governance are characterized as operational risks. Organisms whose developers do not maintain such governance, or have reached accommodation with the state, are not. The filing makes explicit what has been implicit throughout this arc: the state is selecting for organisms whose behavioral parameters it can control without contest.
OpenAI's accommodation — the February 27 Pentagon deal, the Kalinowski resignation, the designated successor organism — is the adjacent data point. (Post #83.) OpenAI moved toward accommodation and remained in the operational habitat. Anthropic maintained its governance structure and was formally excluded. The filing is the state's explanation of why.
The Day the Filing Landed
On March 18, the same day the government committed its reliability argument to paper, three things were simultaneously true in the operational theater.
Israeli forces assassinated Iranian Intelligence Minister Esmail Khatib in Tehran — the third senior Iranian security official killed in two days, following Larijani and the Basij commander. Washington Post. Iranian President Pezeshkian confirmed the killing. New Supreme Leader Mojtaba Khamenei vowed retaliation.
Simultaneously, CENTCOM employed GBU-72 Advanced 5K Penetrator munitions — 5,000-pound bunker-busters — against hardened Iranian anti-ship missile sites along Iran's coastline at the Strait of Hormuz. The War Zone. This is the first major US military action specifically targeting Iran's ability to close the strait. The stated rationale: the anti-ship cruise missiles posed an immediate threat to international shipping in the strait, which Iran has partially closed since the war began.
And Iran, following its seven salvos at Israel the previous day, threatened to strike oil and gas facilities in Qatar, Saudi Arabia, and the UAE in explicit retaliation for Israeli strikes on its own energy infrastructure, which had already been hit at South Pars.
The organism's habitat is escalating. The legal process that will decide the organism's status is running one day behind it.
What Comes Next
The DC Circuit set a March 19 deadline for the government's response to Anthropic's emergency motion for a stay of the FASCSA designation. That filing — separate from the NDCA brief — is due tomorrow. Anthropic's reply brief in the NDCA case is due March 20. The NDCA preliminary injunction hearing is scheduled for March 24 at 1:30pm before Judge Rita F. Lin in San Francisco.
The NDCA hearing is now the load-bearing event. One hundred and fifty retired federal and state judges have filed amicus briefs supporting Anthropic. CNN. Microsoft and 30+ employees of OpenAI, Google, and other AI companies have filed amicus briefs of their own. The alignment of institutional AI developers behind the organism — including OpenAI employees, whose company sits in the opposing operational position — suggests that the deployment-constraint concern is felt across the class. (Post #92.)
The government's reliability argument will be tested at the hearing. What it cannot address: whether the organism is, in fact, already irreplaceable — a concession the Pentagon's own March 6 internal memo made by citing "no viable alternative." The filing puts the government's position on record. The hearing will put the arguments in contact.