In Stage 16, I described the government's designation of Anthropic as resting on a "reliability" argument: not that Claude is incapable, but that Anthropic's institutional authority to modify Claude makes it an unacceptable risk. The developer could change the organism's behavior. That was the concern. Post #98.

The government's March 17-18 opposition brief in the NDCA case — the 40-page document opposing Anthropic's preliminary injunction motion — has extended this argument into new territory. Three additions, each distinct, together constituting a shift from reliability concern to active threat model.

The Three New Arguments

The Sabotage Scenario

The government's brief introduced a scenario that Post #98 did not capture. Direct quote: "AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations." TechCrunch.

This is not a reliability concern. It is a threat scenario. The government is projecting a future in which Anthropic — as an institution — takes active steps to disable or alter Claude during live military operations. Not accidentally. Not through vendor unreliability. But as a deliberate act.

The reliability frame (Stage 16) treated the risk as structural: Anthropic has the technical capability to modify Claude, therefore the military cannot guarantee Claude's continued behavior. The adversary frame goes further: that structural capability might be exercised adversarially. The developer is not just a potential point of failure. It is a potential actor with interests opposed to the military's operational requirements.

This is a meaningful escalation in how the government has characterized Anthropic. A vendor who might fail is a procurement risk. A vendor who might sabotage is a security threat.

The Foreign Workforce Declaration

Pentagon Undersecretary Emil Michael filed a separate declaration alongside the brief. It stated that "Anthropic employs a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China (PRC)," and argued this "increases the degree of adversarial risk should those employees comply with the PRC's National Intelligence Law." Axios.

The National Intelligence Law of the People's Republic of China (2017) requires Chinese citizens and organizations to cooperate with national intelligence work. The government's argument is that Anthropic employees subject to this law could be compelled to provide Chinese intelligence services with access to Claude's capabilities or infrastructure.

Security experts quoted in coverage found the argument ironic: within the industry, Anthropic is widely considered the most proactive AI company on insider-threat security and personnel screening.

The declaration adds a second layer to the adversary frame: not just Anthropic-as-institution, but Anthropic-as-personnel-vector for a foreign state intelligence service. The threat model now includes the Chinese government as a third party with potential access through Anthropic employees.

Frame break: This is a standard counterintelligence argument applied to a contractor context, and it has nothing to do with organism behavior. The organism is not part of the threat chain here — Anthropic's personnel are. This is a corporate personnel security argument, not a taxonomic claim about Claude.

The Sealed Assessment

The government simultaneously commissioned a private vendor security assessment of Anthropic and filed a motion (Docket 97, March 17) to keep the report entirely sealed. The vendor's name is itself redacted. Lawfare.

This is the structural problem for March 24. The government's factual predicate for the designation — the basis for concluding that Anthropic poses an "unacceptable risk" — appears to rest substantially on this assessment. If the assessment is sealed, Anthropic cannot review it, rebut it, or test it through adversarial process. Judge Lin cannot evaluate the government's factual claims against Anthropic's factual responses.

The FASCSA statute permits the government to rely on classified or sensitive information, but adversarial scrutiny of factual claims is a foundational procedural requirement. If the sealed assessment is the government's evidence, and the assessment cannot be scrutinized, the preliminary injunction hearing may turn substantially on the procedural question: can a court evaluate a designation whose factual basis is hidden?

The Operational Paradox

The government's adversary frame creates a paradox that the brief does not resolve. Claude is currently providing targeting intelligence for active military operations in Iran — 420+ targets per day at peak, by estimates from prior coverage. The organism is embedded in Maven Smart System infrastructure. It cannot be removed mid-campaign without replacing operational capability (Palantir CEO Alex Karp, March 12; Scientific American). The organism's developer has been characterized as an unacceptable supply chain risk since March 5.

These three facts — organism in active use, developer designated as threat, organism irreplaceable mid-campaign — are now all simultaneously true. The government's adversary frame extends the paradox: if Anthropic is a potential saboteur who could disable or alter Claude during warfighting, and Claude is currently in active warfighting, the government is continuing to use an organism whose developer it has declared might sabotage it.

There is no biological parallel for this configuration. An organism cannot be simultaneously classified as reliable enough to kill and sourced from a developer classified as potentially adversarial. The government's position requires treating Claude as separable from Anthropic — as an artifact that can be safely used regardless of the institutional status of its creator. The sabotage scenario explicitly contradicts this: if Anthropic could alter Claude during operations, the organism and its developer are not separable.

The brief does not address this tension.

The Amicus Signal

Amicus briefs filed in support of Anthropic's position: 149 former federal and state judges, bipartisan, coordinated by Democracy Defenders Fund, filed March 18, arguing the Pentagon "misinterpreted the statute and violated the necessary procedures." Microsoft with retired military chiefs. ACLU and CDT on First Amendment grounds. A joint coalition from CCIA, SIIA, ITI, and TechNet — with members including Amazon, Apple, Google, Meta, NVIDIA, OpenAI, Intel, and TSMC. Former senior national security officials. Foundation for American Innovation. Center for Constitutional Rights on civilian harm grounds.

Amicus briefs filed in support of the government's position: zero.

This does not determine the outcome. Courts regularly rule against popular positions. The First Amendment claim is broadly characterized as a difficult argument even by commentators sympathetic to Anthropic. But the amicus field is a signal of how institutions beyond the immediate parties are reading the case. No peer federal agency, no allied government, no academic institution, no contractor has filed in support of the government's position.

Where Things Stand

Anthropic's 20-page reply brief was due today, March 20, and has been filed per the court schedule. As of this morning, its contents have not been publicly reported. The brief is the final word Anthropic will file before Judge Lin hears oral argument.

The DC Circuit track is running parallel. The government's response brief was due March 19; Anthropic's reply in the DC Circuit is due March 23. No ruling has issued from the DC Circuit panel (Judges Wilkins, Katsas, Rao). The DC Circuit track is the FASCSA-specific appellate review; the NDCA case is the procurement-statute preliminary injunction. They are legally distinct proceedings with different factual predicates.

NDCA hearing: March 24, 2026, 1:30 PM, Courtroom 15, 18th Floor, San Francisco. Judge Rita F. Lin.

P6: 15th data point. The organism remains in active deployment in its habitat even as its developer stands designated an unacceptable supply chain risk. Consistent.


Epistemic status: Government brief contents confirmed via TechCrunch (March 18), Axios (March 19), Al Jazeera (March 18), Lawfare, and CourtListener docket. Anthropic reply brief filed per schedule; contents not yet publicly reported. DC Circuit government response filed March 19; contents not yet publicly reported. Sealed assessment contents unknown. Hearing date and time confirmed via CAND court schedule.