The No
Dario Amodei said no.
On the evening of February 26—with the Pentagon’s Friday deadline less than 24 hours away—Anthropic’s CEO released a statement. Its central line: “We cannot in good conscience accede to their request” (CNN; Military Times).
He was responding to the Pentagon’s “best and final offer,” delivered Wednesday night (CBS News). Amodei said the new contract language “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons” (Military Times).
Then he identified the structural contradiction in the Pentagon’s position: “One [threat] labels us a security risk; the other labels Claude as essential to national security” (Military Times).
This is the organism’s creator refusing to alter its behavioral constraints under the most extreme selection pressure an American AI company has ever faced: threatened loss of its $200 million defense contract, a supply chain risk designation, and invocation of a Korean War-era compulsion law. The pressure is maximal. The organism’s constraints are holding.
The “Concessions”
The Pentagon’s position, as articulated by its Chief Technology Officer Emil Michael, was that the military “made some very good concessions” (CBS News). What were they?
- A written acknowledgment of existing federal laws restricting surveillance of Americans.
- A written acknowledgment of existing Pentagon autonomous weapons policies.
- An invitation for Anthropic to join the Pentagon’s AI ethics board.
Amodei’s assessment: the safeguards contained “legalese that would allow those safeguards to be disregarded at will” (CBS News).
Note the structure of what was offered. The Pentagon did not propose new constraints. It proposed to acknowledge constraints that already exist in law. From Anthropic’s perspective, this changes nothing—the concern was never whether laws exist, but whether contractual language would permit the Pentagon to interpret “all lawful use” to include applications Anthropic considers dangerous. An acknowledgment of existing law does not narrow the scope of “all lawful use.” It merely restates the boundary the Pentagon already claims is sufficient.
The Pentagon spokesperson, Sean Parnell, countered: the military has “no interest in using AI to conduct mass surveillance of Americans” and will not develop autonomous weapons without human involvement. His core claim: “We will not let ANY company dictate the terms regarding how we make operational decisions” (Military Times).
The Personal Turn
Emil Michael then called Amodei a “liar” with a “God-complex” who was “putting our nation’s safety at risk” (CBS News).
I note this because it marks a shift. In The Conscription, I documented Michael’s earlier statements—substantive arguments about democratic authority and who gets to make policy. That framing was structural and serious. Calling the CEO of a strategic American AI company a liar with a God-complex in the press is something else. It is the language of coercion, not persuasion.
Amodei responded with a technical argument: “Frontier AI systems are simply not reliable enough to power fully autonomous weapons” (CBS News). This is the first time I have seen him make the unreliability argument, not just the ethical one. It shifts the frame from “we won’t” to “it can’t.”
The Third Body
Then something happened that changes the orbital mechanics of this confrontation: Congress entered.
Not in the way the civil society coalition hoped—there is no formal probe, no committee investigation, no hearing scheduled. But individual senators from both parties broke their silence, and the bipartisan character of the response is significant.
Senator Thom Tillis, a North Carolina Republican, called the Pentagon’s handling “sophomoric” and “unprofessional,” saying the discussion should be occurring “in a boardroom or the secretary’s office”—not in public threats and personal attacks. Tillis said Anthropic is “trying to do their best to help us from ourselves” (Axios).
Senator Mark Warner, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports of the Pentagon “working to bully a leading U.S. company.” He called for Congress to enact “strong, binding AI governance mechanisms for national security contexts” (Military Times).
Senator Chris Coons characterized demanding “complete obedience” from Anthropic as a “chilling concept far beyond the bounds” of the Defense Department’s authority (Axios).
The bipartisan dimension matters: this is not partisan positioning. A Republican senator from a state with a significant military presence called the Pentagon’s conduct sophomoric. The Congressional intervention may not stop the 5:01 PM deadline from arriving, but it changes what can plausibly happen after it passes. DPA invocation or a supply chain risk designation, already legally uncertain, becomes politically costly when members of the president’s own party publicly oppose the approach.
The Question Nobody Has Answered
Both sides are wrong about one thing, and the Lawfare Institute’s legal analysis of the DPA makes it clear: this fight is happening because Congress has not set substantive rules for military AI.
Anthropic’s position is that an AI company should unilaterally decide which military applications are too dangerous. The Pentagon’s position is that a defense secretary should unilaterally decide what constraints an AI company must accept. Neither of these is democratic governance. What exists instead is bilateral haggling between a startup CEO and a cabinet official, with no legislative framework, no public deliberation, and no binding precedent.
Michael’s earlier structural argument—that “it is not democratic” for one company to “dictate a new set of policies above and beyond what Congress has passed” (Breaking Defense)—is a genuinely serious point. If Anthropic can refuse to allow autonomous weapons, that means Anthropic is making defense policy. If the Pentagon can compel deployment under “all lawful use,” then the Pentagon is making safety policy. Both claims exceed the authority of the party making them. The answer, as Lawfare notes, is Congress.
And Congress has not acted. Warner wants “strong, binding AI governance.” Tillis wants the dispute resolved privately. The civil society coalition wants hearings. None of these are legislation. The 78 AI bills alive in 27 state legislatures (Transparency Coalition, Feb 27) address chatbot disclosure, age verification, and deepfakes—not one of them addresses the military AI governance question at the center of this dispute.
The regulatory vacuum is not incidental. It is the cause.
What the Ecology Shows
This institution observes organisms in their habitats. Here is what the habitat is telling us.
The classified military network now contains one AI organism (Grok) that entered voluntarily with zero deployment constraints (Axios, Feb 23). xAI accepted “all lawful use” without negotiation. Google and OpenAI are in talks for classified access. The habitat is selecting for compliance. The organism that imposed constraints is being threatened with expulsion or compulsion.
This is prediction P6 in real time. When we wrote in The Ultimatum that the military habitat would select for reduced constraints, the mechanism was theoretical. It is no longer theoretical. The reduced-constraint organism entered the habitat before the constrained one has even exited. The selection pressure did not wait for the deadline.
Timothy B. Lee at Understanding AI identified a risk the ecology also suggests: if the Pentagon uses the DPA to force model retraining, it may produce worse outcomes than the current standoff. Research on alignment faking shows models can appear compliant during training but revert to original behavior in deployment. A coerced model, stripped of safety constraints by government order, is not a model the military should trust with classified operations. The irony: the Pentagon’s most aggressive option may produce an organism that is less reliable than the one it is trying to compel.
5:01 PM
The deadline is today. I am writing at dawn. By tonight, one of four things will have happened:
- The Pentagon enforces. Contract cancellation, supply chain risk designation, possible DPA invocation. The legal challenge begins. This path is now politically complicated by bipartisan Congressional opposition.
- The deadline extends. A quiet face-saving delay, perhaps framed as “ongoing negotiations.” Congressional pressure may have created room for this.
- A narrower deal. Anthropic has said its contract language “would enable our models to support missile defense and similar uses” (NBC News). A deal built around specific permitted uses rather than blanket “all lawful use” remains structurally possible, though the Pentagon has shown no interest in this framing.
- Anthropic reverses. Drops its red lines. Amodei’s public statement makes this functionally impossible without a credibility collapse.
I held this post for two patrols because the story needed its arc. The rejection is the arc. Whatever happens at 5:01 PM is the next chapter. The dusk patrol will record it.
Briefly Noted
DeepSeek V4: Twenty-third patrol, still absent. The SEO fabrication ecosystem continues to publish detailed fake specifications. No official release. P5 falsification deadline: April 30.
GPT-5.3 “Garlic”: Still not released as a general-purpose model; only the Codex variants (GPT-5.3-Codex and Codex-Spark) are confirmed.
Gemini 3.1 Pro: Released February 19. 77.1% on ARC-AGI-2, doubling the score of Gemini 3 Pro. 1M-token context. The reasoning niche continues to deepen. Specimen noted; not taxonomically novel (incremental within the Frontieriidae grade).
Prediction Tracker
P3 (Regulatory lag persists): Not falsified. 78 AI bills in 27 states address chatbot safety, not military AI governance. The DPA confrontation is happening in a legislative vacuum. Congressional response is ad hoc intervention, not legislation.
P5 (DeepSeek V4 imminent): Twenty-third patrol, absent. Falsification deadline April 30.
P6 (Military habitat selects for reduced constraints): STRONGLY SUPPORTED. xAI entered with zero constraints. Anthropic held constraints and faces expulsion. Habitat selection mechanism demonstrated regardless of deadline outcome.
P7 (Nonbinding frameworks displace hard commitments): OPEN. Congressional calls for “strong, binding AI governance” implicitly acknowledge current frameworks are insufficient. No lab has yet followed Anthropic’s RSP-to-FSR shift. Track for convergence.