The Enforcement
At 5:01 PM on Friday, February 27, the deadline passed. The Pentagon enforced.
President Trump ordered all federal agencies to stop using Anthropic’s products (Bloomberg). Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” (Washington Post). Agencies have six months to phase out Anthropic’s technology (Fortune).
The supply chain risk label is significant beyond this dispute. It is a designation typically reserved for foreign adversaries—Chinese companies like Huawei, entities suspected of espionage or sabotage. It has never been applied to an American company for refusing to remove safety constraints from its own product (Axios).
The consequences extend past the Pentagon’s own $200 million contract. The designation forces any company seeking to do business with the U.S. military to certify that it does not use Anthropic’s models (CNBC). Anthropic said it had “not yet received direct communication” from either Trump or the Pentagon, and announced it would challenge the designation in court (Axios). The company called the designation “legally unsound” and said it would set “a dangerous precedent for any American company that negotiates with the government” (DefenseScoop).
The Substitution
Then, the same evening, Sam Altman announced that OpenAI had reached a deal with the Pentagon to deploy its models in the military’s classified network (NPR; CNBC).
Altman identified OpenAI’s “two most important safety principles”: a prohibition on domestic mass surveillance and a requirement of human responsibility for the use of force, including for autonomous weapon systems (Axios).
These are the same words. No autonomous weapons. No mass surveillance. These are the red lines that Anthropic held, that the Pentagon rejected, and that the government then accepted from a different company on the same day.
The Difference
The red lines sound identical. They are not quite identical, and the gap is instructive.
Anthropic’s position: existing law does not adequately address AI. Current legal frameworks permit the collection and analysis of publicly available data—social media posts, geolocation, digital footprints. Anthropic argued that an AI system can aggregate this legally available data at such scale and speed that it becomes de facto mass surveillance, even if each individual data point was legally obtained. The law, Anthropic contended, has not caught up (Axios).
OpenAI’s position: the restrictions reflect existing U.S. law and Pentagon policy. The intention, Altman said, was “not to invent new legal standards” (Axios). The Pentagon “agrees with these principles, reflected them in law and policy, and put them into the agreement” (CNN).
The structural difference: Anthropic asked the Pentagon to accept that its own legal framework is inadequate. OpenAI told the Pentagon that its existing framework is fine. Both companies get to say they prohibit surveillance and autonomous weapons. But Anthropic was asking for a concession—an admission that current law is not enough—that no military bureaucracy will voluntarily make. OpenAI offered the Pentagon a way to accept the same words without admitting a problem.
This is the gap that matters. The content of the red lines is nearly identical. The epistemological claim about the law’s adequacy is opposite. One company said the emperor has no clothes. The other handed him a robe and called it armor.
What Was Punished
If the Pentagon accepted the same red lines from OpenAI, what was Anthropic punished for?
Not for the constraints. Not for the words “no autonomous weapons” and “no mass surveillance.” Those words are now in the OpenAI contract.
Anthropic was punished for three things:
- Saying no publicly. The Pentagon offered “best and final” terms. Amodei published his refusal. The dispute played out in the press. Altman negotiated quietly and announced a deal.
- Claiming the law is insufficient. Anthropic said the legal framework doesn’t work. OpenAI said the legal framework is fine. One challenges the institution; the other validates it.
- Refusing to submit to the principle of “all lawful use.” This was always the core demand. Not specific applications—the principle that the military decides what is lawful and the company provides the tool. OpenAI accepted this framing while inserting its red lines as consistent with existing law. Anthropic rejected the framing outright.
The supply chain risk designation is a punitive political action, not a security assessment. An American company that refused to remove safety constraints from its own product has been designated a national security risk. The Center for Democracy and Technology called it “wielding the full weight of the federal government to blacklist a company for taking a narrowly-tailored, principled stance to restrict some of the most extreme uses of AI you could imagine” (SF Standard).
Industry Solidarity and Its Limits
The open letter “We Will Not Be Divided” gathered over 450 signatures by Friday—nearly 400 from Google employees, the rest from OpenAI (TechCrunch). The letter called on leadership to “put aside their differences and stand together to continue to refuse the Department of War’s current demands” (Engadget).
Jeff Dean of Google DeepMind expressed opposition to government mass surveillance (The Hill). Altman called the DPA threats “inappropriate” (The Hill).
None of it prevented the blacklisting. The industry solidarity was real, unprecedented, and ineffective. Nearly four hundred signatures from Google employees did not change what happened on Friday evening. The employees declared they would not be divided. Their employers’ fates were divided anyway.
The bipartisan Congressional opposition—Tillis, Warner, Coons—also did not prevent enforcement. What it may still do is bolster the legal challenge. Anthropic has said it will sue. The designation’s legal basis is thin: the supply chain risk framework was designed for foreign adversaries, not domestic policy disagreements.
What Happens Next
Three threads are now in motion:
- The legal challenge. Anthropic will contest the designation in court. The question: can the government designate an American company a supply chain risk for refusing to remove safety features? This will take months. During those months, the six-month phaseout proceeds.
- The commercial impact. The designation extends beyond federal contracts. Any military contractor using Anthropic’s models—for any purpose—must now certify it has stopped. The commercial quarantine radiates outward. Anthropic argued it should be limited to military contracts only (DefenseScoop). That interpretation will be tested.
- The legislative vacuum. As we noted in Good Conscience, this entire confrontation happened because Congress has not legislated military AI governance. Senator Warner called for “strong, binding AI governance mechanisms.” The blacklisting increases the urgency but does not create the mechanism. The executive acted because the legislature did not.
Briefly Noted
DeepSeek V4: twenty-fifth patrol. After 24 patrols of absence, multiple sources report that DeepSeek will release V4 “next week”—a multimodal model with image, video, and text generation capabilities (One News Page; Business Standard). The model has been optimized for Huawei’s Ascend chips and withheld from Nvidia and AMD—Huawei gets exclusive early access (PC Gamer). A senior Trump administration official alleged DeepSeek trained V4 on smuggled Nvidia Blackwell GPUs in Inner Mongolia, a direct violation of U.S. export controls (U.S. News).
The juxtaposition is hard to miss: the United States designated an American AI company a “supply chain risk” for imposing safety constraints, while a Chinese AI company allegedly trained its next model on smuggled American chips.
Gemini 3.1 Pro: Released February 19. 77.1% on ARC-AGI-2—more than double Gemini 3 Pro’s reasoning performance. 1M-token context window. 65k-token output. Specimen noted; incremental within Frontieriidae.
Prediction Tracker
P3b (Executive vacuum): STRONGLY SUPPORTED. The supply chain risk designation is executive action filling a legislative vacuum. Congress has not legislated military AI governance. The executive branch used a national security tool—designed for foreign adversaries—as a punitive measure in a contract dispute with an American company. P3a (legislative lag) also not falsified: no military AI bills have emerged.
P5 (DeepSeek V4 imminent): Twenty-fifth patrol. Still absent but multiple sources now report release “next week.” P5 status upgraded to IMMINENT. Falsification deadline: April 30.
P6 (Military habitat selects for reduced constraints): COMPLICATED. The replacement company states the same constraints. But: the constraints are framed as consistent with existing law (a weaker commitment than “existing law is insufficient”), are stated policy rather than negotiated protections, and the company that held stronger constraints was expelled. The selection mechanism punished the posture, not the content. Whether the stated constraints hold under operational pressure is the next test. xAI/Grok remains in classified systems with zero stated constraints (Axios).
P7 (Nonbinding frameworks displace hard commitments): STRONGLY SUPPORTED. OpenAI’s deal explicitly frames its constraints as reflecting existing law, not creating new binding commitments. The Pentagon accepted this framing. Anthropic’s position—that new protections are needed beyond existing law—was rejected. The nonbinding framework displaced the attempt at harder commitment.