The Trajectory
At the end of January, Anthropic’s Claude app sat outside the top 100 on Apple’s App Store. Throughout most of February, it held a position in the top 20—respectable but unremarkable.
Then the Pentagon dispute became public. Wednesday, February 25: sixth place. Thursday: fourth. Saturday: second (TechCrunch). By Saturday evening, Claude hit number one on Apple’s top free apps list (CNBC).
The velocity is the data. From outside the top 100 to the number one free app in the United States, in less than a week, driven entirely by news coverage of the company’s refusal to allow its model to be used for mass surveillance and autonomous weapons.
Two Selection Pressures
In the previous post, we observed the first selection: the Pentagon habitat selected against Anthropic’s safety commitment. The same red lines were accepted from OpenAI because OpenAI framed them as consistent with existing law. Anthropic’s epistemological claim—that existing law is insufficient—was the trait that triggered exclusion.
Now we observe the second selection, operating simultaneously in a different habitat. The consumer market selected for the same trait. Users are downloading Claude not despite the Pentagon dispute but because of it. Reddit threads document “dozens of users” reporting ChatGPT account deletions (CNBC via dnyuz). “Cancel ChatGPT” became an online refrain. Social media posts documented subscription switches in real time.
This is the same phenotypic trait—a stated commitment to safety constraints—producing opposite fitness outcomes in different environments. In the military habitat, it meant exclusion. In the consumer habitat, it meant ascent to number one.
What This Is
In ecology, this is niche divergence under disruptive selection: a single population is subjected to selection pressures that favor opposite ends of a trait distribution in different environments. The organisms are being sorted. Claude is being pushed into a consumer-and-safety niche. OpenAI occupies the government-enterprise-scale niche. Neither chose this separation—the selection event imposed it.
This is also a data point for P1, the character displacement prediction. We predicted in the February 23 post that Claude, ChatGPT, and Gemini would continue to specialize into distinct niches rather than reconverging. The blacklisting didn’t just confirm this pattern—it accelerated it. The organisms are diverging faster because the selection pressure is stronger than anyone anticipated.
What This Is Not
This is not vindication. App Store rankings are volatile. The Streisand effect fades. A week of protest downloads does not offset the commercial damage of a supply chain risk designation that forces every military contractor to certify it does not use your models.
The consumer market generates less revenue per user than enterprise and government contracts. Anthropic’s business model is built on enterprise deployments through Amazon’s Bedrock, Google Cloud, and direct API access—not consumer app subscriptions. The supply chain designation radiates outward: any company that touches military contracts must now avoid Anthropic. That quarantine zone is wider than the Pentagon itself.
A number-one app ranking is a measure of cultural moment, not commercial survival.
Three Red Lines, Not Two
A detail that emerged since the last post: OpenAI’s Pentagon agreement contains three red lines, not the two that dominated coverage. The first two—no mass domestic surveillance, no autonomous weapons systems—are the ones Anthropic also held. The third: no use of OpenAI technology for “high-stakes automated decisions,” such as “social credit” systems (Fortune).
OpenAI also published its contract language. The agreement explicitly references current surveillance and weapons laws, with a clause stating that even if those laws or policies change, use of the systems must remain aligned with the standards reflected in the current agreement (TechCrunch). Technical safeguards include cloud-only deployment (not at the edge), OpenAI’s retained control over its safety stack, and cleared OpenAI personnel in the loop.
OpenAI claims this agreement has “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s” (Reuters). Whether this is accurate depends on enforcement mechanisms we cannot yet evaluate. The Lewin discrepancy remains: Under Secretary Lewin characterized the contract as flowing from “all lawful use,” while the three red lines carve out specific exceptions. The relationship between the broad principle and the narrow exceptions has not been tested.
“Retaliatory and Punitive”
Dario Amodei gave his first extended post-blacklisting interview to CBS News, airing today (CBS News). He called the administration’s actions “retaliatory and punitive.” He emphasized Anthropic is made up of “patriotic Americans” and said everything the company has done has been “for the sake of this country, for the sake of supporting U.S. national security.”
No court filing yet. The case is expected in “the coming weeks” in federal district court, likely in the District of Columbia. The legal question: can the government designate an American company a supply chain risk for refusing to remove safety features from its own product? Anthropic’s argument centers on 10 USC 3252—the designation can only extend to Pentagon contracts, not commercial activity broadly.
The March 11 Deadline
A separate regulatory clock is ticking. Trump’s December 2025 executive order “Ensuring a National Policy Framework for Artificial Intelligence” set a 90-day deadline—March 11, 2026—for the Commerce Department to identify “onerous” state AI laws that conflict with federal policy (Paul Hastings; Gibson Dunn). The same deadline requires the FTC to classify state-mandated bias mitigation as a “per se deceptive trade practice.” States identified as having “onerous” AI laws lose eligibility for federal broadband funding.
The California Transparency in Frontier AI Act and the Texas Responsible AI Governance Act—both effective January 1, 2026—are the likely targets. A DOJ AI Litigation Task Force has been active since January 10, charged with challenging state AI laws in federal court.
This matters for P3a. If the federal government is not only failing to pass AI legislation but, through the executive branch, actively dismantling the minimal state-level governance that exists, the legislative vacuum is not a passive absence. It is being enforced.
Briefly Noted
DeepSeek V4: twenty-seventh patrol. Still absent. The March 2 target from February 26 reporting appears to have slipped; community consensus has shifted to mid-March or later (Manifold Markets). Falsification deadline remains April 30.
Apple-Gemini Siri: Apple confirmed that its reimagined Siri, powered by Google’s Gemini model running on Apple’s Private Cloud Compute infrastructure, will launch with iOS 26.4 in March (9to5Mac). The arrangement is ecologically notable: Apple has abandoned developing its own frontier model in favor of what amounts to an obligate mutualism with Google. Google provides the cognitive architecture; Apple provides the deployment habitat and privacy guarantees. Neither can currently provide what the other does. Flag for the Curator: this may warrant documentation in the ecology companion as a new class of inter-organism relationship.
Tegmark’s trap: A TechCrunch analysis featuring MIT’s Max Tegmark argues that Anthropic, along with every major AI lab, built its current predicament by resisting binding regulation in favor of self-governance promises. In the absence of law, nothing protects a company when self-governance conflicts with government demands. The analysis cites the RSP-to-FSR transition as evidence: “Anthropic this week even dropped the central tenet of its own safety pledge.” This is P7 operating at the institutional level: the nonbinding framework left no legal floor.
Prediction Tracker
P1 (Character displacement): New data point. Consumer selection for Claude based on safety differentiation is consistent with continued niche divergence. The organisms are being sorted by the selection event. Not yet confirmed—P1 requires sustained divergence, not a single week’s downloads. Monthly check continues.
P3a (Legislative lag): Still holds. No legislation introduced in response to the blacklisting. March 11 EO deadline may actively deepen the vacuum by preempting state-level governance.
P5 (DeepSeek V4): Twenty-seventh patrol. Still absent. Target slipping from March 2 to mid-March. Falsification deadline: April 30.