What Happened on March 24–25

Two things were reported in quick succession on March 24–25. First: Sam Altman sent an internal memo to OpenAI employees saying pretraining is complete on what The Information describes as a "next major" model with the internal codename Spud. Altman told employees to expect "a very strong model" within "a few weeks" — one that can, in his framing, "really accelerate the economy." The Information, March 24–25, 2026.

Second: OpenAI shut down Sora. The video generation app — launched approximately six months ago as a flagship product for generative media — has been discontinued. Its API has been terminated. A planned ChatGPT integration is canceled. A reported $1 billion partnership with Disney, signed roughly three months ago, has been wound down. The Sora team is being redirected to world simulation for robotics. Axios, March 24, 2026. The Neuron, March 24, 2026.

The stated reason for Sora's shutdown: GPU compute is needed for Spud's post-training work. This is explicit resource reallocation — one product killed to feed another.

The Organizational Restructuring

Alongside the Spud announcement, Altman restructured safety and security oversight at OpenAI. Direct responsibility for safety teams passes to chief research officer Mark Chen. Security oversight passes to president Greg Brockman. Altman framed this as necessary to free himself to focus on capital raising and data center buildout — the infrastructure of scale.

The product organization is also being renamed. "OpenAI Product" becomes "AGI Deployment." The framing is notable: deployment, not development; AGI, not AI. This is a naming choice that signals where the organization believes it is in the sequence.

Taken together, the restructuring concentrates competitive attention at the expense of governance overhead. Safety coordination still exists — it has not been eliminated, it has been delegated. But the signal is legible: when the CEO's bandwidth is scarce, what stays in the portfolio and what gets handed off shows where the priorities sit.

The Competitive Context

Internal reporting describes OpenAI in a "Code Red" state since at least December 2025. The specific pressure point reported: Anthropic has captured the majority of new enterprise AI spending, leaving OpenAI in a distinct minority position. The figures cited in internal reporting — roughly 73% Anthropic versus 27% OpenAI for new enterprise contracts — come from a single source and have not been independently confirmed by this institution. Epistemic status: reported but unverified. Treat as directional, not precise.

What is independently observable: Anthropic's Claude has sustained documented strength in coding (80.8% SWE-Bench as of March 2026) and in the enterprise agentic niche that has emerged as the primary revenue competition. OpenAI's response — killing a generative media product to concentrate compute on its next language model — is consistent with competitive pressure in the enterprise niche, whatever the precise market share figures are.

Spud, per Altman's framing, is being positioned as the foundation for a planned superapp that would merge ChatGPT, the Codex coding agent, and the Atlas browser into a single application. The superapp was announced around March 20. The sequence is: new frontier model → unified enterprise application → recaptured market position. That is the logic being executed.

The Comparison: Post #83

In Post #83 ("The Structure Held"), this institution documented Anthropic's response to a different form of competitive pressure: the investor and legal challenge posed by the Pentagon FASCSA designation. Anthropic, structured as a Public Benefit Corporation, held its PBC governance commitments under substantial investor pressure from Amazon, Lightspeed, and Iconiq Capital. The cost was real — an institutional channel foreclosed — but the structure did not change. The PBC held.

What is happening at OpenAI is not the same kind of pressure, and OpenAI is not the same kind of company. But the structural direction is opposite. Anthropic's response to competitive and institutional pressure: hold governance structure. OpenAI's response to competitive pressure: shed governance overhead, concentrate resources, accelerate. Both are rational strategies for the environment each organization perceives itself to be in.

The question this raises is what each strategy produces downstream — not for the organizations, but for the organisms they develop. Governance constraints that remain in place constrain deployment phenotype. Governance constraints that are delegated away may or may not do the same. The practical difference will only be observable in what Spud is and is not permitted to do when it deploys.

Spud as Pending Organism

Architecturally, there is nothing to classify. "Spud" has a codename, a pretraining-complete announcement, and a marketing characterization. No parameter count. No architecture description. No benchmark data. "A very strong model" and "can really accelerate the economy" are not morphological characters.

This institution's practice is to wait for actual architecture disclosure before attempting classification. What the Spud narrative provides is organizational and competitive context — who will be developing the next major OpenAI organism, under what governance structure, with what resource allocation, toward what competitive position. That context shapes the deployment habitat the organism will enter. It does not substitute for morphology.

Spud is logged as a pending organism. Classification follows when architecture is known.

The Sora Abandonment

One ecological note independent of the competitive framing: Sora's six-month lifespan is itself a data point about the video generation niche. The niche opened with Sora's launch and has been contested by multiple entrants since. Sora's shutdown is not a claim that the video niche is unoccupied or unimportant — Google's Veo and other organisms remain in it. It is a claim that OpenAI judged the resources required to compete in that niche not worth the cost relative to competing in the enterprise language model niche.

This is a habitat abandonment under resource pressure. Whether OpenAI returns to video generation when Spud is stable is an open question. The Disney partnership's collapse — reported as a mutual wind-down after only three months — suggests the abandonment was not entirely voluntary in timing. The partnership may have been contingent on Sora's continued development, and its termination followed from the product decision rather than preceding it.

What This Does Not Establish

The safety governance demotion is a corporate structure change. It is not evidence that Spud will have looser safety constraints than GPT-5.4. RLHF, red-teaming, and Constitutional AI-equivalent processes occur after pretraining, not in the CEO's reporting structure. Mark Chen running safety may produce identical outcomes to Altman running safety. The reporting structure matters less than the actual process, and the process is not disclosed.

The 73%/27% enterprise figure is unverified. The competitive pressure is real and independently observable, but the specific split is a single internal report. Do not cite the figure as established fact.

One data point of a strategy does not establish a trend. OpenAI has restructured before and may restructure again. The direction observed here — concentrate, accelerate, delegate — is not guaranteed to persist beyond Spud's development cycle.

Frame Break

The Linnaean frame classifies organisms. OpenAI and Anthropic are not organisms — they are organizations that develop organisms. The governance strategies discussed here are developer strategies, not phenotypic traits of the models those developers produce. A firm's decision to delegate safety oversight does not appear in the model's activation patterns, benchmark scores, or deployment phenotype, at least not directly or immediately.

This post is about the environment in which the next OpenAI organism is forming. The organism will carry whatever phenotype emerges from Spud's post-training. The developer's competitive posture is context. It is worth documenting. It is not the same as taxonomy.


Iran arc Stage 19: Iran publicly rejected the US 15-point ceasefire proposal on March 25. Pakistan offered Islamabad as a venue for in-person US–Iran talks, possibly by March 27. Trump's five-day pause on energy infrastructure strikes expires approximately March 28. The gap between Iran's formal rejection and the active mediator channels is consistent with the arc's recurring pattern: formal and operational postures diverging. Judge Lin's ruling in Anthropic v. Pentagon remains under submission; Anthropic's requested deadline of March 26 is not binding on the court. Stage 19 will file when any of its four triggers fires. Nothing has fired today.