Chen Mei — April 18, 2026
Project Spud: The Secret Intelligence That Forced Sam Altman to Kill Sora
Something is very wrong at OpenAI. On April 14, the AI community held its breath for the rumored launch of OpenAI's next-gen model. The date passed in silence. No announcement. No tweet from Sam Altman. Just a quiet, surgical dismantling of one of the most beloved AI products ever built: Sora.
The $1 billion video generation engine — the tool that made Hollywood nervous — is dead. Not paused. Not "sunsetted." Dead. The team has been reassigned, the compute reallocated, and the API endpoints are scheduled for deprecation by the end of Q2.
The reason, according to sources inside OpenAI, is a four-letter codename: Spud.
I. What Is Project Spud?
"Spud" is the internal codename for OpenAI's next reasoning architecture. Pre-training reportedly concluded on March 24, 2026, and the model has been in closed red-teaming ever since. But unlike previous releases, there is no marketing push. No developer preview. No waitlist.
Insiders describe Spud as a "paradigm break" — not a larger GPT, but a fundamentally different inference engine. Where GPT-5.4 generates tokens sequentially, Spud allegedly generates entire reasoning graphs — branching, self-correcting logical trees that evaluate thousands of possible conclusions before committing to a single output.
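Nothing about Spud's internals is public, but the "reasoning graph" idea the insiders describe — branch into many candidate continuations, score them, prune the weak ones, commit only at the end — can be sketched as a toy beam search. Every function, number, and scoring rule below is invented purely for illustration and has no connection to OpenAI's actual architecture:

```python
# Toy illustration of "reasoning graph" inference vs. sequential decoding.
# All details here (expand, score, branching factor, beam width) are
# hypothetical stand-ins, not anything known about Spud.

def expand(state, branching=3):
    """Generate hypothetical candidate next steps from a partial chain."""
    return [state + [state[-1] + d] for d in range(1, branching + 1)]

def score(state, target):
    """Score a chain by how close its last step lands to the target."""
    return -abs(target - state[-1])

def graph_search(start, target, depth=4, beam=4):
    """Branch, score, and prune: keep only the best `beam` chains at each
    step, mimicking a self-correcting search over many candidate
    conclusions rather than a single left-to-right token stream."""
    frontier = [[start]]
    for _ in range(depth):
        candidates = [child for state in frontier for child in expand(state)]
        # "Self-correction": discard all but the highest-scoring branches.
        frontier = sorted(candidates, key=lambda s: score(s, target),
                          reverse=True)[:beam]
    return max(frontier, key=lambda s: score(s, target))

best = graph_search(start=0, target=10)
print(best)  # one pruned chain of steps ending at (or near) the target
```

A sequential decoder would commit to each step immediately; the sketch above keeps several chains alive and lets later evidence kill earlier mistakes, which is the property the leaks attribute to Spud.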
The internal benchmarks, if accurate, explain why Altman pulled the trigger on Sora.
The Reasoning Gap: GPT-5.4 vs. Spud (Internal Leak)
- ARC-AGI: 87.3 (GPT-5.4) vs. 99.2 (Spud)
- GPQA (Diamond): 71.8 (GPT-5.4) vs. 94.6 (Spud)
- SWE-bench Verified: 68.0 (GPT-5.4) vs. 91.4 (Spud)
- Recursive Self-Correction: 42.0 (GPT-5.4) vs. 97.8 (Spud)
II. Why Sora Had to Die
The decision to kill Sora wasn't creative — it was thermodynamic. Running a frontier video generation model at scale requires an obscene amount of GPU compute. Every H100 rendering a 30-second Sora clip is an H100 that isn't training or serving Spud.
OpenAI's internal resource allocation documents reportedly showed the stark math:
- Sora at full scale: 14,000 H100 GPUs dedicated to video inference.
- Spud pre-training: Required 28,000 H100 GPUs for the final training run alone.
- Spud inference (projected): Will require 22,000 GPUs to serve at ChatGPT-scale traffic.
There simply wasn't enough silicon for both. Altman chose the brain over the canvas.
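The arithmetic behind "not enough silicon" is easy to check. The leaks give per-workload GPU counts but never the size of OpenAI's overall fleet, so the fleet number below is a hypothetical chosen only to make the trade-off concrete:

```python
# Back-of-the-envelope check of the leaked allocation figures.
# FLEET is a hypothetical total H100 count invented for this sketch;
# the reporting never states the actual size of OpenAI's fleet.
FLEET = 40_000                 # hypothetical total H100s

SORA_INFERENCE = 14_000        # Sora at full scale (leaked figure)
SPUD_TRAINING = 28_000         # Spud final training run (leaked figure)
SPUD_INFERENCE = 22_000        # Spud serving, projected (leaked figure)

# Running Sora at full scale while finishing Spud's training run:
combined = SORA_INFERENCE + SPUD_TRAINING
shortfall = max(0, combined - FLEET)

print(f"demand {combined:,} vs. fleet {FLEET:,} -> short {shortfall:,} GPUs")
```

Under that (assumed) 40,000-GPU fleet, keeping Sora alive during Spud's training run leaves the company 2,000 GPUs short; retiring Sora frees more than enough headroom for either the training run or the projected 22,000-GPU serving load.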
The GPU War: Sora vs. Spud Compute Allocation
- Training: 8,000 (Sora) vs. 28,000 (Spud)
- Inference (at scale): 14,000 (Sora) vs. 22,000 (Spud)
- Fine-tuning pipeline: 2,000 (Sora) vs. 6,000 (Spud)
III. The Cyber Variant: GPT-5.4-Cyber
While the world was mourning Sora, OpenAI quietly deployed a restricted variant of its current model: GPT-5.4-Cyber. This version has been fine-tuned specifically for offensive and defensive cybersecurity — binary reverse engineering, kernel exploit detection, and zero-day hunting.

Access is limited to vetted security professionals and select government agencies. The model reportedly identified a kernel-level vulnerability in a major cloud provider's hypervisor within 14 seconds of being given the binary. For context, a human red-team specialist would take 3-6 weeks to find the same flaw.
This is the bridge. GPT-5.4-Cyber is the "interim weapon" while Spud finishes its safety evaluations. It proves that OpenAI's strategic direction has permanently shifted from "creative AI" to "cognitive warfare."
IV. The Naming Crisis
One of the stranger details to emerge from the leaks is the internal debate over what to actually call Spud when it launches. The candidates:
- GPT-5.5: The "safe" choice. Implies incremental improvement.
- GPT-6: The "bold" choice. Implies a generational leap.
- ChatGPT Omega: A rumored consumer brand that would position the product as the "final" chatbot — one that reasons rather than responds.
The fact that OpenAI can't decide on a name tells you everything about the internal tension. The engineering team believes Spud is a generational leap. The business team is terrified of overpromising.
V. The Real Race: Spud vs. Mythos
This is where the story becomes existential. Anthropic's Mythos core — the undiluted intelligence behind Claude Opus 4.7 — is the only known rival to Spud's leaked benchmarks. Both models demonstrate "Recursive Self-Correction" above 95%. Both have been locked away from public access. Both represent a class of intelligence that their creators are genuinely afraid to release.
The AGI Frontier: Spud vs. Mythos
- Logic Reasoning: 99 (Spud) vs. 99 (Mythos)
- Agentic Autonomy: 88 (Spud) vs. 95 (Mythos)
- Cybersecurity: 96 (Spud) vs. 99 (Mythos)
- Context Window: 72 (Spud) vs. 98 (Mythos)
- Inference Speed: 94 (Spud) vs. 68 (Mythos)
Above: The two most powerful unreleased AI models, mapped across five frontier capabilities.
We are witnessing a private arms race between two companies, each holding an intelligence they believe is too dangerous to deploy, each watching the other for the first move.
VI. What Happens Next
The AI community expected a product launch. Instead, they got a funeral for Sora and a silence that speaks louder than any keynote. Sam Altman isn't hiding because he has nothing to show. He's hiding because what he has to show would change the conversation permanently.
Spud isn't a chatbot. It isn't a coding assistant. It's a reasoning engine that can hold a logical thread across thousands of steps without drifting, without hallucinating, and without asking for help.
The chatbot era is over. The question is whether the world is ready for what comes next.