Trump and Altman Made Amodei a Martyr, Not a Saint
One slapped him in the face. The other stabbed him in the back. And the guy left standing looks like a saint, until you look closer.
The Best Marketing Campaign Nobody Planned
Two of the most powerful men in the world — one running the country, one running the company synonymous with AI — managed to turn Dario Amodei into a martyr. A saint. The last honest man in Silicon Valley.
This martyrdom became the most effective marketing campaign in AI history. By Monday, Claude was #1 on the App Store. Then the servers crashed from demand. Anthropic didn't spend a dollar on it (well, besides the Super Bowl ad that started the first wave).
Here’s the thing — he’s not a saint. But to understand why that matters, you first have to understand how the martyrdom was manufactured.
Act 1: The Face Slap from DoW
The timeline is now well-documented, so I’ll keep this brief.
Anthropic told the Pentagon “No” — refused to let Claude be used for mass surveillance and autonomous weapons targeting
Sam Altman publicly backed Anthropic — told employees and press he shared “the same red lines”
Trump kicked Anthropic out of the entire federal government — not just DoW, but Education, HHS, NASA, everything. Slapped them with a “supply chain risk” designation previously reserved for Chinese and Russian companies.
Hours later, Altman signed the Pentagon deal himself — crossing those “same red lines” he had just publicly endorsed
OpenAI announced $110B in funding at $730B valuation — Amazon $50B, Nvidia $30B, SoftBank $30B
Now, the “supply chain risk” designation is worth pausing on. That’s a tool the US government has historically used against Huawei, Kaspersky, and companies with alleged ties to hostile foreign intelligence services. Trump used it against an American AI company — one whose product was already running on classified military networks — because its CEO said no to a contract change.
Act 2: The Back Stab from OpenAI
If Trump slapped Amodei in the face, Altman stabbed him in the back. And the back stab did more damage to Altman himself than to Amodei.
Altman publicly told employees he shared Anthropic’s red lines. Then he signed the Pentagon deal. Then he claimed the terms were equivalent. But if the terms were truly identical, why would the government switch? The Pentagon already had Claude on its classified network. CENTCOM validated it in live combat. You don’t replace a battle-tested system with an equivalent for no reason.
Platformer and The Verge confirmed the terms were not equivalent. OpenAI’s Pentagon contract boils down to three words: “any lawful use.” If it’s legal, the military can do whatever it wants with OpenAI’s technology. The laws Altman cited — EO 12333, FISA, the National Security Act — are the same legal framework that enabled the mass surveillance programs Snowden exposed in 2013. All “lawful.”
Miles Brundage, OpenAI’s former head of policy research, said it on X: “OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.” (He later softened the “caved” language — but the core point stands.)
This didn’t happen in a vacuum. Altman’s contradiction fits a pattern well-documented elsewhere — from board testimony to contract disputes to licensing controversies. Each incident follows the same structure: support in public, undercut in private.
In a TechCrunch interview, Altman admitted the deal was “definitely rushed” and that “the optics don’t look good” (spoiler alert: it’s not just the optics). He then scrambled to host an unprompted AMA on X, a sign he felt the narrative slipping.
The net effect: Trump, through punitive retaliation, and Altman, through his own behavioral pattern, manufactured Dario Amodei’s martyrdom without Dario having to do anything at all. He just had to stand still and say no.
Act 3: The Martyrdom (a.k.a. The Best Marketing Campaign in AI History)
And the martyrdom worked. Not because of PR — because it created a cascading transfer of legitimacy across every dimension that matters.
The Tesla Problem for OpenAI
This is the structural point most people miss. Tesla’s core customers are affluent metro buyers; when the company leaned into politics that alienated exactly that demographic, the brand eroded among the people who actually buy the cars.
Now look at AI: who are the biggest spenders and heaviest users? Tech companies, startups, developers, and researchers concentrated in major tech hubs. By absorbing the Pentagon’s “any lawful use” mandate, OpenAI introduced a fundamental friction with its core customer and employee base.
Consumer: From Banned to #1
ChatGPT’s traffic was already eroding — web share from 86.7% to 64.5%, mobile from 69.1% to 45.3%. The Super Bowl ad accelerated it. The Pentagon incident finished it.
A “Cancel and Delete ChatGPT!!!” post on Reddit hit 30K upvotes. CancelChatGPT.com launched. By March 1, Claude was #1 on the US Apple App Store, and #1 in Germany and Canada as well.
And then Claude crashed. Monday morning, Anthropic’s consumer apps went down for five hours. Anthropic’s explanation: “unprecedented demand.”
When a government ban and your competitor’s betrayal generate so much demand that your servers literally go down, you’re not being punished. You’re being marketed.
Talent: Signatures and Solidarity
At this stage of the AI war, losing talent is scarier than losing users.
Sutskever, Murati, Schulman — the exodus from OpenAI was already well underway. Sam Altman is the only active co-founder left at the company he co-founded. Then 97 current OpenAI employees — alongside 772 from Google — signed “We Will Not Be Divided” in solidarity with Anthropic.
In 2023, OpenAI employees rallied to bring Altman back. In 2026, they’re rallying against him. A 180-degree reversal in three years.
There’s a saying in the industry that there are fewer than 100 truly elite AI researchers. For them, moving from OpenAI to Anthropic isn’t a risk anymore: it’s pre-IPO equity, mission alignment, and distance from a company that sold out its users to the government. A rational career decision.
Every vector, flowing in one direction. A martyrdom so perfect it almost looks scripted.
Almost.
Act 4: But He’s Not a Saint
Here’s where I have to be honest — because the narrative is so clean right now that everyone’s forgetting the messy parts. And in my experience, when a narrative is this clean, that’s exactly when you should start pulling threads.
Dario Amodei is not a saint. He just looks like one because the contrast is so stark.
Thread 1: The Safety Pledge He Dropped — The Day Before
On February 25th, one day before it told the Pentagon “No,” Anthropic quietly dropped its Responsible Scaling Policy commitment: the voluntary pledge not to deploy frontier AI models unless external safety reviews confirmed they were safe.
The timing is remarkable. Anthropic co-founder Jared Kaplan justified it with logic that could have come directly from OpenAI: “It doesn’t make sense to stay on the sidelines when competitors are pushing ahead.” The same “competitive pressure” argument that Anthropic was founded to reject.
On Day 1, Amodei dropped his company’s signature safety commitment because the competition was too fierce. On Day 2, he told the Pentagon his conscience wouldn’t allow it. Both things can be true simultaneously — but together, they complicate the narrative considerably.
Thread 2: Claude Was Already in the Kill Chain
The same day Amodei was saying “No” to the Pentagon, Claude was already being used by CENTCOM, through Palantir’s Maven platform, for active military operations, including Iran strike planning. According to Axios and NBC News, this included kinetic targeting. An Anthropic executive called Palantir to express concern; Palantir reported the call to the Pentagon.
The “principled refusal” wasn’t about keeping AI out of warfare. Claude was already there for other operations, including in Venezuela. The refusal was about specific expansions: mass surveillance, bulk data collection, autonomous targeting. Credit to Amodei for drawing the line. But the popular narrative, that “Anthropic refused to let its AI be used by the military,” is factually wrong.
Thread 3: The Safety Blowback
Dario has publicly claimed a “25% probability of catastrophic AI events.” At Davos, he compared selling AI chips to China to “selling nuclear weapons to North Korea.” Fortune described his long essay on AI’s existential risk as “as much marketing as prophecy.”
Here’s where it gets structural. That fear wasn’t just rhetoric; it was strategy. Anthropic lobbied hard for Biden’s AI executive order. They backed California’s SB 1047 after it was weakened to favor incumbents. Meta’s Yann LeCun accused them of lobbying to restrict open-source models, a restriction that directly serves Anthropic’s competitive interest. David Sacks, the administration’s AI czar, has been calling this “regulatory capture” for months. He has a point.
And here’s the irony: when you spend years convincing the world that AI is nuclear-weapon-grade dangerous, you can’t be surprised when the government says “great, we’ll regulate it like nuclear weapons.” This is the “Safety Blowback”: apocalyptic safety rhetoric, deployed for fundraising and regulatory advantage, eventually creates the political conditions for the very government overreach that threatens your business. Anthropic built the rhetorical scaffolding. Trump just moved in.
Palmer Luckey and Ben Thompson’s question — “Do you believe in democracy?” — cuts both ways: when a company lobbies for safety regulations that also happen to kneecap competitors, is that principled advocacy or business strategy wearing a lab coat?
TechCrunch called it “The Trap Anthropic Built for Itself.” By branding so heavily on safety, any compromise — no matter how pragmatic — reads as hypocrisy. The RSP rollback, the Palantir situation, Dario’s own admission that autonomous weapons “may prove critical for national defense” — each one widens the gap between brand and reality. The “safety at all costs” positioning is already being quietly retired.
So What?
Two men converged to create a narrative moment: Trump through punitive retaliation, Altman through contradiction. Together, they manufactured the most effective martyrdom and marketing campaign in AI history. Dario Amodei didn’t have to campaign for sainthood. He just had to say “no” while everyone else said “yes.”
But being made a martyr doesn’t make you a saint.
Dario dropped his safety pledge the day before the Pentagon “No.” Claude was already in the kill chain when he drew his red line. His company lobbied for regulations that kneecapped competitors. And the fear-mongering that built Anthropic’s brand also built the political conditions for the ban itself.
So the question that actually matters isn’t moral — it’s structural: does this martyrdom convert into durable market position?
Let’s put the damage in perspective. The federal ban costs Anthropic roughly $200M in government contracts. Against $4-5B+ in ARR by most estimates, that’s about 4-5% of revenue: not small, but arguably worth the brand equity gained. Then again, brand equity without retention is just a sugar high. Claude is #1 on the App Store today, but protest downloads don’t equal product loyalty. That’s worth tracking.
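A quick back-of-envelope check on that 4-5% figure, taking the reported numbers at face value:

\[
\frac{\$200\text{M}}{\$4\text{B}} = 5\%, \qquad \frac{\$200\text{M}}{\$5\text{B}} = 4\%
\]

So the hit stays at or under 5% anywhere in that ARR range, and it shrinks as revenue grows.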
Those 97 OpenAI employee signatures on “We Will Not Be Divided” are meaningful. In an industry where fewer than 100 people are truly elite AI researchers, and many of them are already OpenAI alumni, this moment is structurally significant. The question worth watching: do they actually leave? If, over the next 90 days, even five or six top-tier researchers make the move, the talent flywheel becomes self-reinforcing — and that’s when the martyrdom converts into something structural.
Two villains, one accidental hero, and zero saints. Welcome to AI in 2026.
We’ll keep watching.
Ian