This one’s open, so feel free to share it anywhere. Quick boost that really helps: ♥ + Restack + a short comment.
Anthropic: The AI Company That Told Trump's War Machine to Go F**k Itself
The complete story of Anthropic's Pentagon war, Sam Altman/OpenAI's opportunistic betrayal, and why 1.5 million people deleted ChatGPT in 48 hours.
March 4, 2026
In a week that felt like a Tom Clancy novel written by someone who actually understands ethics, Anthropic — the company that makes the AI you might be reading this on — went to war with the Pentagon, got blacklisted by Trump, watched its rival stab it in the back, and somehow ended up #1 in the App Store. This is the full story. Buckle up, fellow nerds. It’s a good one…
The Setup: What Is Anthropic, and Why Should You Care?
Most people know ChatGPT. Fewer know Claude. That’s about to change.
Anthropic was founded in 2021 by Dario Amodei and his sister, Daniela Amodei, along with five fellow former OpenAI researchers (seven co-founders in all) who looked at where OpenAI was heading and collectively said, “absolutely not.” They walked out of what would become the most profitable AI company on earth because they believed the race to commercialize AI was outpacing the ethical guardrails. So they built their own company — structured not as a profit-maximizing corporation, but as a Public Benefit Corporation (PBC), with a legally binding obligation to balance profit with its mission: ensuring AI benefits humanity.
Here’s why that corporate structure matters more than you think: Anthropic also created something called the Long-Term Benefit Trust (LTBT), which holds special Class T shares with the power to elect the company’s directors. That means no single investor — not Amazon, which has invested $8 billion, nor Google with its $3 billion — can override the company’s safety mission. The founders deliberately gave up long-term financial control to an independent mission guardian. In Silicon Valley terms, that’s basically insane. In democratic terms, it’s exactly what you’d want.
The board includes Dario and Daniela Amodei, alongside Netflix co-founder Reed Hastings, venture capitalist Yasmin Razavi, Confluent CEO Jay Kreps, and Chris Liddell. This isn’t a MAGA boardroom. This is a group of people who built safeguards into the company’s DNA before the company even had a product.
That matters because of what just happened.
The Pentagon Comes Knocking
Anthropic’s Claude was already embedded in the U.S. military’s classified networks — the first AI system ever deployed in that setting. The Pentagon had a $200 million contract with Anthropic, and by all accounts, Claude was performing well. There was no operational crisis. No complaints about the technology.
What there was, was an ideological demand.
In January 2026, Defense Secretary Pete Hegseth issued an AI strategy memo directing that all Department of Defense AI contracts adopt standard “any lawful use” language. Translation: if it’s technically legal under U.S. law, the military gets to do it with your AI. No carve-outs. No exceptions. No moral red lines.
For most companies in Washington’s orbit, that would have been the end of the conversation — sign the paper, cash the check, move on.
Anthropic said no.
Specifically, Anthropic said it could not agree to allow Claude to be used for:
Fully autonomous weapons — AI making life-or-death targeting decisions without human oversight
Mass domestic surveillance of American citizens
These weren’t dramatic, far-fetched hypotheticals. They were precise, narrow, technically grounded objections. Dario Amodei said publicly that today’s frontier AI models are simply “not reliable enough” to safely power fully autonomous weapons — that deploying them that way would endanger American soldiers and civilians. And he said mass domestic surveillance was a violation of “fundamental rights.”
The Pentagon’s response was to escalate.
The Ultimatum
On February 24, 2026, Hegseth met personally with Dario Amodei and delivered an ultimatum: agree to “all lawful uses” by 5:01 PM on Friday, February 27, or face the cancellation of Anthropic’s $200 million contract — plus designation as a “supply chain risk to national security.”
That supply chain risk label is normally reserved for companies suspected of being front operations for foreign adversaries. Companies with Russian ties. Companies with Chinese government connections. The Pentagon was threatening to apply it to a San Francisco-based, American-founded AI safety company because it wouldn’t let the military build killer robots and spy on its own citizens without restriction.
Hegseth also floated invoking the Defense Production Act — a 1950 emergency statute designed for wartime industrial mobilization — to compel Anthropic’s cooperation. Let that sink in. The regime was openly discussing using wartime emergency powers to force a private AI company to drop its ethical safeguards.
Anthropic tried to negotiate. They offered alternative contract language. The Pentagon rejected it. When the Pentagon offered a counter-proposal, Anthropic’s legal team found that the “new language” contained legalese that would allow their safeguards to be “disregarded at will.” Amodei called it a compromise in name only.
On February 26, Amodei published a public statement that will be quoted in history books:
“The threats do not change our position: we cannot in good conscience accede to their request.”
Trump Pulls the Trigger
At 5:01 PM on Friday, February 27, the deadline passed. The fallout was immediate, exactly as ridiculous as you’d expect, and designed to be maximally punishing.
Donald Trump posted on Truth Social, directing “EVERY Federal Agency” to “IMMEDIATELY CEASE” using Anthropic’s technology — calling it a “Radical Left AI company.” He threatened “major civil and criminal consequences” if Anthropic didn’t cooperate during the six-month phase-out period.
Minutes later, Pete Nutsack posted on X — sorry, I mean the platform previously known as a functional public square — designating Anthropic a “Supply-Chain Risk to National Security.” He used the Pentagon’s Trumpian rebrand: the “Department of War.” He declared that no contractor, supplier, or partner doing business with the U.S. military could conduct any commercial activity with Anthropic.
The Treasury Department immediately announced it was cancelling all use of Anthropic products.
Legal experts pounced. Lawfare, the gold-standard national security law publication, published an analysis arguing that the designation “won’t survive first contact with the legal system”: Hegseth’s order exceeded what the statute actually authorizes, the required legal findings didn’t hold up, and Hegseth’s own public statements may have already destroyed the government’s litigation posture. The law requires 30 days’ notice and an opportunity to respond; Anthropic got neither. The law also limits supply chain risk designations to contractor relationships — it cannot legally bar Anthropic from serving non-military customers. Hegseth tried to do exactly that anyway.
Anthropic’s response was measured, firm, and legally surgical:
“We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
They announced they would challenge the designation in court.
Sam Altman’s Beautiful Betrayal
And then, within hours of Anthropic’s blacklisting — while the ink was still wet on the “national security threat” label — OpenAI CEO Sam Altman announced that his company had just signed its own Pentagon deal.
The timing was not subtle.
Here’s the context that makes this so spectacularly gross: Altman had publicly supported Anthropic’s red lines. He said on CNBC that it was important for companies to work with the military only within limits that OpenAI, Anthropic, and other companies “independently agree with.” He named Anthropic’s two exceptions as ones OpenAI shared. He called them the industry’s moral floors.
Then Anthropic got blacklisted. And within hours, Altman announced a deal.
OpenAI insisted it had secured the same protections — no autonomous weapons, no domestic mass surveillance. But the devil, as they say, lives in the legalese. OpenAI’s deal used the same “any lawful purpose” framework Anthropic had refused, then added references to existing laws and policies. Critics — including a Georgetown University law professor specializing in government procurement — noted that OpenAI’s contract language does not grant OpenAI a free-standing right to prohibit otherwise lawful government use. It simply says the Pentagon can’t break laws that already exist. Laws that, as Anthropic’s supporters pointed out, have already been stretched, reinterpreted, and ruled unlawful only after years of litigation.
The comparison to Edward Snowden’s revelations started immediately. Everyone who remembers that the NSA surveillance programs were deemed legal by the agencies doing them — until they weren’t — understood exactly what Anthropic’s harder contractual red lines were designed to prevent.
Altman’s own people weren’t buying it. Hundreds of OpenAI employees signed an open letter supporting Anthropic. An OpenAI research scientist named Aidan McLaughlin posted publicly that he personally didn’t think “this deal was worth it” — a post that drew nearly 500,000 views.
Chalk graffiti appeared on the sidewalk outside OpenAI’s San Francisco headquarters: “Show the contract” and “Take a stand for civil liberty.”
By Monday, Altman was already walking it back. He admitted the deal was “opportunistic and sloppy” and announced OpenAI was renegotiating the terms. The revised terms added stronger language restricting surveillance, plus explicit restrictions on commercially purchased data — cell phone location records and fitness app data — that had been a legal gray area. The same gray area Anthropic had tried to close with hard contractual language.
In other words: OpenAI rushed into a deal, got publicly humiliated, and then renegotiated toward something that looks closer to what Anthropic had been asking for all along, and still got the contract.
The People Respond
Here’s where this story stops being depressing and starts being something else.
The public didn’t just watch. They moved.
A Reddit post went viral: “Cancel and Delete ChatGPT!!!” — 30,000 upvotes. Sidewalk chalk calling Altman and OpenAI “sellouts” appeared in front of OpenAI’s offices. A fifth of OpenAI’s engineering staff quit en masse. The hashtag #CancelChatGPT spread across Reddit and X. Guides appeared for how to export your ChatGPT data, delete your account, and migrate to Claude. Within 48 hours:
ChatGPT uninstalls spiked 295% in a single day
One-star App Store reviews for ChatGPT increased 775% in a single Saturday
Anthropic’s Claude rocketed from #42 in the U.S. App Store to #1
Claude briefly crashed from the unprecedented surge in demand
Boycott organizers claimed more than 1.5 million people had taken action
And they can count me as a new customer.
Since January, Anthropic has reported free active users up more than 60%, daily sign-ups quadrupled, and paid subscriptions more than doubled. A migration tool appeared. Anthropic made memory free for all users — a signal they were thinking about retention, not just acquisition. Then they announced a free AI university on its platform.
Chalk appeared on the sidewalk outside Anthropic’s offices, too. It read: “You give us courage.”
Why You Should Be Proud of Anthropic’s Board and Structure
This is the part that makes muckrakers like me clients for life:
Anthropic walked away from “several hundred million dollars in revenue” — their words — because they refused to drop two ethical principles. Not vague, feel-good principles. Specific, technically grounded ones: AI shouldn’t autonomously decide who dies, and AI shouldn’t be weaponized to surveil the very population it’s supposedly defending.
They could have done what OpenAI did. They could have taken the money, added some diplomatic language, and called it a win. Instead, they bet the company.
Why could they do that? Because of how they built the company.
Anthropic is a Public Benefit Corporation — legally required to balance profit with public benefit. It has a Long-Term Benefit Trust that controls board elections, designed specifically to protect the mission from financial pressure. The founders, Dario and Daniela Amodei, left the most profitable AI company in history because they disagreed with its direction. The board includes people like Reed Hastings, who built Netflix into a global institution with genuine independence from advertiser and government pressure.
This is what it looks like when the governance actually works. When the safeguards aren’t just marketing copy. When the people running an AI company have thought through, in advance, what they will and will not do — and put it into the company's legal structure so that no investor, government, or regime can override it.
A member of Dario Amodei’s own team said it publicly: “Time and time again I’ve seen us stand by our values in ways that are often invisible from the outside. This is a clear instance where it is visible.”
In a week when the U.S. regime designated a safety-first American AI company as a national security threat for not building on-demand killer robots, Anthropic’s board and structure did exactly what they were built to do.
They held the line.
The Bottom Line
The Anthropic-Pentagon war is bigger than one contract dispute. It’s the first real test of whether any private AI company can resist government pressure to weaponize its technology against the people it’s supposed to serve. The first test of whether the governance structures AI safety advocates have been building actually work under pressure. And the first time in AI history that a consumer boycott moved market share.
Anthropic lost the contract. They won the public.
The regime labelled them a radical threat for believing mass surveillance is wrong and that humans — not machines — should decide who gets killed in war. If that’s radical, we’ve got bigger problems than Anthropic.
But here’s the thing about that #1 App Store ranking: the people get it. When given a clear, public choice between the AI company that held its principles under threat of annihilation and the one that cut a rushed deal within hours of its rival’s blacklisting, they chose the company with the principles.
That’s not nothing. That might be everything.
If you found this useful, share it. Forward it. Post it. This story matters, and the mainstream media is still catching up. Your paid subscription is what lets me keep digging — hit the button below and join the people who fund the work.
Enjoyed this? Quick ways to help:
• Tap the ♥
• Leave a comment (even a quick “reading!”)
• Restack to Notes
• Forward to a friend who’d like it
• Consider upgrading to paid to support the work
—Dean