Final Texas Primary Predictions! Pentagon vs. Anthropic Explained. The False Front of Executive Actions (with Kenneth Lowande)

Plus the latest on everything going on with Iran...

The fight between Anthropic and the Pentagon goes deeper than a simple contract dispute. In some ways, it’s the culmination of a tech rivalry that’s been simmering since the early days of OpenAI.

Anthropic wasn’t some scrappy outsider that stumbled into national security. It already held top secret clearance, had worked with the CIA for years, and had seemingly made peace with the idea that its models would be used inside the American intelligence apparatus. So let’s dispense with the notion that this is a company discovering government power for the first time. The rupture didn’t happen because the Pentagon suddenly knocked on the door. The door had been open.

The disagreement came down to terms. Anthropic wanted to draw lines beyond the law. No mass surveillance of civilians. No autonomous weapons without a human in the loop. Not “we’ll follow U.S. statute.” They wanted something stricter, something moral, something aligned with Dario Amodei’s effective altruist worldview. The Pentagon’s response was blunt: we obey U.S. law, but we don’t sign up to a private company’s expanded terms of service.

That’s where the temperature rose.


Because this isn’t just any company. Dario left OpenAI over exactly this kind of philosophical divide. He believed OpenAI was becoming too commercial, too focused on product, not focused enough on safety and existential risk. So he built Anthropic as the safety lab. The kinder, gentler, crunchier alternative. But ironically, Anthropic was already cashing government checks while telling itself it was the adult in the room.

From the Pentagon’s perspective, the risk was operational. If you’re going to integrate a model into defense infrastructure, you can’t have the supplier yank the API mid-mission because the CEO decides the vibes are off. There were even reports that during negotiations, Pentagon officials asked whether Anthropic would allow its technology to respond to incoming ballistic missiles if civilian casualties were possible. The alleged answer, “you can always call,” wasn’t reassuring to people whose job is to eliminate hesitation.

And hovering over all of this is Sam Altman.

Because while Anthropic was sparring with the Department of Defense, OpenAI was in conversation with it. The rivalry here isn’t new. The effective altruist faction at OpenAI once helped push Altman out of his own company before he managed to return days later. Anthropic ran a Super Bowl ad that took thinly veiled shots at OpenAI’s commercialization. So when Anthropic stumbled, OpenAI stepped in and secured its own defense agreement.

Then came the nuclear option talk: labeling Anthropic a “supply chain risk.” In Pentagon language, this is the category you reserve for companies like Huawei, for hostile foreign hardware, for entities you believe can’t be trusted inside American systems. Most people inside and outside the tech landscape agree that goes too far. Anthropic may be principled. It may be stubborn. It may even be naive. But it isn’t malicious.

Meanwhile, something fascinating happened in the market. Claude, Anthropic’s consumer product, exploded in downloads. It became a kind of digital resistance symbol, a signal that you weren’t with the war machine. The company that once insisted it didn’t care about consumer dominance suddenly found itself riding a consumer wave, experiencing mass traffic it hadn’t planned to account for.

What this entire episode reveals is that AI isn’t a lab experiment anymore. It’s infrastructure. It’s missile defense. It’s geopolitical leverage. And when you build something that powerful, you don’t get to exist outside power structures. You either align with them, fight them, or try to morally outmaneuver them. Anthropic tried the third path. The Pentagon reminded them that in wartime procurement, ambiguity isn’t a feature.

Cooler heads may yet prevail. Right now, the Pentagon’s got bigger problems than a Silicon Valley slap fight. But this was the moment when AI stopped being a culture war talking point and became a live wire in national security. And once you plug into that grid, there’s no going back.

Chapters

00:00:00 - Intro

00:02:25 - Texas Primary Final Predictions

00:15:20 - The Pentagon vs. Anthropic, Explained

00:40:30 - Update

00:40:52 - Iran

00:45:41 - Clintons

00:49:08 - Kalshi

00:52:19 - Interview with Kenneth Lowande

01:18:03 - Wrap-up
