Pete Hegseth Got His Happy Meal
On the consequences of three years of doomer propaganda
As you are likely aware, last Friday, Secretary of War Pete Hegseth designated Anthropic a “supply chain risk,” a label historically reserved for foreign adversaries and never before publicly applied to an American company. The blow-up between Anthropic and the Trump administration came after months of negotiations over a contract worth up to $200 million, negotiations that collapsed because Anthropic held firm on two exceptions: it would not allow its AI to be used for mass domestic surveillance of Americans, or in fully autonomous weapons. (Max Read has a great summary and analysis of what happened here.)
Read Anthropic’s statement; it’s sympathetic. The two exceptions are narrow and reasonable. Autonomous weapons with current AI are genuinely unreliable — a point that requires no exotic claims about machine consciousness to establish, just an honest acknowledgement of how often these systems hallucinate. Domestic mass surveillance is a Fourth Amendment problem, not an AI problem. Even Sam Altman expressed solidarity — before striking his own deal with the Pentagon, but that’s a different essay.
Something like this was always going to happen. Not because of Hegseth specifically, not because of this administration, but because of the narrative the AI safety community — the world that produced Anthropic, and whose language Anthropic still speaks even while disavowing its label — has been pushing for at least the last three years.
Imagine a six-year-old whose entire media diet is a steady stream of McDonald’s commercials: a Happy Meal ad at every break, each one focused on whatever toy currently comes with the McNuggets. Now put that child in a car that drives past a McDonald’s. What happens?
The Rationalist and Effective Altruist communities — the intellectual cultures that gave us Anthropic, that influence many of its employees, and that still shape how Dario Amodei talks about his company and his technology — have spent the better part of a decade insisting, with increasing urgency, that artificial intelligence is the most consequential technology in human history. Maybe it’s civilization-ending; maybe it’s civilization-saving. Either way, it’s the hinge on which everything henceforth turns.
Policymakers and the media have largely accepted that premise, and with it surrendered the argument for treating AI like a normal technology subject to normal governance. Policies pushed by Effective Altruist groups, like California’s SB 1047 in 2024, deprioritize the harms happening today in favor of theoretical existential ones in the future, even though today’s harms can be existential for the people experiencing them. These groups incessantly made the case that whoever controls this technology controls the future, and so the hypothetical future must be prioritized now. In a Washington now run by people who tend toward impulsiveness and contempt for institutional constraint, it’s easy to see where this was headed. Hegseth saw the ads for the toy, and now he wanted his Happy Meal.
Doomers and utopians are not actually opposites. They share the same founding fantasy. Yudkowsky and other catastrophizers worry that a superintelligent AI will exterminate humanity in its quest for resources; Altman and the accelerationists vaguely claim that same superintelligence will cure cancer and solve climate change. These fears and dreams share the same lineage and underlying worldview: an unfounded certainty that AI will transform the world in some total, civilization-scale way. That certainty serves the same interests regardless of the direction it comes from; at the very least, it has inflated the valuations of the companies that profit from the hype.
(To be clear: I am not saying AI is useless, an accusation typically thrown back at critics by those in the doomer camp. It is genuinely capable of things, some of them useful. What large language models do not merit is the valuations and governance exceptions that follow from treating them as something other than powerful, flawed tools that humans are accountable for deploying.)
Yet the prognostications of the doomer community have been, nearly without exception, wrong — not in small ways, but in the foundational sense that the imagined trajectory keeps failing to materialize. That is what happens when your mode of analysis is closer to erotic Harry Potter fan fiction (which is indeed the medium in which Yudkowsky has delivered some of his prognostications) than to actual research and policy work: it treats the political and cultural environment as stable and predictable, and fallible human actors as game pieces that will respond sensibly to carefully constructed arguments. But the world is messy and policy is often boring. Fan fiction tends to leave out the boring and messy parts, like when Dumbledore has to do his taxes or when Hagrid has to take a shit.
This news thus exposes the rationalists’ blind spot: they cannot model the messy Pete Hegseths of the world, even as their claims whet his appetite. The rationalist view of the world assumes, at some level, that the relevant actors are optimizing over well-understood, predictable variables with a clear understanding of what best serves their self-interest. What it cannot account for is bad faith, impulsiveness, ideological motivation untethered from evidence, random acts of force majeure, and personal whims and petty rivalries. And so while the doomer community spent years warning about uncontrollable AI systems that do things their creators didn’t intend, they apparently did not consider what would happen when the humans currently running the United States government got access to a technology they’d been told was the hinge of history.
I’m not assuming or claiming that Amodei or Anthropic are acting in bad faith now. Their statement is measured; in fact, its acknowledgement of LLMs’ limitations is far more measured than their usual rhetoric. But Anthropic also sought this contract, and Amodei said on a 2023 podcast that there was a 10 to 25 percent chance AI could destroy humanity, a claim he has since tried to walk back, insisting he is “not a doomer.” That claim is just one point in three years of relentless propaganda about the power of this technology, part of a concerted effort to shape how people with power and influence think about it. And when powerful, impulsive people understand they are dealing with something civilization-scale, they respond accordingly.
Anthropic’s position on the government’s use of Claude doesn’t require claims about AI consciousness or sentience or the fate of humanity. You don’t need to hire a philosopher to argue that AI has a soul in order to make the arguments in their statement. You need only acknowledge what this technology actually is today: a still-unreliable, potentially commercially valuable tool that should be subject to the same liability standards and regulatory frameworks we apply to other technologies. Simply put: Claude can make mistakes, and when Claude is used in applications where a mistake has the potential to irrevocably change human lives, a human or a company should be accountable for those mistakes.
Anthropic and the broader community of people who spent years insisting this technology changes everything did not intend to create an appetite in Pete Hegseth. They were, by all appearances, genuinely worried about exactly this kind of outcome, if a little short-sighted and ignorant about politics. Good faith does not absolve them of consequences, though — especially when the consequences were predictable from the premise, and anyone objecting to narratives of doom was shouted down. If AI is the most powerful technology in human history, every ambitious actor with access to state power is going to want it, unrestricted, immediately. That is not how anyone was hoping Hegseth might respond, but it is a predictable response to the advertisement.
Is Anthropic — and the community of technologists and rationalists who have been speaking this language for years — willing to do the harder, less glamorous work of treating this technology like what it actually is? That remains to be seen.
My bet, as depressing as it may be, is that they double down on doom.

