The Rationalist Cafeteria
What AI doomers, Scientologists, and Mean Girls have in common
In the 2004 film Mean Girls, Lindsay Lohan’s character arrives at her new high school and is immediately initiated into the social cartography of the lunchroom — who sits where, who defers to whom, and most importantly, what happens to those who challenge the reigning hierarchy. The “Plastics,” led by Rachel McAdams’s Regina George, operate a social system with two enforcement mechanisms working in concert: the threat of exclusion from the in-group, and the constant, pointed derision of those already outside it. These mechanisms are not separable; the sneering at outsiders is what makes membership feel valuable, while the fear of being banished to join the losers is what keeps members in line.
I have been thinking about Mean Girls a lot lately in the context of discourse about AI. Blood in the Machine's Brian Merchant did admirable work cataloguing the state of affairs, from the viral "Something Big Is Happening" post that racked up tens of millions of views, to Anthropic's coordinated press blitz timed to a $30 billion investment round, to the straw-man allegation, repeated with remarkable persistence, that critics think AI is "fake." Merchant's takeaways are worth reading in full, but the one worth bookmarking is his fifth: uncertainty still reigns. Nobody actually knows how this plays out.
Perhaps it’s because I studied the humanities as an undergraduate, but where I come from that sort of epistemic humility is usually a strength, not a weakness. You wouldn’t know that from the discourse, as not just Merchant but also Gary Marcus, Freddie deBoer, and others have found. Journalists, policy experts, academics, and critics who raise questions about AI's capabilities relative to the hype, its political economy, or whether the underlying economics of these companies and their investments pencil out are dismissed not on the merits but as the ravings of members of a benighted class who simply don't get it. To put it in Silicon Valley argot, critics like me have “bad sense-making.”
But the allegation that critics think AI is "fake" functions as a kind of Mean Girls social technology. It doesn't engage with whether the critique is right or wrong; rather, it classifies the critic as the kind of person who is so wrong they don’t need to be taken seriously. This is a Regina George move, signaling to friends that this person is not one of us.
More importantly, reclassifying the critic as uninformed is a way of avoiding the evidentiary burden that those making big claims about AI should shoulder.
A belief that the emergence of a machine superintelligence is nigh and will fundamentally change or destroy human civilization is a classic Russell’s Teapot scenario. Bertrand Russell's illustration goes like this: if I claim there is a teapot orbiting the sun, too small to be detected by any telescope, you cannot prove me wrong — which means that the burden of proof is on me, not you, to prove that the teapot is there. In other words, when a claim is empirically unfalsifiable, the burden of proof rests squarely on the person making it. Those claiming we are on a runaway train toward developing a superintelligence that will either exterminate humanity or solve all our problems are making exactly such a claim — no data or evidence exists, or could exist, to rebut it.
For me, folks who are “feeling the AGI” have not met the burden of proof. Forgive me if I don’t think that makes me dumb.
There is a much bigger market in the attention economy for confidence than for humility, and the AI boomer and doomer brands — superficially opposed, structurally identical — have proven remarkably good at capturing it. Dario Amodei gets a New Yorker profile and Anthropic’s effective altruist philosopher gets a WSJ exposé. Joe Rogan and Ezra Klein and Bill Maher ask earnest AI engineers questions about the end of the world. The catastrophe narrative and the utopia narrative both fit in a tweet, both feel important, and both are unfalsifiable enough that no evidence can dislodge them.
This is low-hanging fruit. It is much easier to get booked on Jon Stewart to augur the end of the world than it is to make a carefully hedged argument about the limitations of LLMs and the financial incentives of venture-backed AI companies. The former makes for engaging television, while the latter requires the audience to hold several things in tension simultaneously (which television — and most viral content — is not built to reward). To bring it back to Regina George and high school: it takes less skill to hit the puniest freshman in gym class in the head with a dodgeball than it does to hit a varsity athlete.
The media attention must also be genuinely seductive and corrupting. If your YouTube video and podcast episode are being shared by people with hundreds of thousands of followers, if you get invited to Davos, the lesson most of us would take from that feedback is that we must be doing something right. Accumulated status, in this discourse, thus does part of the epistemological work that evidence is supposed to do. When a well-networked figure who hypes AI or prophesies doom dismisses a critic, that figure’s social capital closes the argument in a way the evidence cannot.
Social capital plays another role, in that it provides a rallying point for the less well-networked true believers — the ones who may not be performing certainty for a live studio audience but have parasocial or even social relationships with those who do. For many of these folks, the unfalsifiable claim does not register as unfalsifiable at all; there’s a religiosity to it, and sharing or repeating the claims of one’s favorite doomer or utopian is like spreading a gospel. Here, the analogy is not to Mean Girls but to Going Clear.
Just as L. Ron Hubbard constructed an elaborate, internally consistent pseudoscience about human psychology that attracted intelligent, technically-minded true believers with a lot of social capital, the rationalist and effective altruist movements have constructed an elaborate, internally consistent pseudoscience about machine superintelligence that has attracted intelligent, technically-minded true believers with a lot of social capital. And as in Scientology, that intelligence and social capital serve to defend the faith rather than to explore hard questions about empirical evidence.
Scientology persists because its internal logic is, within its own premises, coherent; because membership confers real social rewards; and because the cost of questioning it, let alone abandoning it, once adopted, is very high. For adherents to rationalism and effective altruism who are concerned with existential risk from AI — and I know that only a subset of EA adherents orient around AI risk — the movement has an analogous internal coherence. Once you internalize the premise that a machine god threatens humanity’s survival, you see no need to prove to anyone that the teapot is there; there must simply be something wrong with those who don’t see it the way you do. Poor savages — if only they could see it.
What makes this particularly hard to name is that these folks are not, by disposition, what we would normally call bullies. They are not Nelson Muntz, laughing while they give you an atomic wedgie. They, by and large, present as gentle, earnest, thoughtful people who care about the future of humanity, may well be vegan, and know about bed nets and malaria. But when it comes to AI discourse, effective altruists and rationalists are part of an in-group that self-identifies in opposition to an out-group, just like Regina George’s clique — the difference is that their social cruelty is laundered through the vocabulary of concern.
The reality is that the AI engineering, AI safety, effective altruist, and rationalist communities in the Bay Area overlap and are remarkably tight-knit. They share conference circuits, Signal chats, Burning Man experiences, and often enough, polycules and living arrangements. When your professional and personal world is populated entirely by people who share a set of beliefs about existential risk, those beliefs become genuinely difficult to separate from facts. This is how cults work: the beliefs persist because the social world has been constructed so that questioning them means potentially losing everything — colleagues, friends, status, sometimes even your job or housing.
None of this means that AI isn’t small-t ‘transformative’ in many ways, that it won’t get better, or that it won’t raise serious legal, economic, or philosophical questions. It already has. The concentration of power in a handful of companies with no democratic accountability is already a generation-defining crisis, and even weak forms of the technology seem poised to intensify that concentration. I see more thoughtful philosophical commentary about what it means to be human today than I did as a humanities student twenty years ago. The possibility that systems we don’t fully understand could produce consequences we can’t fully anticipate deserves scrutiny.
But all of that is different from unfalsifiable prophecy, and it’s to prophecy that I object. And I know I’m not alone in finding “Just trust me, I’m smarter than you about computers” a less than compelling argument for reorganizing society.
What managing AI requires is thus what the current discourse prevents: the capacity to hold uncertainty without either monetizing it as apocalypse or suppressing it as heresy. The teapot might be there; I’m willing to grant that. As it was for Russell, the point of the teapot isn’t that it doesn’t exist, but that the person who insists it does should be the one embarrassed by the conversation, not the person asking for proof.



