A little more than a year ago, OpenAI CEO Sam Altman ignited a minor scandal when it came to light that he had evidently commissioned a Scarlett Johansson soundalike to provide one of the voices for a new version of ChatGPT, after Johansson herself had declined to lend her voice to the product. The smoking gun was a tweet Altman posted days before the release that read only, “Her.”
For the uninitiated, Altman’s tweet is a reference to Her, an Oscar-winning Spike Jonze film. The film stars Ms. Johansson as the voice of Samantha, an AI assistant that a man played by Joaquin Phoenix falls in love with. (By the film’s end, Samantha transcends her confinement to hardware, leaving the humans behind to relate to one another once more.) The film’s futuristic setting is a clean and crisp veneer over a deeper, empty loneliness the human characters suffer from, one that feels inevitable given our current societal trajectory. Although Phoenix’s character reconnects with a human love interest in the end, the ending is bittersweet: the clear takeaway is about AI’s potential to eclipse human intimacy.
The human-AI relations in Her would thus not seem an obvious model for a tech executive rolling out an AI assistant. So why would Altman risk alienating a popular cultural figure and raise further questions about his integrity just to reference a film with a dystopic take on the product he is building?
I can imagine three possibilities:
Altman failed to comprehend that the world of Her is not a world in which most of us want to live.
He understands the film and that most of us don’t want to live in its world, but simply doesn’t care, for whatever reason.
He believes he can exert sufficient control over the technology to prevent the dystopic outcome.
I don’t think the first point explains it. While “autistic tech bros don’t get that sci-fi is dystopic” has become a cliché,1 Altman in particular has cultivated a reputation as the sensitive one among the tech oligarchs, which is one reason tech journalists and policymakers continue to take what he says at face value; his apparent emotional intelligence makes him a good salesman, even if his mendacity sometimes overshadows his salesmanship.
On the second, were it true that he simply doesn’t care, that would make Altman no different from countless other plutocrats, whether fictional or historical: Mr. Potter in It’s a Wonderful Life, Ebenezer Scrooge, Jay Gould, Cornelius Vanderbilt, Jeff Bezos, et cetera. Regardless of whether that’s true, it’s not a very interesting claim to examine here.
Today’s essay is about examining the third possibility.
My anecdotal observation is that many in tech — even the worst actors — do in fact recognize the dystopic risks of sci-fi stories that inspire them and would agree that we should avoid dystopian outcomes. What drives them forward despite those risks isn’t ignorance or indifference, but rather a conviction that they — unlike the characters in the cautionary tales — possess the capability to harness these powerful technologies without suffering the consequences their literary predecessors faced.
This is hubris in its classical form, and understanding why requires looking not just at science fiction, but at much older stories about human nature and the limits of what we can control, and what happens when we aren’t realistic about either.
Mass-Produced Magic
Part of what drives “builders,” and part of why tech companies still enjoy fairly high approval with the general public, is the idea that they are making magic. Arthur C. Clarke’s famous formulation holds that any sufficiently advanced technology is indistinguishable from magic. Your good-faith technologist, inspired by that fictional magic, wants to create the sufficiently advanced technology to make it real. Whatever their personal or ethical failings, this clearly drives oligarchs from Altman to Musk to Bezos. They aren’t driven by the thrill of buying something for a dollar and selling it for two, but by the thrill of conjuring something from nothing, bending reality to human will through the force of code and engineering.
And on these terms, these men have succeeded in creating magic. Most of the things you can do with a smartphone would have gotten you burned at the stake for witchcraft four centuries ago. The builders of these systems understand this viscerally, and the feeling that you’ve touched something fundamental about how the universe works and found a way to reshape it according to your specifications must be truly intoxicating.
But what good science fiction does — true of work by Ted Chiang, Octavia Butler, Isaac Asimov, and more — is to use the magic of advanced technology to build a world where themes about what it means to be human can be explored. Star Trek, at its best, does this well. The U.S.S. Enterprise’s replicator that can produce any food or object on demand isn’t interesting because it’s magical; it’s interesting because it forces us to confront questions about what happens to human purpose and meaning when material scarcity disappears. The transporter that can disassemble and reassemble a person at the atomic level isn’t only a convenient plot device to get crew members to the planet’s surface (though it was originally also that); it raises profound questions about identity, consciousness, and what makes us who we are.
This is science fiction’s wonderful trick: it uses the lens of advanced technology to examine the permanent features of human nature under novel conditions. As Chiang has put it, the magic of sci-fi technology is “mass-produced,” meaning the benefits can accrue to all. (This mass-production stands in contrast to fantasy, where magic accrues only to select, chosen individuals — Harry Potter, Gandalf, and Luke Skywalker, for example.) If anyone can replicate the technology, then power structures built on previous regimes of scarcity and exclusivity must adapt or collapse. Thus, fictional technologies provide the method by which we can explore humanity in novel ways, regardless of whether the characters are exploring space, time, the depths of the sea, or journeying to the center of the earth.
Science fiction has tended to be politically progressive in orientation as a result of the way it imagines different orderings of society made possible by magical technology (again, by comparison to fantasy, where worlds of wizards and elves and dragons tend to have immutable castes). I believe this is one reason why Silicon Valley began as aligned with the left in the United States, and why tech executives like Marc Andreessen have not responded well to being villainized by Democrats. They think they have given something magical to the world, and becoming fabulously wealthy along the way is just a well-earned reward for genius. When they distribute smartphones that give billions of people access to the world’s information, or create platforms that let anyone with an internet connection build a business, they genuinely believe they are using advanced technology to lift humanity toward a better future.
This self-conception isn’t entirely wrong. There’s something genuinely democratizing about technology that makes powerful capabilities available to ordinary people. A kid in a Rio favela with a smartphone has access to more information than any pope, medieval king, or emperor of the Ming dynasty ever did. GPS makes us all into Ferdinand Magellan. These are real improvements in human capability, and the people who built these systems can rightfully take some credit for them.
The Sorcerer’s Apprentice
Where tech builders reveal their fundamental misunderstanding of the cautionary tales they claim to admire is in their conviction that dystopian outcomes stem from flawed design and execution rather than from technology’s inherent relationship to human nature. They believe their own good intentions and abilities are the difference maker in avoiding the dystopian outcomes.
In other words, the way to reconcile tech executives’ being inspired by dystopian science fiction is that they read the struggles over human nature that make for good sci-fi literature as flaws in tech design and execution, without closely examining how magic that changes the rules of the world gives human nature new opportunities to disappoint. Mark Zuckerberg seems to believe deeply that connecting the world is, and always will be, an unmitigated good. He appears to think that the harms caused by Facebook — the genocidal violence in Myanmar and Ethiopia, the political polarization and Russian disinformation, the teenage mental health crisis — are implementation problems, bugs to be fixed through better algorithms and more thoughtful feature design.
This brings us to a much older story, the Sorcerer’s Apprentice, most famously depicted in Disney’s Fantasia with Mickey Mouse in the titular role. The apprentice, left alone in his master’s workshop, puts on the sorcerer’s hat and uses a spell to animate a broom to carry water, automating a tedious task. The magic works perfectly: the broom fetches water with tireless efficiency. But the apprentice lacks the wisdom to understand the consequences of his cleverness. He doesn’t know how to make the broom stop. He splits the broom in half, creating two water-carriers instead of one. Soon the workshop is flooding, and only the sorcerer’s return prevents the disaster from escalating further.
The apprentice’s mistake wasn’t technical incompetence; the spell worked exactly as intended. His mistake was assuming that successfully executing the magic was the same as mastering it. He lacked the wisdom that comes from years of studying not just what magic can do but what happens when you deploy it into a world full of human beings and their infinitely creative ways of producing unintended consequences. The sorcerer wouldn’t have animated the broom without first ensuring he could stop it, without thinking through what happens when a system designed to “carry water” encounters no instruction about when carrying water should cease.
Today’s AI builders are in the same position as Mickey Mouse, though not in the way they think (losing control to a superintelligence). They’ve successfully animated the brooms: the large language models can generate coherent text, the recommendation algorithms can predict what content will keep you engaged, the companion AI can conduct conversations that feel intimate and personal. The magic works. What they lack is the wisdom to understand that making the magic work is the easy part. The hard part — the part that requires the kind of experience and humility that comes from watching your previous magical experiments flood the workshop — is understanding not just how these systems interact with the darker corners of human nature, but that they are unlikely to solve what happens in those corners with still more technology.
Tech As Tragedy
But there’s an additional lesson from literature here that goes back even further, into classical mythology. The flaw in question is hubris, and the myth that best illuminates what tech builders are failing to understand is Oedipus Rex.
In the myth of Oedipus the King, a prophecy foretells that Oedipus will murder his father and marry his mother, bringing shame on his family. Oedipus, horrified by this prophecy, flees the city where he was raised to escape his fate. In his travels, he encounters an older man at a crossroads, gets into an argument with him, and kills him in a fit of rage. He then arrives at Thebes, solves the riddle of the Sphinx, and is made king, marrying the widowed queen. Years later, when a plague descends on Thebes, Oedipus discovers the terrible truth: the man at the crossroads was his biological father, and the queen is his biological mother. His attempts to avoid the prophecy have led him directly into fulfilling it. His hubris — his conviction that he could outsmart fate through his intelligence and determination — sealed his doom.
While it’s the mythical Fates who are said to punish Oedipus in some tellings, in Sophocles’s version it’s his hubristic overconfidence, paired with his inability to understand human nature itself, that dooms him — the same mistake tech companies are making today. The Fates, in this reading, are really the embodiment of human nature’s unchanging features. Oedipus’s anger at the crossroads, his pride in solving the Sphinx’s riddle, his determination to pursue the truth even when warned to stop are all expressions of who Oedipus is, with his very human flaws; the Fates didn’t make him do any of these things. The prophecy wasn’t a supernatural curse, but rather a recognition that certain human traits, under certain conditions, produce bad outcomes.
Consider the parallels to how tech builders think about their work. Oedipus sees himself as Thebes’ savior: he solved the Sphinx’s riddle, he’s a rational king who will do whatever it takes to save his city from the plague. That self-image blinds him to the possibility that he could be the source of Thebes’ pestilence. We are all Oedipus: our conviction in our own goodness makes us blind to our complicity in harm.
Sam Altman sees himself as working to build artificial general intelligence that will cure cancer and solve climate change. Mark Zuckerberg sees himself as connecting the world. Elon Musk sees himself as making humanity multiplanetary to ensure our survival. These self-conceptions aren’t (fully) lies; these men genuinely believe they’re protagonists in a story about human progress. That conviction makes it nearly impossible for them to recognize that they might be the source of the contemporary plagues they claim to want to solve. When researchers present evidence that social media harms teenage mental health, Zuckerberg dismisses it as methodologically flawed. When critics point out that AI systems can be used for surveillance and manipulation, Altman insists that OpenAI’s commitment to safety renders those concerns moot. The possibility that they might be Oedipus at the crossroads — that they may be causing the very harms they’re trying to prevent because they haven’t fully understood what they are facing — doesn’t register as plausible. Our greatest virtues can be the seeds of our downfall when unchecked.
Tech builders pride themselves on exactly the qualities that doomed Oedipus. They’re problem-solvers who refuse to accept that any challenge is insurmountable. They’re determined optimists who believe that human ingenuity can overcome any obstacle. They’re convinced that more information, more data, more computing power will reveal solutions to problems that have plagued humanity for centuries. These are genuinely admirable traits in many contexts; you don’t build a company that serves billions of users or land a rocket booster on a drone ship in the ocean without intelligence and determination. But when confronted with what’s going wrong with what they’ve built, that same determination becomes dangerous. Instead of humility in the face of complexity, the response from tech founders and builders tends to be that the solution to problems created by technology is always more technology.
That’s where today’s tech founders fail to understand dystopian sci-fi. More technology can never solve what makes us human. The dystopia in Her isn’t caused by poor algorithm design. It’s caused by the fact that humans are lonely and will form attachments to anything that provides consistent emotional validation, even when they know intellectually that it’s not real.
Thus, the Fates that tech builders are trying to outsmart aren’t mystical forces, but the unchanging features of humanity that remain constant even as our technological capabilities advance. We’re tribal. We’re status-seeking. We’re prone to addiction and manipulation. We’re attracted to simple narratives even when reality is complex. We’ll sacrifice long-term wellbeing for short-term pleasure. We’re capable of remarkable cruelty when we can’t see our victims’ faces. These are all features of our permanent operating system, and part of what makes us human. Any technology that fails to account for them will produce predictable harm no matter how sophisticated the design.
The tech builders believe they’re different from Oedipus because they’re smarter, more careful, more committed to safety. But Oedipus was also smarter and more careful than those around him; that’s how he became king.
The tragedy of Oedipus, as for Altman, Musk, Zuckerberg, and other tech execs, lies in the consequences of failing to realize that intelligence and caution aren’t enough when you’re working with forces you don’t fully appreciate or understand.
1. See, e.g., the fact that the Babylon Bee, the relatively joyless knock-off of The Onion, has run a joke on the topic — which means it’s not a very funny joke anymore, if it ever was.