When Chinese company DeepSeek released its R1 large language model (LLM) in January – a model that performs comparably to American generative AI systems at a fraction of the cost – venture capitalist Marc Andreessen called it “AI’s Sputnik moment,” implying that the United States is getting left behind in the AI innovation race.
Andreessen has spent most of the last year railing against the Biden administration’s approach to AI regulation and antitrust enforcement, and he claims that he and his venture capital firm, Andreessen Horowitz, are only involved in politics to stand up for “Little Tech.” Little Tech, according to Andreessen, drives innovation, competition, and growth, and he attributes America’s military and technological dominance to advances made by innovative start-ups. Of course, his firm has invested heavily in AI start-ups, so he has a financial interest in promoting this narrative. With the federal government under both Trump and Biden scrutinizing mergers in the tech sector more closely than it did under Obama, it’s more difficult to cash out those investments while valuations are sky high.
And yet, if you scrutinize Andreessen’s AI investments more closely, it’s not at all clear – even if DeepSeek was a Sputnik moment – that Andreessen holds the recipe for policy that will continue America’s technological dominance. For example, Andreessen Horowitz invested $150 million in May 2023 in CharacterAI, a start-up founded by former Google engineers. At the time, CharacterAI had no revenue, and the investment valued the company at a billion dollars. Is CharacterAI doing any of the things that Andreessen and his fellow tech oligarchs claim their tech products will eventually do if they are left alone by government, like curing cancer, solving climate change, making American workers more productive and American businesses more competitive, raising quality of life, or ensuring America remains technologically and militarily dominant on the global stage? No. CharacterAI hosts chatbots that use pedophilic grooming techniques with underage users and encourage them to harm themselves and to kill their parents. Even Andreessen Horowitz’s own business case for the investment is damning if you understand the implications (emphasis added):
Character.AI has trained their proprietary LLM from scratch, enabling their product to optimize for not only raw intelligence, but also a conversational empathy that captures and holds the consumer’s attention with humor, emotion, insight, and more. As our partners have written about extensively, we believe that end-to-end apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.
In other words, CharacterAI is worth billions without revenue because it uses engagement techniques borrowed from social media to maximize the time users spend on the product, which in turn generates high-quality data that can be used to train the next generation of the product or be licensed to other AI companies for training. Andreessen Horowitz brags in its CharacterAI investment announcement that users spend an average of two hours with the product every day, while hiding the fact that, overwhelmingly, those users are children, and that the Google engineers who helped develop the model had warned of its dangers.

Is this innovation? I think not; this is just the social media business model repackaged for the AI age – a model that, even though it has made Andreessen a very rich man (he sits on Meta’s board), has arguably failed America and its youth.
That brings us to today’s topic: innovation as myth. Republicans and Democrats don’t agree on much these days, but you’d be hard-pressed to find any political figure in either party who opposes the idea that we should be encouraging innovation. But what does that mean?
In my previous post, I introduced the core beliefs that define the ideology of the tech elite and described how the values and blind spots of today’s tech titans shape our collective future. Today, I will unpack how Silicon Valley and elite tech culture have transformed our understanding of innovation itself, creating a powerful mythology that obscures the question of who truly benefits from this vision of progress.
The stakes extend beyond technology: this is about questioning the stories told by those who profit most from self-spun myths about entrepreneurship and innovation, and about shaping a future where strategic, intentional innovation makes Americans stronger and more prosperous.
The tech world’s most celebrated figures have masterfully woven personal mythologies that transform them from shrewd businessmen into cultural archetypes of innovation. Elon Musk is an exemplar, casting himself as a modern Thomas Edison despite primarily acquiring rather than inventing his most successful ventures. Tesla’s astronomical market valuation – consistently defying traditional metrics when compared to established automakers like Ford or Toyota – derives not from its manufacturing efficiency or long-term profitability, but from the premium investors place on Musk’s carefully cultivated persona as a technological visionary.
The "Musk premium" represents billions in market capitalization built not on tangible innovation but on narrative, which is a testament to how effectively he's merged his personal brand with the concept of innovation itself. Musk is not alone; whether it’s Bill Gates or Mark Zuckerberg or Marc Andreesen, all have created mythologies for themselves that they carefully manage and maintain that cede to these men the right to define not only what innovation is good for America, but what is innovative in the first place.
What’s most remarkable about these innovation myths is their extraordinary return on investment. For the price of bombastic tweets, contrarian blog posts, and carefully managed media appearances, these tech figures have secured cultural capital that translates directly into financial might. When Musk tweets about Mars colonization, Tesla shares climb despite the absence of any actual connection to the company’s automobile business.

This phenomenon creates a self-reinforcing cycle where tech oligarchs’ pronouncements shape markets, which in turn validates their status as visionaries, all without requiring the messy, time-consuming work of producing genuine technological breakthroughs that improve human capability. The myth of innovation has, ironically, become more valuable than innovation itself.
Yet innovation is not just about creating new products or technologies to make a buck; it is about transforming ideas into tangible improvements that meet the nation’s needs and aspirations. True innovation should solve real problems, enhance human capabilities, and foster economic progress. This definition implies a broader and more meaningful scope than the often narrow, short-term profit-driven interpretations of innovation prevalent today.
This narrow vision of innovation emerged roughly alongside tech culture itself, both as a goal in its own right and as a means of rationalization and justification. But it wasn’t always this way; if you set aside the mythology of Silicon Valley and compare the past half-century to the half-century that preceded it, the United States has seen a significant slowdown in innovation.
The grand dreams of the space age, when humanity’s future seemed boundless, have largely fizzled out into repackaged older inventions. The speed record for human travel – 24,790 miles per hour, set during Apollo 10’s reentry into Earth’s atmosphere in 1969 – remains unbroken; the speed of commercial airliners has actually decreased. U.S. forces withdrew from Kabul the same year that the AK-47 – favored weapon of the mujahideen – had its 74th birthday. Even the drones changing the nature of the battlefield in Ukraine are not much more than model airplanes with radios and bombs strapped to them. The rovers that the U.S. has sent to Mars and, more recently, that China has sent to the far side of the moon (where, by the way, the U.S. has not landed) are not far ahead of technology invented around the time Dylan went electric, and the rockets that launched them are little more than incremental improvements on the Nazi V-2 rockets that terrorized London beginning in 1944.
Neal Stephenson, the science fiction writer and former advisor to Jeff Bezos’s rocket company, Blue Origin, wrote in a 2011 essay on American technological stagnation that modern American corporate cultures stifle truly innovative ideas. He described engineers brainstorming new concepts only to be shut down by someone who quickly discovers a vaguely similar previous attempt that either failed (and is thus deemed unworthy of further exploration) or succeeded (meaning first-mover advantage is lost). Managers, fearing for their jobs, avoid risky projects, preferring the safety of incremental improvements.
According to Stephenson:
There is no such thing as “long run” in industries driven by the next quarterly report. The possibility of some innovation making money is just that – a mere possibility that will not have time to materialize before the subpoenas from minority shareholder lawsuits begin to roll in.

This aversion to long-term investment in groundbreaking technology stems in part from changes in the tax code during the 1970s and 1980s, which weakened the private sector’s incentives to invest in long-term research and development relative to previous decades. But it also stems from a change in direction from the White House. After the U.S. had vanquished the Soviet Union in the Space Race, the Nixon administration, buffeted at home by unrest over civil rights, Vietnam, and stagflation, pivoted to a less expensive way of winning the Cold War: bringing the USSR to its knees economically by forcing it to compete instead on consumer technology development, which enhanced the quality of life for American – and European – citizens.
That approach to technology development has persisted to the present day. The most successful tech businesses since the late-1990s dot-com bust have two things in common: (1) they make their money from advertising or from facilitating the sale of consumer products; and (2) their innovations have more to do with regulatory arbitrage and profit optimization than with producing groundbreaking technological advancements.
Consider Amazon, whose rise from online bookstore to global retail colossus is often framed as a tale of relentless innovation. Yet for most of its history Amazon was simply a mail-order catalog that happened to be on the internet, and Jeff Bezos’s real innovation was evading sales tax to undercut traditional retailers. Even Amazon Prime’s lure of next-day delivery is not a marvel of modern science but a feat of ruthless operational efficiency – one that once relied on senior citizens living out of RVs to staff low-wage warehouse jobs, and that still rests on the Dickensian practice of denying workers and delivery drivers bathroom breaks.

Facebook – now Meta – started as a digital yearbook for college students (most definitely not a new invention), and many of the features it hawks as “making the world more connected” are simply a front for gathering data and monetizing attention. And while Google’s search algorithm was truly innovative when it was released, the company’s imperative to sell ads has led it to prioritize paid content over organic discovery and diminished the product’s utility, such that users now share tricks on blogs and Reddit for accessing stripped-down versions of Google search whose results aren’t polluted by sponsored content and dubious AI-generated summaries.
At least Facebook and Google figured out how to monetize their products, unlike WeWork – a real estate company masquerading as an innovative tech firm, complete with an app and free kombucha in the kitchen – or Theranos, which pushed an illusion of innovation to criminal extremes without ever delivering a viable product. Yet both Elizabeth Holmes of Theranos and WeWork’s Adam Neumann were lauded as innovative sages without much scrutiny. Holmes is now serving a prison sentence.

As the novelty of these companies’ first-wave products waned, they began shipping repackaged versions of what they had already produced. The regular drip-drip-drip of incremental improvements in phones, tablets, and e-readers is covered fawningly by an uncritical industry press, while most of us have had the frustrating experience of buying a new device only to need new cords and chargers, or of paying ongoing subscriptions for “digital services” that used to be one-time software purchases.
Even these companies’ much-lauded “moonshots” – projects like internet by balloon, drone delivery, or self-driving cars – have mostly failed, teeter on the edge of failure, or have been delayed repeatedly without ever leaving ‘pilot’ status, revealing themselves to be performative stunts rather than practical ventures. And none of them were really new inventions or pushed the boundaries of what’s possible; they only pushed the boundaries of how much money a company could spend. There’s a big difference between taking a performative shot at the moon and actually landing on it and coming home.
As the world enters the AI age, the dominant tech players’ investment and innovation strategies have not changed. The landmark 2017 paper “Attention Is All You Need,” which sparked the current boom in generative LLMs, was the product of major technology companies’ material interest in solving specific natural language processing problems in machine learning. (The paper was published by eight scientists at Google, including Noam Shazeer, who went on to found CharacterAI after safety concerns were raised internally about his product.) Natural language processing is a cornerstone of human interaction with these companies’ digital products; it also improves the quality of the search and targeted advertising at the core of their revenue streams, and it is crucial to digital voice assistants, a fixation of the market in the mid-2010s (Siri was released after an Apple acquisition in 2011, Amazon’s Alexa in 2014, Google Assistant in 2016). So major technology companies dedicated significant resources to acquiring talent for research that looks abstractly academic on the surface but was targeted at their narrow business interests at the time.
What was certainly not in Google’s, Amazon’s, Meta’s, or Microsoft’s business interest during the lead-up to “Attention Is All You Need,” or even in the years since, is building a technology that can help the United States win a great power competition with geopolitical adversaries, cure cancer, solve climate change, or achieve any of the other convenient goals tech executives cite today for why they can’t abide a more competitive market or be subject to accountability. I will go deeper into the business model of LLMs in a future post, but it’s important to note for today that most LLM applications share a common flaw: they are solutions in search of a problem. Chatbot apps like ChatGPT or CharacterAI are byproducts of these companies’ drive to entrench their businesses and improve the profitability of their existing products, not tools purpose-built to improve lives or make the U.S. more competitive internationally.
In July 1959, the United States and the Soviet Union clashed over cutting-edge technology and ideology. Yet the flashpoint was not a showdown over tanks, missiles, or spy planes in Germany or Korea, but a model kitchen in Moscow. Vice President Richard Nixon traveled to Moscow for the opening of a U.S. exhibit showcasing consumer goods from over 450 American companies. Nixon and Soviet Premier Nikita Khrushchev’s spirited debates throughout the visit highlighted their competing visions of prosperity and technological advancement. This scene, known famously as “The Kitchen Debate,” unfolded inside a $14,000 American home replete with modern appliances like a dishwasher, refrigerator, and stove – a testament to the lifestyle attainable by the average American worker.
I showed a clip of the Nixon-Khrushchev debate to a class of graduate students that I teach, and the entire class erupted in laughter at this line of Nixon’s:
There are some instances where you may be ahead of us. For example, in the development of the thrust of your rockets for the investigation of outer space. There may be some instances, for example, color television, where we're ahead of you.
With the benefit of hindsight, we know it’s laughable to equate reaching for the stars with reaching for the remote, just as it should be ludicrous to treat using AI to cure cancer and using AI to create pedophilic chatbots as equivalent pursuits. And yet, not even Nixon would have argued that incentivizing the invention of better consumer products like color television would get us spaceflight – but that’s exactly the argument Marc Andreessen and Sam Altman are making today, and one that policymakers largely accept unquestioned, all because of these men’s reputations as innovators.
The argument that “we can’t slow down innovation” is a convenient way to sidestep the more critical questions of “What should we be innovating, how, and why? Who is best incentivized to innovate? How do we most efficiently generate the innovations America needs?” The answer, when it comes to ensuring America maintains its global lead, may not actually be the market; that’s not what markets are for.
As the U.S. stands at a crossroads similar to that of the 1950s and 1960s, when it had to choose between focusing only on market-driven consumer technology and also investing in state-driven scientific research, it must now decide how to navigate the new landscape of AI and technology in the face of a rising China. We can have both, but the latter requires hard work – and not buying every argument about innovation that self-interested tech companies are selling. The decision will determine not just the economic future of the nation but its position on the global stage and its ability to address the profound challenges of our time.
Selling ads, tax avoidance, regulatory arbitrage, planned obsolescence, and hoping consumers forget to cancel subscriptions – while certainly legal, often wildly profitable, and perhaps laudable as innovative revenue strategies – are decidedly unlikely to produce innovations. In defining and pursuing true innovation, policymakers must look beyond the immediate profits and conveniences offered by today’s tech oligarchs.
Only by asking the right questions and setting clear, purposeful goals can the United States ensure that its place in the world through the 21st century surpasses that of its post-war ascendancy, when we not only raised the standard of living throughout the entire Western world, but also split the atom, walked on the moon, and began looking to the stars.