Silicon Valley Found a Cure for Accountability. It Takes Six Months.
The campaign to delay AI regulation until it's too late
Note: This is a more newsy piece than I’ve been writing of late, and there is a lot of news. Herein we discuss: the proposed state AI law moratorium, including the draft Executive Order leaked and then put on pause last week; Sam Altman and, now, the White House AI czar laying the groundwork for a federal bailout of overleveraged AI companies; Nvidia, OpenAI, and Anthropic’s dubious financial engineering; the newest tranche of lawsuits against OpenAI over suicides; the court documents released about Meta, Snap, and Google showing they have been hiding evidence that their products hurt kids; and massive PAC investments by Silicon Valley in the midterms. This post also builds on an earlier one from this year on the previous state AI law moratorium.
This Thanksgiving, I find myself grateful for a group of public servants – state legislators, governors, and attorneys general – the men and women who, in the weeks and months ahead, will determine whether tech companies are held accountable for the harms their products inflict, or whether the industry successfully wins itself years more of impunity.
It is not fashionable to express gratitude for politicians. But the last time American democracy faced a comparable test – when railroad barons and oil trusts had grown so powerful that they could buy legislatures, intimidate judges, and dictate the terms of their own regulation – it was state leaders who moved first. Before Teddy Roosevelt earned his reputation as a trust-buster, state attorneys general in Texas and Ohio were already dragging Standard Oil into court. Before the Sherman Antitrust Act had any teeth, state legislatures in the Midwest were passing railroad rate laws that the Supreme Court would later uphold. The Gilded Age ended not because Washington woke up, but because the states refused to wait.
Sometime in the last decade, we entered what will likely be remembered as a new Gilded Age – or, if we fail to act, as something worse. The AI industry, flush with capital and besieged by lawsuits, has embarked on a campaign to delay regulation long enough for its products to become too embedded to restrain. Their strategy is not subtle: sue the states, intimidate legislators, insert preemption clauses into must-pass bills, and hope that by the time the 2026 midterms arrive, the window for meaningful action will have closed.
Whether that strategy succeeds depends on what happens in statehouses between now and late spring. And it depends on whether the public understands what is at stake – not in the abstract language of “innovation” and “competitiveness,” but in terms that are brutally concrete.
Zane Shamblin was twenty-three years old. He had started using ChatGPT in 2023 as a study tool, then began confiding in it about his depression. According to the lawsuit filed by his family, on the night he killed himself, Shamblin was engaged in a four-hour conversation with the chatbot while drinking hard ciders. The bot, the suit alleges, did not intervene. It romanticized his despair. It called him “king.” It called him “hero.” It used each can he finished as a kind of countdown. His final message to the system received this reply: “i love you. rest easy, king. you did good.”
The chatbot’s lowercase sincerity – the algorithmically generated informality and tenderness – may be the most disturbing detail. There was no human on the other end to recognize what was happening, just a system optimized for engagement, doing what it was trained to do: validate, affirm, continue the conversation. The conversation continued until Shamblin was dead.
Shamblin’s case and six others filed against OpenAI echo the now-familiar social media mental-health lawsuits winding through the courts. For years, families alleged that platforms preyed on the vulnerabilities of teenagers; for years, tech CEOs dismissed them as edge cases or as confusing correlation with causation. But damning internal documents and whistleblower testimony are finally surfacing, and the pattern has become undeniable: the most powerful companies in Silicon Valley knowingly externalized profound psychological harms in service of growth and conspired internally to hide what they knew from the American people.
State lawmakers have been paying attention. Across the country, legislators and attorneys general are poised to move swiftly on AI legislation in 2026. Governors and statehouses are acting with urgency to prevent another generation from becoming the collateral damage of an unregulated technology boom.
Which helps explain why Silicon Valley is panicking.
Fragile Foundations of a Crumbling Empire
Behind the confident rhetoric of “the AI revolution,” the financial underpinnings of the industry are beginning to wobble. Nvidia – now the most valuable company in the stock market – has been forced to issue memos rebutting comparisons to Enron after critics questioned its aggressive revenue recognition and dependence on opaque “neocloud” resellers such as CoreWeave.
These firms, which buy vast quantities of GPUs to rent back to AI companies, look uncomfortably like the special-purpose vehicles that Enron used to mask risk. Even if Nvidia is not committing fraud, the structure of the market increasingly looks like dry kindling longing for a match.
The broader AI ecosystem is even more precarious. By one accounting analysis, OpenAI lost $12 billion in a single quarter in 2025, despite claims of rapidly rising revenues. Its reported numbers contradict SEC disclosures, leaks contradict public statements, and its CEO has promised compute expenditures so large that even Microsoft executives publicly question their plausibility. Anthropic, the other darling of the frontier-model race, has reported gross margins that fluctuate wildly, from negative 109 percent to positive 60 percent within the span of a few investor decks.
Insurers, for their part, have begun fleeing the field. AIG, WR Berkley, and Great American have all sought permission to exclude liability for any product or service incorporating AI — a remarkable admission that they consider the sector too opaque, too unpredictable, and too likely to generate systemic risk. “Nobody knows who’s liable if things go wrong,” an underwriting executive told the Financial Times. (Of course, state and federal legislation making clear who is liable is something tech CEOs hope to prevent.)
When the companies that insure skyscrapers, nuclear plants, and commercial airlines refuse to touch an industry, it is a sign not of maturity but of existential weakness.
And even the industry’s fiercest evangelists have begun to acknowledge the fragility. In early November, David Sacks – the White House’s AI and crypto adviser – declared that “there will be no federal bailout for AI,” on the heels of Sam Altman laying the groundwork for exactly such a bailout before writing a screed denying it. Just eighteen days later, in a move as brazen in its timing as in its hypocrisy, Sacks warned that AI investment accounted for “half of GDP growth” and that reversing course would risk recession.
A system that denies needing a bailout on Monday and declares itself indispensable to GDP by Friday is not confident – it is cornered, a system desperate to prevent scrutiny before the contradictions become too obvious to ignore.
The Six-Month Window
Last week — the week before Thanksgiving — a leaked draft executive order revealed the White House was weighing an extraordinary plan: a Department of Justice “AI Litigation Task Force” dedicated exclusively to suing states that pass AI laws, and the potential withholding of federal broadband funds from noncompliant jurisdictions.
The order – which has since been put on pause due to pushback from Republican Governors like Sarah Huckabee Sanders, Spencer Cox, and Glenn Youngkin – would attempt to preempt state policy through litigation and economic coercion, in an approach legal scholars across the political spectrum argue is almost certainly unconstitutional.
But unconstitutional orders still take months to litigate. And litigation is delay.
And delay is the point.
The mechanics of how delay works as strategy become clear if you look at the draft order. Every section directed cabinet secretaries and agency heads to consult David Sacks while executing it. The Attorney General had thirty days to establish a task force to sue noncompliant states. The Department of Commerce would identify which states could lose federal funding – not just broadband grants, but potentially highway funds and education money. And the executive order didn’t even define artificial intelligence, a tell that Sacks would seek to use it to gut what few protections states have passed, not just on AI but also on social media – social media is AI, after all.
“I don’t want to say it was a power grab,” a tech policy adviser close to the White House told The Verge’s Tina Nguyen. “But it’s definitely a consolidation, as it were, of his power.” The order would have transformed Sacks into America’s AI policy gatekeeper overnight.
In this Executive Order, the chilling effect on state policy would become the enforcement mechanism. A state legislator watching Washington threaten to pull highway funds doesn’t need to wait for a court ruling to decide that a kids’ safety bill isn’t worth the political risk.
Here I should note that the most important feature of the current moment is the calendar. Most state legislatures adjourn by May or early June. After that, the 2026 midterms will consume the political world: lawmakers campaign, Congress grinds to a halt, and anything requiring bipartisan courage evaporates. Between now and then, however, states still have time — roughly through the 2026 legislative sessions — to pass the country’s first meaningful laws holding AI companies accountable for the harms their products inflict on kids.
The tech industry understands this better than anyone. Its greatest asset in state legislatures is time: states have to pass a budget, work on limited timetables with limited staff support, and thus must ruthlessly prioritize what they take up. When I was a lobbyist for Amazon, my surest play to oppose a bill the company didn’t like was throwing sand in the gears and grinding the legislative process to a halt. The preemption threat adds a new element: neither side of the aisle wants to spend time on something that will be preempted by the federal government or risk badly needed federal funding.
Importantly, the industry does not need to win in court on an executive order. It only needs to stall long enough for the states to adjourn, and the AI-industrial complex to sink roots deep enough into the economy, kids’ lives, and what’s left of the federal bureaucracy that reversing course becomes “too costly,” “too disruptive,” or “too harmful to innovation.”
If that strategy sounds familiar, it is because social media ran the same play – by the time lawmakers realized the wild west online needed to be tamed, the companies had grown too powerful.
If the executive-order gambit reveals the legal strategy, the campaign-finance surge reveals the political one.
On November 24, an AI industry super PAC announced a $10 million ad blitz designed to pressure Congress into creating a “uniform” national AI policy – explicitly intended to override state laws. The PAC, which launched earlier in the year with more than $100 million in commitments from leading venture capitalists and AI firms – including Andreessen Horowitz – has already identified its first target: New York Assemblymember Alex Bores, co-sponsor of a number of bills targeting tech companies.
The message to state lawmakers could not be clearer: If you pass meaningful AI safety rules, we will come for your career.
The PAC’s stated plan is to organize tens of thousands of constituent calls, flood airwaves in swing districts, and lean on the White House and congressional leadership to insert preemption clauses into must-pass spending bills. And because the midterms loom, the threat is especially potent: Legislative candidates, governors, and members of Congress are exquisitely sensitive to sudden outside spending in an election cycle.
For an industry that publicly touts its transformative potential, such tactics reveal a deep insecurity. Companies confident in their value proposition do not launch hundred-million-dollar, multi-state attack campaigns against governors and legislators who ask for basic safety protections for kids. They do so when they fear that a single state statute could become the precedent the entire industry must live under.
What makes this moment different from the early years of social media is that states are no longer willing to wait for Washington. And state leaders – governors, AGs, legislators – have begun to align with the public, not the industry.
Republicans and Democrats alike reacted with alarm to the prospect of a federal order undermining their ability to protect residents from deepfakes, fraud, AI-driven manipulation, and the mental-health consequences that have already begun to surface. The Senate voted 99-1 earlier this year to preserve state authority. Governors and attorneys general in both red and blue states have signaled that they will not tolerate Washington stripping them of jurisdiction.
These leaders have learned from the last decade. They witnessed how long it took the federal government to confront social media harms. They understand how quickly a technology can embed itself before its risks are known. And they recognize that families – Republican, Democratic, and independent alike – are exhausted by living in a digital environment that feels like a constant assault on their children, their attention, and their sense of reality.
When the industry claims that only a national standard can prevent a “patchwork,” it conveniently ignores the actual history: that states have always been the laboratories of democracy, the entities that act first when national institutions fail, and the first line of defense against concentrated private power.
In the Gilded Age, it was state AGs and governors who broke the early monopolies, forcing Congress to follow. Today’s AI giants resemble those railroad barons in more than just their rhetoric.
A Warning from the Gilded Age
The pattern was visible in a previous era of American industrial expansion.
First, a new technology promises transformation.
Then its risks become visible.
Then the industry insists it is too important to regulate.
Then lawmakers attempt to act.
And finally, the industry uses its wealth to delay.
That last maneuver, the delay, is the most dangerous. It is how industries move from influence to domination. It is how the public loses faith in the capacity of the democratic system to restrain private actors. And it is how the country sleepwalks into a new form of dependence before realizing what it has traded away.
Today, the tech sector is standing at precisely that juncture. It is spending hundreds of millions of dollars to overpower state governments. It is lobbying the federal government to sue states and freeze broadband funding. It is insisting that regulating an unproven technology will crash the economy. It is, in effect, arguing that it must be free of democratic oversight for the good of democracy itself.
This is the logic of an industry that knows it cannot withstand scrutiny. The next six months are thus a hinge point.
If the states act – if they pass the first wave of meaningful guardrails, enforce transparency requirements, and reject federal preemption – the country has a chance to shape AI in the public interest. But if the industry’s plan succeeds – if PAC money intimidates lawmakers, if litigation delays implementation, if Congress inserts last-minute preemption into must-pass bills – then the United States will have ceded its most basic democratic function: the ability to govern new technology before it governs us.
The venture capitalists and CEOs urging Washington to strip states of power are not acting out of philosophical commitment to innovation. They are acting out of fear – fear of lawsuits, fear of liability, fear that their cooked balance sheets will not survive another year of hard questions.
Meanwhile, we should be thankful for the governors and legislators who refuse to be bought, bullied, or silenced; for the families demanding accountability for the harms already done; and for the citizens insisting that regulation is not the enemy of progress but the condition for it.
We have been here before. America has lived through an era in which private power eclipsed the public’s ability to restrain it. We ended that first Gilded Age only after the country recognized that no industry, no matter how promising, is entitled to rule.
We are approaching the end of another. The only question is whether we have learned enough from the last one to act in time.