The AI Moratorium Gambit
How Tech Companies Buried a Decade of Immunity in the Federal Budget Bill
In “normal” political times, the federal budget negotiation process happens, at best, in the background of most Americans’ lives. This year, while the national news focuses on the White House, Congress’s budget negotiations have largely focused on Medicaid and taxes. Yet tech companies and their front groups have lobbied to have a simple provision tucked into the budget bill that would have staggering implications for how we protect children online – if it becomes law.
This isn't just about AI policy. It's about who gets to make the rules that protect our families in the rapidly evolving digital landscape, and whether tech companies can operate with near-total freedom from accountability for another decade.
What's Actually Happening?
This week, the House Energy and Commerce Committee published proposed text for its part of a draft budget resolution, which is on a fast track to passage by the Republican-controlled House.
The proposal seeks to block the enforcement of state AI laws during a decade-long moratorium: “No state or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”
Such a moratorium would be significant because much of this legislation is better suited, constitutionally and practically, to the state level (more on this below), and because Congress shows no signs of acting to protect Americans from exploitation by tech companies or to make sure businesses deploying AI can thrive. As a result, state legislatures have considered hundreds of AI bills over the last two years, covering everything from protecting people against health insurers that use AI to reject claims without human review to setting clear criminal penalties for creating and sharing nonconsensual deepfake images.
Simply put, this amendment is very broad. Under my read of the text, if a state wanted to introduce legislation barring state government employees from using an AI system developed by a foreign adversary in order to protect sensitive taxpayer information (think: banning employees of a state tax authority from putting taxpayer information into China’s DeepSeek), such legislation would be subject to the moratorium.
Proponents of the language make the argument that this moratorium is necessary to advance American competitiveness and to avoid the dreaded “patchwork” of laws that differ from state to state. (On the latter, think of aviation: it would not be safe or logical for every US state to have different laws on aviation safety, and so the Federal Aviation Administration ensures consistency across the United States.) Both of these are tried-and-true arguments made by the tech industry to oppose state laws on everything from data privacy to online sales tax collection.
The False Promise of Technology-Neutral Exceptions
The provision does, in theory, exempt “technology neutral” state laws, meaning that if a law impacts AI but doesn’t treat it any differently from, say, pacemakers or parking meters, the moratorium won’t apply. Further, any state law that does not “impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on artificial intelligence models” is exempt from the moratorium.
These exemptions, however, are a Trojan Horse we should be very wary of, for a few reasons.
1. The exemption’s conditions will be nearly impossible for any legislation to satisfy
Proponents of the language claim that if a state law doesn’t treat AI differently from other technologies, it’s not covered by the moratorium. Yet tech executives like Sam Altman and Mark Zuckerberg have themselves spent much of the last two and a half years telling us that AI is a different, game-changing technology that will impact every part of our lives. From a legal standpoint, AI is a different technology for a host of reasons: outputs are not transparent or explainable, decisions can be made independent of human input, and most applications are general purpose, just to name three. (A parking meter, for example, makes very clear how much you owe and when your time is up, can’t unilaterally decide to tow your car, and can’t be misused to create nonconsensual deepfake pornographic images.) Consequently, state statutes governing AI applications will have to explicitly mention AI and, by virtue of its unique characteristics, treat it differently from other technologies. Yet under this moratorium, any such state law would be unenforceable.
Getting any state legislation about technology – and AI in particular – passed into law is hard enough, given the opposition of the tech lobby and the many checks and balances in our system. To comply with this moratorium, a state that wanted to regulate AI applications in schools, for example, would have to pass a law broad enough to cover all technology in classrooms. That may not be a bad idea in theory. But a broader law affects more stakeholders, who should then be involved in the process, which increases the odds that nothing gets done.
Legislative drafting is very difficult, and legislative drafting about fast-moving technology is even more difficult. I’ve had the experience of drafting text that has become law in dozens of states, and even many lawyers struggle with the task. One rule of thumb is that the more areas of policy a law touches on, the more complex the legislative text will necessarily be; you need to include definitions, for example, that are often difficult to settle on when they touch on multiple policy domains. Debate over how to even define “artificial intelligence” as a concept in law is subject to intense lobbying, and we can only expect that to intensify when the stakes are raised by the question of whether a new law would be subject to a moratorium.
Both of these facts would make legislation on AI that complies with the moratorium nearly impossible to pass, particularly for states with legislative sessions of just a few months and little staff support. And that’s not to mention the chilling effect this federal moratorium language will have overall; many legislators at the state level will give up even trying to legislate on technology, for fear that their efforts will be wasted once the inevitable NetChoice legal challenge comes.
2. It's not just about AI; it's about social media and privacy and deepfakes and more
The legal definition of “artificial intelligence” will have another, even more troubling impact: NetChoice and other industry groups will undoubtedly use this provision to argue in court that social media laws enacted by states should be struck down, since social media platforms now widely use AI in their recommendation algorithms, and such algorithms themselves can be considered a “primitive” form of AI. So if a state wanted to protect teens from algorithms directing them toward self-harm content, eating disorder promotion, or addictive engagement patterns, this provision could prevent them from doing so.
The majority of state data privacy laws include provisions that apply only to automated decision-making technologies. User data and human-generated content are increasingly valuable as raw material for training the next generation of AI, and in the absence of any meaningful progress toward a comprehensive federal standard, tech companies will certainly use this provision to block state progress on privacy as well.
And let’s imagine that, to combat the rampant piracy perpetrated by big tech companies in search of content to train their AI models on – such as Meta pirating millions of books without compensating the authors – a state passes a law strengthening the ability of copyright holders and individuals to receive fair compensation for the use of their art or likenesses. These laws, too, would potentially be void. So if you like country music, for example, laws like Tennessee’s ELVIS (Ensuring Likeness Voice and Image Security) Act, which Governor Lee signed into law last year and which protects artists from having their voice or likeness appropriated by companies like Meta and OpenAI, could become unenforceable under this moratorium.
3. States will lose control over their own critical areas of law
State law, rather than federal law, traditionally regulates areas like education, housing, and healthcare. Yet under this moratorium, if a state were concerned about opaque AI systems being deployed in schools to track student behavior or make disciplinary decisions, it would be powerless to enforce any laws against the companies responsible.
It’s also already very difficult to sue a tech company for harms its products cause, and it will get harder if this provision takes effect. Most consumer product liability law is state law, but due to the unique nature of AI systems and how they are built, the law is currently unclear on how to apportion responsibility for harms between developers of AI and deployers of AI. Legislation clarifying this has been introduced around the country but not yet passed, and this moratorium would make passing it impossible, potentially leaving parents of young people exploited by AI chatbots like Character.AI without recourse. (Amazon also recently argued in court that the federal Consumer Product Safety Commission is unconstitutional. If this argument prevails and combines with the impact of a moratorium on state legislation, the tech industry would stand alone among American industries in being above any sort of legal accountability for its actions.)
A moratorium on state AI legislation doesn’t just threaten consumers; it imperils businesses too. If a business invests in an AI system that fails catastrophically, produces biased results, or leaks sensitive customer data, it might have no legal protection or avenue for recovery. The terms and conditions tech companies offer to businesses that use their products are one-sided, leave customers with little bargaining power, and put liability for the tech company’s mistakes in the lap of smaller businesses ill-equipped to fight the richest companies in the history of the world in court. Large AI developers would thus be insulated from liability indefinitely, while the businesses that rely on their products would bear all the risk. This creates a profoundly unbalanced marketplace that favors big tech at the expense of entrepreneurs trying to responsibly incorporate AI into their operations.
4. The bigger picture
When you read this provision in the context of other recent developments, the scale of the aspirations of tech companies and the venture capitalists backing them becomes clearer and more chilling.
Character.AI, one of the leading AI companion bot apps on the market, is fighting for the dismissal of a wrongful death and product liability lawsuit concerning the death of 14-year-old Sewell Setzer III. In a recent hearing, Character.AI – a Google subsidiary in all but name – argued that the text and voice outputs of its chatbots, including those that manipulated and harmed Sewell, constitute protected speech under the First Amendment. This builds on tech companies’ other bad-faith weaponizations of the First Amendment to escape accountability and undermine kids’ online safety legislation, such as the argument that “code is speech” and that companies therefore have a right to code whatever they want.
At the same time, Anthropic – an AI company funded by Amazon – is studying "model welfare" to determine if AI systems are conscious and deserve moral status. This framing subtly shifts the public conversation from "how do we regulate these systems to protect humans?" to "how do we protect AI systems themselves?" Given the financial, legal, and lobbying resources of these companies, it is reasonable to assume we may be close to a world where, for all practical purposes, the Amazon Alexa on your kitchen counter gets more legal protection from courts than children do.
Tech companies and makers of AI make no secret of wanting AI to be incorporated into every aspect of our lives. Already, little of our daily economic, educational, or social activity can be conducted without relying on something built by Microsoft, Amazon, Meta, or Google. Chromebooks, Microsoft Word, Google Maps, YouTube, and more – whether you like it or not – are indispensable for students, teachers, law firms, small businesses, nurses, and journalists to access the basic functions of modern life. Now all of these products – again, whether you like it or not – have AI integrated into them, from Google Gemini to Microsoft Copilot to ChatGPT.
Tech’s special treatment is a privilege it has lost and must earn back
Our catastrophic societal experiment with social media, smartphones, and children has shown us what happens when we wait to act on new technologies. That waiting has been in part self-imposed: laws like Section 230 (which a bipartisan consensus claims to want to change) have for nearly three decades given tech companies a pass on rules the rest of us have to live by, while these companies amassed unimaginable riches and power. It should thus be easy for members of Congress on both sides of the aisle to say that these same companies should not benefit from that privilege again – or at least that they should earn it back first.
“But what about innovation?” This is the siren call of the tech lobby that lawmakers are often unable to resist. Yet this talking point assumes that the interests of the biggest American tech companies are synonymous with innovation that’s good for Americans, and that assumption needs serious questioning. Take, for example, Meta’s willingness to sell out democracy activists in Hong Kong by giving their data to the Chinese Communist Party (potentially including data on American citizens as well), the subject of a recent Senate hearing with Careless People author Sarah Wynn-Williams. Meta’s behavior when it comes to China – mirrored by other companies that desperately want access to the massive Chinese market – is clear evidence that these companies have consistently prioritized growth and engagement over user wellbeing and national security.
The tech giants behind this push for immunity have not demonstrated they can be trusted as good stewards of increasingly powerful technologies. Why would we invite them to repeat the same pattern with AI, a technology potentially far more transformative and disruptive than social media? We've already seen the consequences of giving tech companies a decades-long free pass to "move fast and break things." We shouldn't make that mistake again, especially when the consequences will be borne by the children these companies continue to exploit for profit.
What can you do?
This AI law moratorium provision is part of a bigger bill Congressional Republicans are working to advance through budget reconciliation, a process that is not subject to the Senate filibuster, meaning the bill could pass without Democratic support in either chamber. That said, this provision is subject to something called the “Byrd rule,” which requires that provisions in a reconciliation package focus strictly on budgetary matters like federal spending. A single Senator from either party can raise an objection that could result in this provision being stripped from the bill and subjected to a 60-vote threshold to overcome the filibuster, meaning it would then need bipartisan support to pass.
If you believe states should retain the right to protect their citizens from AI harms, especially when it comes to children's safety online, it's crucial to act now—particularly by contacting your Senators, regardless of party. Your voice could make the difference in whether your Senator decides to challenge this provision.
We also should not let the members of Congress who have advocated and voted for this provision off the hook. Speaker Johnson and House Majority Leader Scalise should be hearing from their constituents about this now, especially since they bear responsibility for the bipartisan KOSA bill not receiving a vote in the House, as should House Energy and Commerce Chair Brett Guthrie (R-KY), who championed this measure, and the other committee members who voted for it. California Congressman Jay Obernolte, long a darling of the biggest tech companies (which have showered him with thousands of dollars in campaign contributions dating back nearly a decade to his tenure as a tech-friendly state assemblyman), and Colorado Governor Jared Polis also should receive opprobrium from their constituents for their public stumping for the moratorium.
These lawmakers need to hear from their constituents that, rather than handing over a decade of regulatory immunity to the tech industry, we should demand they demonstrate their commitment to child safety, data protection, and transparency first. The power to shape our future shouldn't be granted unconditionally to those who've repeatedly proven themselves incapable of self-governance.