Senate Defeats Cruz’s AI Preemption Plan 99-1

In a rare display of near-unanimity, the US Senate voted 99 to 1 in the early hours of July 1, 2025, to strip out a provision that would have blocked states from enacting consumer protection laws regulating artificial intelligence for up to a decade. The lone dissenting vote came from Sen. Thom Tillis (R-N.C.); even Sen. Ted Cruz (R-Texas), the provision's author, sided with the amendment striking his own proposal.
Background of the Proposal
Earlier this year, the House approved its budget reconciliation bill containing a controversial 10-year moratorium on state AI regulation. Cruz, who championed the Senate version of the measure, touted it as necessary to prevent an “EU-style regulatory straitjacket” that would stifle innovation in generative models and autonomous systems.
Key elements of the proposal included:
- A ban on any state law regulating AI-driven robocalls, deepfakes, or autonomous vehicle safety.
- Withholding up to 100 percent of the $42 billion Broadband Equity, Access, and Deployment (BEAD) fund from states that enacted preempted AI rules.
- An alternate version that narrowed the funds at risk to a separate $500 million AI research and deployment account.
Senate Floor Debate and Vote
During the debate, Sen. Maria Cantwell (D-Wash.) and Sen. Marsha Blackburn (R-Tenn.) led a bipartisan charge against federal overreach, arguing that state regulators have acted as first responders, filling a policy vacuum left by Congress.
“We can’t just run over good state consumer protection laws,” Cantwell said, citing 24 states that enacted AI safeguards in 2024 alone. “This body has proven it cannot keep pace with emergent technology on its own.”
Cruz initially defended the moratorium as a way to safeguard investment in AI startups and protect the intellectual property rights of creative professionals. However, after widespread pushback from 17 Republican governors, 40 state attorneys general, the Heritage Foundation, and the Center for American Progress, he withdrew support for the provision.
Technical Breakdown of the Proposed Moratorium
The Senate text defined AI regulation broadly to include any state statute or rule that directly or indirectly applies to algorithms, models, or automated decision-making systems. Under the moratorium:
- States would have been barred from enacting laws targeting AI systems regardless of the underlying technology, including models built with deep learning frameworks such as TensorFlow or PyTorch.
- Any legislation requiring transparency in model training data, or imposing liability for unfair or deceptive AI practices (the standard set by Section 5 of the FTC Act), would have been preempted.
- Eligibility for broadband subsidies, such as funds disbursed under the Infrastructure Investment and Jobs Act, would have been jeopardized by noncompliance.
GAO experts warned that linking broadband deployment grants to AI policy enforcement could create a chilling effect on initiatives to close the digital divide. Former FCC Chair Ajit Pai noted that conflating infrastructure funding with AI policy would inject regulatory uncertainty into both sectors.
New Executive and Legislative Developments
Just days after the Senate vote, the White House released an executive order directing the Office of Management and Budget (OMB) to develop a coordinated AI governance framework that respects both federal authority and state experimentation. The order calls for:
- Model risk management guidelines for agencies, aligned with NIST’s AI Risk Management Framework (AI RMF 1.0).
- Collaboration with state attorneys general on consumer-facing AI products.
- Periodic interagency reviews of emerging AI risks in critical infrastructure.
Meanwhile, the House Judiciary Committee advanced the Artificial Intelligence Accountability Act, which would establish federal baseline standards for transparency, bias mitigation, and safety testing. If enacted, the act could supersede many state regulatory experiments or, at minimum, set a federal floor that states must meet or exceed.
Implications for State-Federal Relations in AI Governance
The overwhelming vote preserves a patchwork of state laws shaping the AI landscape. States such as California and Illinois have already passed statutes mandating disclosure of synthetic media, while Texas implemented restrictions on facial recognition in public surveillance.
Legal scholars caution, however, that a fragmented regime can increase compliance costs for developers deploying models across multiple jurisdictions. Harvard Law Professor Susan Crawford commented:
“A healthy tension between state innovation and federal uniformity is essential. States serve as policy laboratories, but without eventual federal harmonization, companies may face legal uncertainty and consumers may not receive consistent protections.”
Expert Perspectives
- Rachael Goodman, Center for AI Safety: “State rules often address emergent harms more quickly than Congress can. Preempting that work risks leaving real victims without recourse.”
- Jack Clark, AI Policy Analyst: “We need a multi-level governance model. The Senate vote underscores the importance of bottom-up policy iteration that informs stronger federal laws.”
- FTC Commissioner Alvaro Bedoya (public statement): “Linking broadband funding to AI policy enforcement is a novel but dangerous tool. It weaponizes infrastructure grants in service of unrelated regulatory goals.”
Looking Ahead
With the moratorium provision excised, states are free to continue enacting targeted AI safeguards. Congress still faces pressure to codify baseline federal standards before the 2026 election cycle. Industry groups, including BSA | The Software Alliance and the Information Technology Industry Council (ITI), have signaled support for a national AI bill that balances innovation incentives with consumer protections.
The Senate vote also sends a broader message: in the fast-moving AI domain, bipartisan consensus can emerge to uphold federalism and check federal overreach, even while landmark federal legislation remains elusive.