The future of artificial intelligence regulation in the United States is at a critical juncture, with a growing battle between tech industry proponents of a single national standard and state governments seeking to protect their residents from potential harms. While numerous states have introduced bills to address AI risks, including California's SB 53 and Texas's Responsible AI Governance Act, efforts are underway in Washington to preempt these laws, creating uncertainty for both innovators and consumers.
This push for federal control comes as lawmakers grapple with balancing the rapid advancement of AI technology against the need for responsible development and deployment. A key point of contention is whether a uniform national framework is necessary to foster innovation or whether allowing states to experiment with different approaches is more effective.
The Fight for AI Regulation: States’ Rights vs. National Standards
For the first time, the U.S. is seriously considering how to regulate artificial intelligence. The debate isn't centered on whether regulation is needed, but rather on who should be responsible for creating and enforcing the rules. The absence of a comprehensive federal AI standard has prompted a wave of state-level legislation aimed at protecting consumers from issues like bias, disinformation, and job displacement caused by AI systems.
Tech companies, alongside many AI startups, argue that a fractured regulatory landscape will stifle innovation and create logistical nightmares. Josh Vlasto, co-founder of the pro-AI political action committee Leading the Future, stated that state-level regulations could hinder American competitiveness in the global AI race, particularly with China. This argument resonates with some in Washington who favor a lighter touch and industry self-regulation.
However, this industry-favored approach faces stiff opposition. Lawmakers worry that blocking state efforts before federal standards are in place could leave the public vulnerable to unchecked AI power. Critics argue that waiting for a comprehensive federal law could take years, leaving consumers exposed in the meantime.
Preemption Efforts in Congress and the White House
Currently, two major avenues are being explored to achieve federal preemption of state AI laws. The House of Representatives is considering attaching language to the National Defense Authorization Act (NDAA) that would effectively block states from enacting their own AI legislation. Negotiators are reportedly attempting to narrow the scope, potentially preserving state authority over areas like child safety and transparency, acknowledging the sensitive nature of these issues.
Simultaneously, a draft Executive Order (EO) from the White House reveals a similar strategy. The leaked document proposes establishing an "AI Litigation Task Force" to challenge unfavorable state laws in court, directing agencies to scrutinize and potentially invalidate "onerous" state regulations, and pushing the Federal Communications Commission and the Federal Trade Commission to create nationally binding AI standards. The EO would also grant significant influence to David Sacks, President Trump's AI and Crypto Czar.
Industry Funding and Opposition to Regulation
The push against state-level AI regulation is backed by substantial financial resources. Pro-AI PACs such as Leading the Future, which has raised over $100 million, are actively funding campaigns to oppose candidates who support stricter AI controls. Build American AI, the advocacy arm of Leading the Future, supports full preemption even without accompanying federal consumer protections, arguing that existing legal frameworks are adequate.
Nathan Leamer, Executive Director of Build American AI, suggests focusing on resolving AI-related harms through legal challenges *after* they occur rather than proactively preventing them. This approach contrasts with the preventative measures often found in state legislation.
The States’ Perspective and the Role of Innovation
State lawmakers pushing for regulation argue they are responding to immediate constituent concerns and can adapt more quickly to evolving AI risks. Alex Bores, a New York Assembly member and congressional candidate who sponsored the RAISE Act requiring safety plans for large AI labs, believes trustworthy AI development will become a market advantage. He emphasizes that a national standard shouldn't come at the expense of state-level innovation and consumer protection.
So far this year, 38 states have enacted over 100 AI-related laws, primarily focusing on areas like deepfakes, transparency requirements, and responsible government use of machine learning. Despite concerns about a "patchwork" of regulations, the vast majority of these state laws don't directly impose requirements on AI developers.
What’s Next for AI Regulation?
Representative Ted Lieu and the bipartisan House AI Task Force are working on a comprehensive federal AI bill that addresses issues ranging from fraud to healthcare and child safety. But passing such a sweeping bill will be a lengthy and complex process, potentially taking months or even years. The immediate future hinges on whether preemption language makes it into the final version of the NDAA, and whether the White House revives and signs the draft EO. Further complicating matters, a growing number of lawmakers and state attorneys general are publicly opposing federal preemption, prioritizing states' rights to address emerging risks within their jurisdictions.
The next few weeks will be crucial as Congress aims to finalize the NDAA before Thanksgiving. The outcome will signal the direction of AI policy in the U.S. and determine the balance of power between federal and state governments in regulating this transformative technology.