President Donald Trump signed an executive order on Thursday aimed at establishing a national policy for artificial intelligence (AI), directing federal agencies to challenge state-level AI regulations. The move, proponents say, is intended to alleviate regulatory burdens on AI startups, while critics argue it will create further legal uncertainty and potentially hinder consumer protection. The order comes after previous attempts to pause state AI rules in Congress stalled, leaving a fragmented regulatory landscape.
The executive order instructs the Department of Justice to form a task force within 30 days to contest state AI laws, asserting that AI constitutes interstate commerce and therefore falls under federal jurisdiction. Additionally, the Commerce Department has 90 days to identify state AI laws deemed “onerous,” a designation that could impact a state’s access to federal funding, including crucial broadband infrastructure grants.
The Push for Federal AI Regulation
The core of the order centers on a desire for a unified national framework for governing artificial intelligence. It tasks the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC) with developing federal standards that could supersede existing state regulations. Ultimately, the administration hopes to collaborate with Congress to enact a comprehensive, nationwide AI law, according to the White House.
This action follows a period of increasing state-level activity in AI regulation. Several states have begun enacting laws addressing issues like algorithmic bias, data privacy, and the responsible use of AI technologies. Lawmakers on both sides of the aisle have expressed concern that a patchwork of state laws could stifle innovation and create compliance challenges for companies operating across state lines.
Concerns from Advocacy Groups
The executive order has drawn criticism from organizations advocating for responsible AI development. Michael Kleinman, head of U.S. Policy at the Future of Life Institute, characterized the order as a “gift for Silicon Valley oligarchs,” arguing it prioritizes the interests of large tech companies over accountability and consumer safety. He suggested the move is designed to shield these companies from stricter state oversight.
David Sacks, Trump’s AI and crypto policy advisor, has been a key advocate for preempting state AI laws. His influence is evident in the order’s focus on reducing regulatory hurdles for AI developers.
Impact on Startups and Innovation
While the goal is to foster innovation, many in the startup community fear the opposite effect. Andrew Gamino-Cheong, CTO and co-founder of AI governance company Trustible, believes the executive order will ultimately hinder AI innovation. He explained that larger companies have the resources to navigate legal challenges, but smaller startups will struggle with the resulting uncertainty.
This uncertainty, experts say, extends to sales cycles and costs. Legal ambiguity makes it more difficult to sell AI solutions to risk-averse industries like finance, healthcare, and law, increasing the time and expense required to close deals and secure necessary insurance. This could slow down the adoption of AI technologies overall.
Hart Brown, principal author of Oklahoma’s AI task force recommendations, highlighted the challenges startups face in building robust regulatory governance programs. These programs are often expensive and time-consuming, particularly for companies operating in a rapidly evolving regulatory environment. The order could prolong this period of uncertainty, forcing startups to dedicate resources to compliance rather than innovation.
Arul Nigam, co-founder of Circuit Breaker Labs, echoed these concerns, questioning whether startups will be left to self-regulate in the absence of clear federal guidelines. He noted the need for open-source standards and a more definitive path forward for AI development.
Potential for Legal Battles
Legal experts anticipate significant challenges to the executive order. Sean Fitzpatrick, CEO of LexisNexis North America, U.K., and Ireland, predicts that states will vigorously defend their authority to regulate AI within their borders, potentially leading to cases that reach the Supreme Court. This legal battle could take years to resolve, leaving the regulatory landscape in flux.
Gary Kibel, a partner at Davis + Gilbert, acknowledged that businesses would benefit from a single national standard, but cautioned that an executive order may not be the appropriate mechanism to override existing state laws. He warned of the potential for extreme outcomes, either overly restrictive regulations or a complete lack of oversight, both of which could disadvantage startups and stifle innovation.
Meanwhile, the App Association urged Congress to act swiftly and create a “comprehensive, targeted, and risk-based national AI framework,” arguing that a prolonged court fight would be detrimental to the industry. The group emphasized the need for a clear and consistent regulatory approach.
The next steps involve the Department of Justice forming its task force and the Commerce Department identifying “onerous” state AI laws. The outcome of these initial actions, and the subsequent response from states and Congress, will be crucial in shaping the future of artificial intelligence regulation in the United States. The potential for protracted legal challenges and the ongoing debate over federal versus state authority remain significant uncertainties.