New York has become the second state in the U.S. to enact significant AI regulation, with Governor Kathy Hochul signing the RAISE Act into law. The legislation, passed by state lawmakers in June after some industry pushback, mandates transparency and safety reporting from developers of large artificial intelligence models. This move positions New York alongside California in establishing a framework for overseeing the fast-evolving AI landscape and signals a growing demand for AI governance beyond the federal level.
The signing came after tech industry lobbying produced proposed amendments to scale back the bill. Governor Hochul ultimately agreed to sign the original version, however, with lawmakers indicating they will revisit potential changes next year, according to the New York Times. This arrangement allows the safety measures to take effect immediately while leaving room for further discussion and refinement.
What the New York AI Regulation Entails
The RAISE Act centers on increasing accountability for companies building and deploying powerful AI systems. Specifically, it requires developers of “high-risk” AI models – those deemed to pose potential harm to the state’s population – to publicly disclose details regarding their safety evaluations and procedures. These developers must also report any significant safety incidents to the New York Department of Financial Services within 72 hours of discovery.
The law establishes a new office within the Department of Financial Services to monitor AI development and ensure compliance with the new regulations. This office will review submitted reports and assess the risks posed by AI technologies operating within the state.
Failing to meet the reporting requirements, or submitting false statements, can result in substantial financial penalties: fines of up to $1 million for a first offense, rising to $3 million for subsequent violations. This financial deterrent is intended to encourage proactive safety measures and honest reporting.
Federal and State Conflicts in AI Oversight
New York’s action follows a similar move by California, where Governor Gavin Newsom signed an AI safety bill in September. Governor Hochul noted this alignment, stating the legislation creates a “unified benchmark” for AI safety among leading tech states. This is particularly relevant as the federal government has yet to implement comprehensive AI regulations.
However, the development of state-level AI laws hasn’t been without opposition. President Donald Trump recently signed an executive order directing federal agencies to challenge these state regulations, asserting federal authority over the area. The order is supported by David Sacks, Trump’s AI czar, and represents an attempt to preempt stricter state rules on artificial intelligence.
This federal challenge creates a potential legal conflict, as states argue for their right to protect their citizens from the potential harm of unchecked AI development. The outcome of any legal battles will likely shape the future of AI regulation in the U.S. and establish the division of power between federal and state authorities.
Industry Responses to the New AI Law
Reactions from the tech industry have been mixed. OpenAI and Anthropic, prominent AI developers, have publicly expressed support for the New York legislation. However, they simultaneously advocate for a national, federal approach to AI regulation, arguing that a consistent framework is necessary for innovation, according to reports.
Anthropic’s Head of External Affairs, Sarah Heck, emphasized the importance of the state-level movements, stating they “signal the critical importance of safety and should inspire Congress” to act.
Conversely, some within the industry are actively opposing the new laws. A super PAC funded by Andreessen Horowitz and OpenAI president Greg Brockman is reportedly targeting Assemblyman Alex Bores, a co-sponsor of the RAISE Act, in a future election. Bores has acknowledged the campaign against him, noting how openly the opposition has been organized.
These varying responses highlight the ongoing debate surrounding the appropriate level of oversight for increasingly sophisticated artificial intelligence systems. Concerns range from stifling innovation to ensuring public safety and mitigating potential biases inherent in machine learning models.
Other states weighing similar legislation are expected to watch the implementation of the RAISE Act closely. Its success, or any obstacles it encounters, will likely inform future discussions about responsible AI governance. The next key steps are the establishment of the new office within the Department of Financial Services and the development of detailed reporting guidelines. The timing of these developments, and the potential for legal challenges from the federal government, remain uncertain.