OpenAI and Anthropic, two AI startups, have signed agreements with the United States government covering research, testing, and evaluation of their artificial intelligence models. The deals come as the companies face growing regulatory scrutiny over the safe and ethical use of AI, and as California legislators prepare to vote on a bill regulating the development and deployment of AI in the state. Under the agreements, the U.S. AI Safety Institute will have access to new models from both OpenAI and Anthropic, enabling collaborative research to evaluate their capabilities and associated risks.
The agreements are intended to support U.S. leadership in developing AI responsibly. Jason Kwon, chief strategy officer at OpenAI, said the institute has a critical role to play in defining that leadership and in shaping the future of AI development globally. The U.S. AI Safety Institute will also collaborate with the U.K. AI Safety Institute and will provide feedback to both companies on potential safety improvements to their models.
Anthropic, which is backed by Amazon and Alphabet, is also party to the agreements but has not yet commented on the deal. Elizabeth Kelly, director of the U.S. AI Safety Institute, described the agreements as just the beginning, calling them an important milestone in a collaborative effort to ensure the responsible use of AI. The institute operates under the U.S. Commerce Department's National Institute of Standards and Technology (NIST) and was established by an executive order from President Biden's administration to evaluate the risks associated with AI models.
Under the agreements, the U.S. AI Safety Institute will assess the safety and ethical implications of both companies' AI technologies, offer feedback, and collaborate on research to mitigate risks associated with their models. The partnerships are intended to set a standard for responsible AI development that can be adopted globally, and the institute sees its collaboration with the U.K. AI Safety Institute as a way to strengthen safety measures and promote ethical practices in AI development.
The partnerships between OpenAI, Anthropic, and the U.S. AI Safety Institute represent a significant step toward safeguarding the future of AI technology. By working with the institute to evaluate the risks and capabilities of their models, the companies are demonstrating a commitment to responsible AI development, while the cooperation between the U.S. and U.K. AI Safety Institutes underscores the importance of global collaboration in ensuring the safe and ethical use of artificial intelligence. Together, these agreements mark a milestone in shaping how this powerful technology is developed and stewarded.