OpenAI and Anthropic, two prominent players in the AI sector, have entered into agreements with the U.S. government allowing their AI models to be used for research, testing, and evaluation, the National Institute of Standards and Technology (NIST) announced on Thursday.
Under these agreements, the U.S. AI Safety Institute will have early access to the companies’ significant new models both before and after their public release, according to a NIST press release.
The agreements aim to enhance research into AI's capabilities and risks and to identify effective safety measures. The AI Safety Institute will also provide feedback to OpenAI and Anthropic on potential safety improvements to their models.
Elizabeth Kelly, Director of the U.S. AI Safety Institute, emphasized the importance of safety in advancing AI technology. "These agreements mark a significant step forward in our collaborative efforts to advance AI safety science," Kelly stated. She expressed enthusiasm about starting technical collaborations with both companies to promote responsible AI development.
The announcement comes amid heightened government and legislative scrutiny of AI model safety.
The AI Safety Institute, part of the Department of Commerce, was established last year following President Biden's comprehensive executive order on AI safety, risk mitigation, and data privacy. Kelly, who previously served as an economic policy adviser to President Biden, leads the institute.
OpenAI, known for developing ChatGPT, is also a member of the AI Safety Institute Consortium, which includes tech giants such as Microsoft, Alphabet's Google, Apple, and Meta Platforms, alongside various government and academic entities.