U.S. Announces Director of New AI Safety Body

February 7, 2024 5:00 AM EST
Elizabeth Kelly, formerly an economic policy adviser to President Joe Biden, has been named as director of the newly formed U.S. Artificial Intelligence Safety Institute (USAISI), U.S. Commerce Secretary Gina Raimondo announced Wednesday.
“For the United States to lead the world in the development of safe, responsible AI, we need the brightest minds at the helm,” said Raimondo. “Thanks to President Biden’s leadership, we’re in a position of power to meet the challenges posed by AI, while fostering America’s greatest strength: innovation.”
Kelly previously contributed to the Biden Administration’s efforts to regulate AI through the AI Executive Order; an Administration official tells TIME she was involved in its development from the beginning.
Kelly was “a driving force behind the domestic components of the AI executive order, spearheading efforts to promote competition, protect privacy, and support workers and consumers, and helped lead Administration engagement with allies and partners on AI governance,” according to a press release announcing her appointment.
Previously, Kelly was special assistant to the President for economic policy at the White House National Economic Council. She served on the Biden transition team and in the Obama administration, and earlier worked in banking.
The USAISI, which was set up by the White House late last year to develop AI safety tests for use by regulators, is just one of many ways that policymakers in the U.S. and around the world, alarmed by the technology’s rapid progress, have tried to mitigate the risks it poses. In the U.S., the Biden Administration’s AI Executive Order sought to tackle a range of issues related to AI, including its impact on civil rights and the lack of government uptake of the technology. It also requires AI companies developing the largest and most powerful AI models to report the results of any safety tests they carry out. AI policy issues remain a priority for the President and his team, according to an Administration official.
Vice President Kamala Harris announced the creation of the USAISI in November as part of her visit to the U.K. for the first global AI Safety Summit. Speaking at the U.S. embassy in London, Harris said at the time that the USAISI would “create rigorous standards to test the safety of AI models for public use.”
The U.K. announced the establishment of its own AI safety institute one day later. Speaking at the AI Safety Summit, British Prime Minister Rishi Sunak said that “until now the only people testing the safety of new AI models have been the very companies developing it,” and that the newly formed U.S. and U.K. AI safety testers would remedy this by providing a “public sector capability to test the most advanced frontier models.”
Japan also announced that it would establish its own AI safety institute in December 2023.
“The Safety Institute’s ambitious mandate to develop guidelines, evaluate models, and pursue fundamental research will be vital to addressing the risks and seizing the opportunities of AI,” said Kelly, in a press release announcing her appointment. “I am thrilled to work with the talented NIST team and the broader AI community to advance our scientific understanding and foster AI safety. While our first priority will be executing the tasks assigned to NIST in President Biden’s executive order, I look forward to building the Institute as a long-term asset for the country and the world.”
Elham Tabassi, previously associate director for emerging technologies in the National Institute of Standards and Technology’s (NIST) Information Technology Laboratory, will serve as USAISI chief technology officer. (The USAISI was set up within NIST.) Kelly will provide executive leadership and management of the institute, while Tabassi will lead its technical programs, according to the press release.
Earlier this month in Brussels, lawmakers agreed on the final text of the E.U. AI Act, the bloc’s landmark AI regulation. Again, the rules seek to address a large number of AI-related issues, and include a safety testing requirement for the most powerful AI models.