Elizabeth Kelly, Director of the U.S. AI Safety Institute, detailed the nation’s ambitious plans to govern and harness artificial intelligence during a recent webinar. She stressed the importance of balancing innovation with safety in a fast-moving field.

“We’re building evaluations to test AI models before deployment,” Kelly said. “This creates new government capacity to directly assess frontier models.”

In a recording of an American Bar Association (ABA) event recently posted to YouTube, Kelly highlighted the Biden Administration's efforts to manage AI risks while fostering development. She played a key role in crafting last year's White House Executive Order on AI, which she called "the most extensive federal policy directive" on the technology.

“The Executive Order directs us to pull every lever at the federal government’s disposal to harness AI’s power for good while protecting people from its profound risks,” Kelly said.

The 100-page directive builds on earlier steps such as the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology's (NIST) AI Risk Management Framework. These initiatives aim to guide AI developers in managing risks and ensuring transparency.

Kelly acknowledged the Executive Order's aggressive timelines. By April, NIST had released four major draft documents for public comment, including the Generative AI Profile, which addresses risks posed by generative AI and offers risk management strategies.

The public comment period closed in June. Kelly said feedback, including input from UC Berkeley, was “extremely valuable” and that final versions are expected this summer.

Kelly also highlighted the role of the U.S. AI Safety Institute, a NIST initiative focused on advancing AI safety. The Institute operates under three pillars: testing AI models, issuing guidance, and conducting technical research.

The Institute’s Consortium includes over 280 organizations, ranging from tech developers to civil society groups. Its working groups tackle issues like generative AI risk management and synthetic content evaluation.

Kelly emphasized international partnerships as crucial to AI governance, citing collaborations with the UK’s AI Safety Institute and dialogues with the EU on its AI Act.

“AI is a global technology that requires global solutions,” she said. “We need to align safety policies and create interoperable standards.”

Looking Ahead
Kelly encouraged attendees to stay informed on AI advancements and global regulations. She pointed to the EU’s sweeping AI Act, which takes effect next year, as a potential game-changer for multinational companies.

“Beneficial AI depends on safety,” Kelly concluded. “And AI safety depends on rigorous science.”