NIST Seeks Comments on Draft AI Guidance Documents, Announces Launch of New Program to Evaluate and Measure GenAI Technologies

The National Institute of Standards and Technology (NIST) has released four draft publications intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems. All are part of the agency’s response to Executive Order 14110 on the Safe, Secure and Trustworthy Development of AI. Comments on each draft are requested by June 2, 2024. NIST has also launched a challenge series that will support development of methods to distinguish between content produced by humans and content produced by AI.

The publications cover varied aspects of AI technology: The first two are guidance documents designed to help manage the risks of generative AI — the technology that enables chatbots and text-based image and video creation tools — and serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF), respectively. A third offers approaches for promoting transparency in digital content, which AI can generate or alter; the fourth proposes a plan for global engagement in the development of AI standards.

  • NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
  • NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models
  • NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency
  • NIST AI 100-5, A Plan for Global Engagement on AI Standards

Drafts of NIST AI 600-1, NIST AI 100-5 and NIST AI 100-4 are available for review and comment on the NIST Artificial Intelligence Resource Center website, and the draft of NIST SP 800-218A is available for review and comment on the NIST Computer Security Resource Center website.

NIST GenAI Challenge

In addition to the four documents, NIST is announcing the NIST GenAI Challenge, a new program to evaluate and measure generative AI technologies. The program is part of NIST’s response to the Executive Order, and its efforts will help inform the work of the U.S. AI Safety Institute at NIST.

The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies. These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content. One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording. Registration opens in May for participation in the pilot evaluation, which will seek to understand how human-produced content differs from synthetic content. More information about the challenge and how to register can be found on the NIST GenAI website.