NIST: Generative AI Profile (AI 600-1) and AI Risk Management Framework (AI RMF)

We contributed input to the National Institute of Standards and Technology (NIST) framework “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (NIST AI 600-1).

NIST AI 600-1 is a companion resource to the NIST AI Risk Management Framework (AI RMF), specifically focused on generative AI (GAI). It aims to help organizations identify, manage, and mitigate the unique risks associated with GAI, such as cybersecurity vulnerabilities, misinformation, and ethical concerns. The document outlines 12 key risks and provides over 200 actions that developers can take to address them, mapped to the AI RMF. 

Related

NIST AI Risk Management Framework (AI RMF 1.0)

The NIST AI Risk Management Framework (AI RMF 1.0), released January 26, 2023, provides voluntary guidance for incorporating trustworthiness considerations into AI system design, development, use, and evaluation. Developed through consensus-driven processes involving over 240 contributing organizations from private industry, academia, civil society, and government across an 18-month period, the framework establishes systematic approaches for managing AI-associated risks to individuals, organizations, and society.

The framework operates through four core functions addressing distinct phases of AI lifecycle management. The GOVERN function applies across all organizational AI risk management processes, establishing policies, procedures, and organizational structures for AI governance. MAP, MEASURE, and MANAGE functions target system-specific contexts and lifecycle stages, providing operational guidance for risk identification, assessment, and mitigation strategies.


Core Framework Structure:

  • GOVERN: Organizational policies and procedures for AI risk management

  • MAP: Risk identification and context establishment for AI systems

  • MEASURE: Performance evaluation and trustworthiness assessment methodologies

  • MANAGE: Risk prioritization and mitigation implementation strategies
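As an illustration only, the four-function structure can be modeled as a small taxonomy. The sketch below is a hypothetical data model, not an official NIST schema; the subcategory text is paraphrased for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class CoreFunction(Enum):
    """The four AI RMF core functions, with short role descriptions."""
    GOVERN = "Organizational policies and procedures"
    MAP = "Risk identification and context establishment"
    MEASURE = "Performance and trustworthiness assessment"
    MANAGE = "Risk prioritization and mitigation"

@dataclass
class Subcategory:
    """One outcome under a core function (identifier format illustrative)."""
    function: CoreFunction
    identifier: str
    description: str

# Illustrative subcategory an organization might track under GOVERN.
gv_1_1 = Subcategory(
    function=CoreFunction.GOVERN,
    identifier="GOVERN 1.1",
    description="Legal and regulatory requirements are understood and managed.",
)

print(gv_1_1.identifier, "-", gv_1_1.description)
```

GOVERN sits above the other three functions in the framework, which is why a model like this would typically attach organization-wide policies to `CoreFunction.GOVERN` while tying MAP, MEASURE, and MANAGE records to specific systems.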

The framework incorporates flexibility mechanisms enabling adaptation to organizational scale, use cases, and risk profiles, relying on voluntary adoption rather than regulatory mandates. NIST plans a formal review by 2028, potentially producing Version 2.0, with semi-annual updates that incorporate community feedback through iterative development.

Generative AI Profile (NIST AI 600-1)

Released July 26, 2024, NIST AI 600-1 serves as a cross-sectoral companion resource to the AI RMF, specifically addressing generative artificial intelligence systems pursuant to President Biden's Executive Order 14110. The guidance centers on a list of 12 risks and over 200 suggested actions that developers and other AI actors can take to manage them, providing specialized guidance for organizations deploying generative AI technologies across all industry sectors.

The 12 risks include a lowered barrier to entry for cybersecurity attacks; the production of mis- and disinformation, hate speech, and other harmful content; and inaccurate or misleading outputs, a phenomenon the profile terms “confabulation.” The profile addresses technical, operational, and societal concerns ranging from cybersecurity vulnerabilities to content authenticity challenges, with actions mapped directly to the AI RMF framework structure.


Key Generative AI Risk Categories:

  • Cybersecurity attack barriers and vulnerabilities

  • Confabulation (hallucinations) producing false or misleading content

  • Dangerous, violent, or hateful content generation and exposure

  • Data privacy impacts through unauthorized disclosure or de-anonymization

  • Harmful bias and homogenization from non-representative training data

  • Human-AI configuration challenges affecting system reliability

  • Intellectual property violations and unauthorized content reproduction

  • Information integrity compromises affecting decision-making processes

  • Obscene, degrading, or abusive content generation risks

  • Environmental sustainability impacts from computational resource consumption

  • Value chain integration vulnerabilities across development and deployment phases

  • Information security gaps in model protection and data handling

The profile emphasizes risk evaluation methodologies, continuous testing requirements, and multi-stakeholder feedback integration throughout development and deployment processes. After describing each risk, the document presents a matrix of suggested actions that AI actors can take to mitigate it, mapped to the AI RMF.
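Concretely, the risk-to-action matrix can be thought of as a table keyed by risk, with each action carrying its AI RMF subcategory mapping. The sketch below is a minimal, hypothetical illustration; the identifiers and action text are placeholders, not quotations from the profile:

```python
from collections import defaultdict

# Each suggested action in AI 600-1 addresses a GAI risk and is mapped to an
# AI RMF subcategory. All identifiers and descriptions here are illustrative.
actions = [
    {"id": "A-001", "risk": "Confabulation", "rmf": "GOVERN 1.3",
     "action": "Document acceptable-accuracy thresholds for generated output."},
    {"id": "A-002", "risk": "Confabulation", "rmf": "MEASURE 2.5",
     "action": "Evaluate factual consistency on a held-out benchmark."},
    {"id": "A-003", "risk": "Information Security", "rmf": "MANAGE 3.1",
     "action": "Restrict and log access to model weights and training data."},
]

# Build the matrix the way the profile presents it: one block of suggested
# actions per risk, each tagged with its AI RMF subcategory.
matrix = defaultdict(list)
for a in actions:
    matrix[a["risk"]].append(a)

# An organization triaging a single risk pulls only its slice of the matrix.
for a in matrix["Confabulation"]:
    print(a["rmf"], "->", a["action"])
```

Structuring the actions this way is what lets an adopting organization filter the 200-plus suggestions down to the risks and RMF functions relevant to its own deployment.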

Related NIST Security Frameworks

NIST SP 800-53 Security Controls Integration

NIST SP 800-53 Revision 5 provides comprehensive security and privacy controls for federal information systems, organizing safeguards into 20 control families across low, moderate, and high impact levels. The framework addresses access control, audit and accountability, configuration management, incident response, and system communication protection requirements applicable to AI system security implementations.

AI system implementations require integration with established cybersecurity controls addressing unique characteristics of machine learning systems. Control families encompass personnel security, physical and environmental protection, planning strategies, program management, risk assessment, and security assessment authorization processes. Organizations deploying AI systems must align algorithmic risk management with traditional information security control structures.

Secure Software Development Framework Extensions

NIST Special Publication 800-218A augments the Secure Software Development Framework (SSDF) specifically for AI model development, addressing unique challenges including malicious training data impacts and blurred boundaries between system code and data. The companion resource supports Executive Order 14110 requirements for secure generative AI and dual-use foundation model development practices.

The framework recommends organizational, model-level, and software programming security measures, establishing robust vulnerability assessment and response systems addressing training data integrity and model security throughout development lifecycles.

Implementation Metrics and Standards Integration

The generative AI profile delivers 211 specific actions across 12 risk categories, enabling customized implementation based on organizational requirements and risk tolerance. The AI RMF Playbook provides actionable guidance, with community comments reviewed and integrated on a semi-annual basis, supporting framework adoption across diverse operational contexts.

Recent developments include the U.S. AI Safety Institute Consortium’s work on voluntary reporting approaches, with key findings on how organizations can share risk management data through a Voluntary Reporting Template (VRT) for the NIST AI Risk Management Framework Generative AI Profile. The consortium comprises more than 290 member companies and organizations working across five key areas including generative AI risk management, synthetic content, evaluations, red-teaming, and model safety and security.

NIST coordinates alignment with international standards including ISO/IEC 5338, ISO/IEC 38507, ISO/IEC 22989, ISO/IEC 24028, ISO/IEC 42001, and ISO/IEC 42005 through federal AI standards coordination. Future development priorities target human-AI configuration, explainability methodologies, and framework effectiveness measurement approaches, encompassing case study development and sector-specific implementation resources.

The NIST AI framework ecosystem establishes systematic risk management infrastructure across organizational contexts, providing scalable technology governance approaches while maintaining innovation-supportive implementation flexibility.

• • •

References

¹ National Institute of Standards and Technology. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." NIST AI 100-1. January 26, 2023.

² National Institute of Standards and Technology. "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (AI 600-1)." NIST AI 600-1. July 26, 2024.

³ National Institute of Standards and Technology. "Security and Privacy Controls for Information Systems and Organizations." NIST Special Publication 800-53, Revision 5. September 2020.

⁴ National Institute of Standards and Technology. "Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile." NIST Special Publication 800-218A. July 2024.

⁵ National Institute of Standards and Technology. "AI RMF Playbook." 2024.

⁶ U.S. AI Safety Institute Consortium. "Voluntary Reporting Template for the NIST AI Risk Management Framework Generative AI Profile." 2025.

⁷ International Organization for Standardization. "ISO/IEC 42001: Information technology — Artificial intelligence — Management system." 2023.

⁸ International Organization for Standardization. "ISO/IEC 22989: Information technology — Artificial intelligence — Concepts and terminology." 2022.

⁹ International Organization for Standardization. "ISO/IEC 24028: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence." 2021.