Understanding the NIST AI RMF: What It Is and How to Put It Into Practice

May 21, 2024

Author: Emily Bonnie, Senior Content Marketing Manager

Reviewer: Fortuna Gyeltsen, Product Team

From automating manual tasks to powering advanced threat response, artificial intelligence is not just an addition to the cybersecurity toolkit — it's quickly becoming the backbone of a new era in digital defense. But while AI brings significant opportunities for efficiency, it also introduces a range of risks for organizations to navigate.

In response to the threats and opportunities posed by AI technologies, the National Institute of Standards and Technology (NIST) developed the Artificial Intelligence Risk Management Framework (AI RMF). The framework is meant to assist organizations in designing, developing, deploying, and using artificial intelligence systems responsibly and ethically.

In this article, we'll walk through the different aspects of the NIST AI RMF, introduce its companion resources (the NIST AI RMF Playbook and the Generative AI Profile), and explain how organizations can apply the framework to guide their use of AI systems.

Understanding the NIST AI Risk Management Framework

The risks posed by AI systems are unlike the risks posed by traditional software systems. Not only are these risks unique and complex, but they can have far-reaching and significant consequences for individuals, organizations, communities, and civil society. 

For example, AI models can be trained on data sets that change over time, affecting functionality in ways that are difficult to understand. And because AI systems so often require or respond to human input, they're also influenced by human behavior and social dynamics.

AI system functionality can be influenced significantly by factors like:

  • How the system is used
  • Whether it interacts with other AI systems
  • Who operates it
  • The social or organizational context in which it's deployed

A proper understanding of these risks helps organizations account for the limitations of AI technologies, improving performance, trustworthiness, and the likelihood that AI will be used in beneficial ways. The NIST AI RMF offers a structured way for organizations to gain that understanding.

First published in January 2023, NIST AI RMF 1.0 is a voluntary framework to help organizations across industries and sectors design, develop, deploy, and use AI systems responsibly and ethically. It offers guidelines for the development and deployment of AI technologies so that society can benefit from the many opportunities presented by AI while protecting against potential harm.

Benefits of implementing the NIST AI RMF

Many aspects of artificial intelligence are in flux, from the technologies and algorithms themselves to the ways organizations and individuals perceive and use them. NIST acknowledges that the AI RMF is a new framework whose practical effectiveness has yet to be fully assessed.

That said, the framework is the result of years of collaboration among industry experts, and organizations that implement the AI RMF are expected to see a range of benefits:

  • Improved processes for AI governance, including mapping, measuring, and managing AI risk
  • An organizational culture that better understands AI system risks and their potential impact, as well as prioritizes the identification and management of AI risks
  • Improved awareness of the relationships between AI trustworthiness characteristics, socio-technical approaches, and AI risks
  • Defined processes for deciding when to commission/deploy AI systems — and when not to
  • Defined processes, policies, and practices for improving organizational accountability regarding AI system risks
  • Enhanced information sharing within and across organizations regarding AI risks
  • Improved contextual knowledge and awareness of downstream risks
  • Stronger engagement and community among “AI actors” (NIST’s catchall term for the individuals and organizations involved in or impacted by AI systems, such as developers, deployers, end users, affected third parties, and regulators/policymakers)
  • Better capacity for the testing, evaluation, verification, and validation (TEVV) of AI systems and risks

Defining the characteristics of trustworthy AI

The purpose of the NIST Artificial Intelligence Risk Management Framework is to help organizations develop and deploy “trustworthy AI systems” — AI that is ethical, reliable, and aligns with societal values and norms.

According to the NIST AI RMF, trustworthy AI has the following characteristics:

  • Valid and reliable: AI systems should perform consistently under a variety of conditions and be resilient to attacks, failures, and other disruptions. They should maintain their functionality and provide dependable outputs even in changing or challenging environments.
  • Safe, secure, and resilient: AI systems must incorporate robust security measures to prevent unauthorized access and manipulation. They should be safe for use, minimizing risks to individuals and communities and ensuring that they do not cause unintended harm.
  • Accountable and transparent: AI systems must have mechanisms in place that ensure accountability for the outcomes of AI decisions. This includes clear documentation of decision-making processes and identifiable parties responsible for the design, deployment, and operation of AI systems.
  • Explainable and interpretable: There should be transparency in how AI systems operate, with explainability of decisions built into the process. Stakeholders should be able to understand how and why an AI system has reached a particular decision, allowing for easier assessment and trust in the system.
  • Privacy-enhanced: AI systems must be designed to respect user privacy, implementing data protection measures that prevent misuse of personal information. They should comply with relevant data protection laws and privacy regulations.
  • Fair, with harmful bias managed: Efforts should be made to minimize biases in AI decision-making processes. AI systems should be designed and monitored to ensure fair treatment of all individuals, without discrimination.

These characteristics align with the broader principles of ethical AI and are designed to enhance trust among users, developers, deployers, and the broader public affected by AI systems. By incorporating these characteristics, organizations can ensure that their AI systems are not only effective and efficient but also developed with a focus on human welfare, augmenting rather than replacing human decision-making.

The NIST AI RMF Core Functions

Developing and using AI systems responsibly requires proper controls to mitigate and manage undesirable outcomes. The core of the framework describes four functions designed to help organizations address the risks of AI systems: govern, map, measure, and manage.

Govern

This core function focuses on establishing and maintaining governance structures that ensure effective oversight and accountability for AI systems. It includes defining clear roles and responsibilities, setting objectives for AI governance, and ensuring compliance with relevant laws and regulations. It emphasizes the importance of ethical considerations and stakeholder engagement in the governance process.

For example:

  • Does the organization understand its legal and regulatory obligations regarding AI? 
  • Does the organization take steps to ensure the characteristics of trustworthy AI are integrated into those policies and processes? 
  • Are there formal risk management and risk assessment processes in place, with defined roles and responsibilities? Are they periodically reviewed and updated? 
  • How are risks documented and communicated?
  • Are AI systems inventoried, and is there a process for decommissioning AI systems safely? (A minimal inventory record is sketched below.)
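
To make the inventory question concrete, here's a minimal sketch (in Python) of what one inventory record and an overdue-review check could look like. The field names, the 180-day review window, and the example system are illustrative assumptions; the AI RMF doesn't prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

# A sketch of one entry in an AI system inventory. All field names and the
# 180-day review window are illustrative assumptions, not NIST requirements.
@dataclass
class AISystemRecord:
    name: str
    owner: str                        # role accountable for the system
    purpose: str
    regulations: list[str] = field(default_factory=list)
    last_risk_review: date = date(2024, 1, 1)
    status: str = "active"            # e.g., active, suspended, decommissioned

def review_overdue(record: AISystemRecord, max_days: int = 180) -> bool:
    """Flag systems whose periodic risk review is overdue."""
    return (date.today() - record.last_risk_review).days > max_days

inventory = [
    AISystemRecord(
        name="support-ticket-triage",
        owner="ML Platform Team",
        purpose="Route inbound support tickets to the right queue",
        regulations=["GDPR"],
        last_risk_review=date(2024, 1, 15),
    ),
]

for system in inventory:
    if review_overdue(system):
        print(f"{system.name}: risk review overdue; escalate to {system.owner}")
```

Even a simple record like this answers several Govern questions at once: who is accountable, which regulations apply, and when the risk documentation was last reviewed.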

Map

The mapping core function involves understanding and assessing the AI ecosystem, including data sources, AI models, and the environment in which the AI operates. It requires identifying and documenting the flow of data, the functionality of AI systems, and their interactions with users and other systems. This function is crucial for assessing potential risks and vulnerabilities in AI operations.

For example:

  • Does the organization fully understand how the AI system will be deployed by various users? What are the potential positive and negative impacts of the system? 
  • Are socio-technical implications taken into account when addressing AI risks?
  • Does the organization have a defined mission and goals for AI technology, and are they documented and communicated? 
  • Does the organization have a defined and documented AI risk tolerance? (See the sketch after this list.)
  • Are there defined and documented processes for human oversight?
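
Here's a similarly minimal sketch of documenting a system's deployment context against a defined risk tolerance. The ordinal risk scale, field names, and example values are assumptions for illustration, not an AI RMF requirement.

```python
# A sketch of a documented deployment context with an ordinal risk-tolerance
# scale. The scale, field names, and example values are illustrative
# assumptions; the AI RMF does not prescribe a specific format.
RISK_LEVELS = ["low", "moderate", "high", "critical"]

system_context = {
    "intended_use": "Summarize internal meeting notes",
    "users": ["employees"],
    "affected_parties": ["employees", "customers mentioned in notes"],
    "potential_negative_impacts": ["fabricated action items", "privacy leakage"],
    "risk_tolerance": "moderate",   # highest residual risk the org will accept
}

def within_tolerance(assessed_level: str, context: dict) -> bool:
    """Check an assessed risk level against the documented tolerance."""
    return RISK_LEVELS.index(assessed_level) <= RISK_LEVELS.index(context["risk_tolerance"])

print(within_tolerance("high", system_context))   # False: exceeds tolerance
```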

Measure

This function aims to assess the performance of AI systems and their compliance with defined objectives and requirements. It involves establishing metrics and benchmarks for AI performance, security, and reliability. Measurement also includes monitoring the ongoing operation of AI systems to detect and respond to deviations from expected performance or behavior.

For example:

  • How is AI system performance measured? What test sets or metrics are used? Do those metrics include feedback from end users and impacted communities? (An example follows this list.)
  • Is the AI system regularly evaluated for safety risks? How is performance monitored?
  • Is the AI model explained, validated, and documented so that it can be used responsibly?
  • Are risks to privacy, fairness and bias, and the environment documented? 
  • Are there processes and personnel in place to identify and track existing, new, and/or unanticipated AI risks? 
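
One widely used measurement practice is disaggregated evaluation: computing performance per user group rather than only in aggregate, so that uneven impacts surface. Here's a minimal sketch with made-up data; the groups, labels, and accuracy metric are placeholders for whatever fits your system.

```python
from collections import defaultdict

# A sketch of disaggregated evaluation: overall accuracy can mask uneven
# performance across user groups, so accuracy is also computed per group.
# The predictions, labels, and group assignments below are made-up data.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
labels      = [1, 0, 0, 1, 0, 1, 1, 1]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

correct, total = defaultdict(int), defaultdict(int)
for pred, label, group in zip(predictions, labels, groups):
    total[group] += 1
    if pred == label:
        correct[group] += 1

print(f"overall accuracy: {sum(correct.values()) / len(labels):.2f}")  # 0.62
for group in sorted(total):
    print(f"group {group}: accuracy {correct[group] / total[group]:.2f}")
# group a: 0.75, group b: 0.50 -- a gap the aggregate number hides
```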

Manage

The management function is about implementing strategies to address risks identified in the mapping and measuring functions. It includes developing risk response plans, implementing controls to mitigate identified risks, and ensuring continuous improvement of AI systems through updates and refinements. This function also covers incident response and recovery strategies to handle potential AI failures or breaches.

For example:

  • Are documented AI risks prioritized based on impact, likelihood, and available resources? (A scoring sketch follows this list.)
  • Are there documented risk responses to high-priority AI risks (i.e., mitigating, transferring, avoiding, or accepting)? Are unmitigated downstream risks documented and communicated?
  • Are there mechanisms in place to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use? 
  • Are incidents and errors communicated to relevant AI actors and affected communities?
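
A simple way to operationalize that first question is to score each documented risk by impact and likelihood and sort. The sketch below assumes a 1-to-5 scale for both factors and made-up risk entries; a real program would tie these scores to the organization's documented risk tolerance.

```python
# A sketch of risk prioritization: score each documented risk as
# impact x likelihood (both on an assumed 1-5 scale) and address the
# highest-scoring risks first. The entries below are illustrative.
risks = [
    {"id": "R-01", "summary": "Sensitive data exposed in model output",
     "impact": 5, "likelihood": 2, "response": "mitigate"},
    {"id": "R-02", "summary": "Biased outcomes for some user groups",
     "impact": 4, "likelihood": 3, "response": "mitigate"},
    {"id": "R-03", "summary": "Model drift degrades accuracy over time",
     "impact": 2, "likelihood": 4, "response": "accept"},
]

def score(risk: dict) -> int:
    return risk["impact"] * risk["likelihood"]

for risk in sorted(risks, key=score, reverse=True):
    print(f'{risk["id"]} (score {score(risk)}, {risk["response"]}): {risk["summary"]}')
```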

Under the AI RMF, risk management is not a one-time occurrence but an ongoing process that’s performed throughout the AI system lifecycle. These four functions are designed to be adaptable and applicable across different types of organizations and AI applications, providing a structured, holistic, and ongoing approach to managing AI risks.

NIST released a roadmap that outlines future activities for evaluating the effectiveness of the AI RMF, stressing that improving and iterating on the framework is a collaborative process that requires ongoing feedback from organizations as well as AI actors and stakeholders.

The AI RMF is expected to be formally reviewed and revised no later than 2028.

Guiding Your Organization's AI Strategy and Implementation

Follow these best practices to effectively implement AI while addressing concerns related to transparency, privacy, and security.

The NIST approach to building strong AI governance

The NIST AI RMF places a significant emphasis on AI governance as a foundational element for managing the deployment and operation of AI systems. To establish and maintain strong governance practices, it recommends activities such as defining clear roles and responsibilities, integrating the characteristics of trustworthy AI into organizational policies and processes, maintaining an inventory of AI systems, and engaging stakeholders throughout the AI lifecycle.

Overall, the NIST AI RMF underscores the idea that effective AI governance is dynamic, evolving as new challenges and insights arise in the field of AI.

Putting it into practice: the NIST AI RMF Playbook

The NIST AI RMF Playbook is a supplemental resource designed to help organizations navigate the AI RMF and put it into practice. It’s not a checklist or a set of hard requirements but rather suggested actions organizations can take to guide their implementation of the framework and help them get the most out of it.

The Playbook is broken into sections for each of the four core functions and their subcategories.

For example, the section on Govern 1.1 gives a detailed explanation of the principle, “Legal and regulatory requirements involving AI are understood, managed, and documented.” It provides deeper context on why this principle matters and what makes it challenging, such as complex regulatory requirements, disparate AI system testing processes for bias, and differing user experiences (such as users with disabilities or impairments that could affect their use of the AI system).

The Playbook then offers a series of Suggested Actions, such as establishing specific roles for monitoring the legal and regulatory landscape, aligning risk management efforts with applicable requirements, and conducting ongoing training to ensure personnel stay up-to-date with regulations that could impact the design, development, and deployment of AI.

In addition to Suggested Actions, the Playbook offers guidance on specific documentation organizations can maintain and shares applicable resources. For example:

  • Has the organization identified and clearly documented all the legal rules and minimum standards it needs to follow?
  • Has there been a thorough check to ensure that the system meets all the relevant laws, regulations, standards, and guidelines?

In total, the Playbook includes over 140 pages of guidance and additional resources to help organizations put the AI RMF into practice. The AI RMF document and the Playbook are both part of the NIST Trustworthy and Responsible AI Resource Center. 

The NIST AI RMF Generative AI Profile

In addition to the Playbook, NIST released the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile in April 2024 as a companion document to the AI RMF. It helps organizations understand the unique risks associated with generative AI (GAI) technologies and offers recommended actions to mitigate those risks.

The NIST Generative AI Public Working Group (GAI PWG) identified 12 key risks of generative AI:

  1. CBRN Information: Easier access to dangerous information about chemical, biological, radiological, or nuclear weapons.
  2. Confabulation: The creation of confidently stated but incorrect or false information.
  3. Dangerous or Violent Recommendations: Easier creation and access to content that encourages violence, radical views, self-harm, or illegal activities.
  4. Data Privacy: Risks of leaking sensitive information like biometric, health, or personal data.
  5. Environmental: High use of resources in training AI models that could harm the environment.
  6. Human-AI Configuration: Problems arising from how humans and AI systems work together, such as relying too much on AI, misaligned goals, or deceptive AI behaviors.
  7. Information Integrity: Easier generation and spread of unverified content that might be used for misinformation or disinformation campaigns.
  8. Information Security: Lowered barriers for cyberattacks, such as hacking and malware, through AI.
  9. Intellectual Property: Easier creation and use of content that may infringe on copyrights or trademarks.
  10. Obscene, Degrading, and/or Abusive Content: Easier creation and access to harmful or abusive imagery, including illegal content.
  11. Toxicity, Bias, and Homogenization: Challenges in controlling exposure to harmful or biased content, and issues with data diversity affecting AI performance.
  12. Value Chain and Component Integration: Issues with non-transparent integration of third-party components in AI systems, including data quality and supplier vetting problems.

The Generative AI Profile document outlines specific actions organizations can take to manage these GAI risks, organized by the AI RMF core functions. Each action is assigned an action ID corresponding to the specific function and subcategory. For example:

GV-1.1-001 is the first action that corresponds with Govern 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented. The action is: Align GAI use with applicable laws and policies, including those related to data privacy and the use, publication, or distribution of licensed, patented, trademarked, copyrighted, or trade secret material.

GV-1.2-001 is the first action that corresponds with Govern 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. The action is: Connect new GAI policies, procedures, and processes to existing models, data, and IT governance and to legal, compliance, and risk functions.
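
If you track these actions in your own tooling, the ID format is straightforward to parse. The sketch below follows the profile's naming pattern, in which GV, MP, MS, and MG abbreviate the four core functions; the parser itself is a hypothetical helper, not part of any NIST resource.

```python
import re

# A sketch that splits a Generative AI Profile action ID into its parts,
# e.g., GV-1.1-001 -> Govern, subcategory 1.1, action 1. The parser is a
# hypothetical helper, not part of any official NIST tooling.
FUNCTIONS = {"GV": "Govern", "MP": "Map", "MS": "Measure", "MG": "Manage"}

def parse_action_id(action_id: str) -> dict:
    match = re.fullmatch(r"(GV|MP|MS|MG)-(\d+\.\d+)-(\d{3})", action_id)
    if match is None:
        raise ValueError(f"Unrecognized action ID: {action_id}")
    abbrev, subcategory, number = match.groups()
    return {
        "function": FUNCTIONS[abbrev],
        "subcategory": subcategory,
        "action_number": int(number),
    }

print(parse_action_id("GV-1.1-001"))
# {'function': 'Govern', 'subcategory': '1.1', 'action_number': 1}
```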

The Generative AI Profile spans more than 50 pages and includes 467 unique actions, making it a comprehensive companion to the AI RMF specifically for managing risks associated with GAI.

A faster, easier way to implement NIST AI RMF

Secureframe helps organizations achieve compliance with NIST AI RMF by providing tools and templates tailored to the framework.

  • 200+ integrations to your existing tech stack automate evidence collection against specific NIST AI RMF controls and tests
  • Policy and process templates, developed and verified by in-house experts and former auditors, tailored to the specifics of NIST AI RMF 
  • Continuous monitoring for real-time alerts of failing cloud tests, ensuring your systems consistently adhere to the requirements and controls associated with the NIST AI RMF
  • Risk management tools to easily identify and manage AI risks that might impact your compliance with NIST AI RMF

To learn more about implementing NIST AI RMF with Secureframe, reach out to schedule a demo with one of our experts.

FAQs

What is NIST?

NIST stands for the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce.

NIST's primary function is to conduct research and establish standards that foster innovation and improve security and performance in various industries, including technology, engineering, and manufacturing. It plays a key role in areas like cybersecurity standards, measurement science, and developing standards that support new technologies. NIST is also well-known for its work on the definition of basic units of measurement and for maintaining the standards that ensure the accuracy of these measurements in the United States.

What is the National Artificial Intelligence Initiative Act of 2020?

The National Artificial Intelligence Initiative Act of 2020, signed into law as part of the National Defense Authorization Act on January 1, 2021, is U.S. legislation aimed at coordinating and advancing national AI research and policy.

The Act mandates the establishment of a National Artificial Intelligence Initiative Office, which serves as the central hub for coordinating federal AI activities, as well as the creation of a National AI Advisory Committee, which is tasked with advising the President and the National AI Initiative Office. This committee comprises members from academia, industry, and federal laboratories.

How does NIST AI RMF define an AI system?

NIST AI RMF defines an AI system as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.”

How many actions are listed in the NIST Generative AI Profile?

The NIST Generative AI Profile includes 467 actions, each corresponding to a specific core function of the NIST AI Risk Management Framework.

What are the main functions of NIST AI RMF Core?

NIST AI RMF is centered on four core functions:

  • Govern: Intentionally cultivate a culture of AI risk management
  • Map: Identify and contextualize AI risks
  • Measure: Assess, analyze, and track AI risks
  • Manage: Prioritize and mitigate AI risks based on projected impact