
NIST AI Risk Management Framework

The NIST AI Risk Management Framework (AI RMF) is a voluntary set of guidelines and best practices designed to help organizations manage the risks associated with artificial intelligence (AI) systems. It aims to improve the trustworthiness, fairness, transparency, and accountability of AI technologies.

Request a demo of Secureframe Custom Frameworks

Definition and purpose

The primary purpose of the NIST AI RMF is to provide a structured approach to identifying, assessing, managing, and mitigating the risks associated with AI systems. The framework seeks to promote the responsible development and deployment of AI technologies, ensuring they are reliable, safe, and aligned with ethical standards.

Governing body

The NIST AI RMF is developed and maintained by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce.

Last updated

The NIST AI RMF is a relatively new initiative: its first version, AI RMF 1.0, was released in January 2023.

Applies to

The NIST AI RMF applies to a wide range of industries and organizations involved in the development, deployment, and use of AI systems. This includes sectors such as healthcare, finance, transportation, manufacturing, and public services, among others.

Controls and requirements

The NIST AI RMF is organized around four core functions for managing AI risks effectively:

  1. Govern: Establish organizational policies, procedures, and structures to manage AI risks.
  2. Map: Identify and contextualize AI systems and associated risks.
  3. Measure: Analyze and assess AI system performance and risk.
  4. Manage: Implement and monitor controls to mitigate AI risks.

For a complete list of controls and requirements, please refer to the official NIST AI RMF documentation.
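As a rough illustration of how these four functions might translate into day-to-day tracking, the sketch below models a minimal AI risk register in Python. The function names come from the AI RMF itself; the data structure, field names, and example activities are hypothetical assumptions, not anything the framework prescribes.

from dataclasses import dataclass, field
from enum import Enum


class CoreFunction(Enum):
    """The four core functions defined by the NIST AI RMF."""
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


@dataclass
class AIRiskEntry:
    """One tracked activity for an AI system under a core function.

    These fields are illustrative; the AI RMF does not prescribe
    a specific record format.
    """
    system_name: str
    function: CoreFunction
    activity: str
    status: str = "not_started"  # e.g. not_started / in_progress / complete


@dataclass
class AIRiskRegister:
    entries: list[AIRiskEntry] = field(default_factory=list)

    def add(self, entry: AIRiskEntry) -> None:
        self.entries.append(entry)

    def open_items(self, function: CoreFunction) -> list[AIRiskEntry]:
        """Return entries for a given function that are not yet complete."""
        return [e for e in self.entries
                if e.function is function and e.status != "complete"]


# Hypothetical usage for an example AI system.
register = AIRiskRegister()
register.add(AIRiskEntry("loan-approval-model", CoreFunction.GOVERN,
                         "Adopt an AI risk policy approved by leadership"))
register.add(AIRiskEntry("loan-approval-model", CoreFunction.MEASURE,
                         "Evaluate model fairness across demographic groups",
                         status="in_progress"))

for item in register.open_items(CoreFunction.MEASURE):
    print(f"{item.system_name}: {item.activity} [{item.status}]")

A register like this makes it easy to answer questions such as which Measure activities remain open for a given system, which mirrors the kind of ongoing monitoring the Manage function calls for.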

Audit type, frequency, and duration

Because the AI RMF is a voluntary framework, there is no formal certification audit. Organizations typically assess their alignment through internal assessments, third-party evaluations, or both. These reviews examine how the organization has implemented the framework's guidelines and best practices. Audit frequency varies with organizational policies, regulatory requirements, and the criticality of the AI systems involved; regular assessments, such as annual reviews, are commonly recommended.

The duration of an audit depends on the size and complexity of the organization, the scope of the AI systems being evaluated, and the depth of the audit. It can range from a few days to several weeks.

Get compliant using Secureframe Custom Frameworks

Request a demo