April 15, 2024

AI Risk Management Framework from NIST - Understand current posture and create an effective roadmap for adoption of AI systems

In this blog, I will briefly outline the AI Risk Management Framework (AI RMF) and the associated playbook, and show how an organization can use the playbook in conjunction with our “Rationalization and Assessment Platform” to assess its current posture and maturity level.

NIST has created a framework and an associated playbook to help organizations better manage AI-related risks. As per NIST, “It aims to foster the development of innovative approaches to address characteristics of trustworthiness including valid and reliable, safe, secure, and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.”

The framework and the playbook are intended for anyone involved in the lifecycle of AI systems: those who design, develop, use, or evaluate AI technologies. It is a sector- and use-case-agnostic framework for managing AI risks.

The playbook includes suggested actions, references, and related guidance to achieve the outcomes for the main functions related to AI risk management. The suggestions were developed based on best practices (collected via workshops and comments from over 200 organizations globally).

The following diagram depicts the main functions in the AI RMF – Govern, Map, Measure and Manage.

More details can be found here - https://www.nist.gov/itl/ai-risk-management-framework/ai-risk-management-framework-faqs


AI RMF playbook

The RMF playbook (a spreadsheet) contains actionable suggestions across the main functions of the framework. Each function has several subfunctions, each with associated suggested actions.

Let’s use an example from the playbook to understand how the functions and suggested actions are laid out. The GOVERN function has several subfunctions; one of them, “Policies, processes, and procedures,” is referred to as GOVERN 1.2.

This subfunction highlights the usefulness of the associated policies, processes, and procedures. It explains that they are central components of effective AI risk management and fundamental to individual and organizational accountability. All stakeholders benefit from policies, processes, and procedures that require preventing harm by design and default.

Suggested Actions – there are 15 suggested actions for this subfunction. A few examples are listed below:

a) Define key terms and concepts related to AI systems and the scope of their purposes and intended uses.

b) Detail model testing and validation processes.

c) Establish whistleblower policies to facilitate reporting of serious AI system concerns.

d) Verify that formal AI risk management policies align with existing legal standards and industry best practices.

There are 72 subfunctions and approximately 300 specific actions, organized by function. At a high level, an organization can use the playbook in the following manner:

a) Review and identify organization specific sub functions and their associated suggested actions

b) Understand the current posture related to the suggested actions

c) Identify gaps and actions that need follow-up

d) Implement or act upon the actions

e) Implement a continuous tracking and governance mechanism and build a roadmap for adoption and use
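As an illustration, steps (a) through (c) amount to filtering the playbook's suggested actions down to the organization-relevant ones and flagging those not yet implemented. A minimal sketch follows; the action IDs and statuses are hypothetical, not taken from the playbook:

```python
# Hypothetical records: (action_id, relevant_to_org, implemented)
actions = [
    ("GOVERN 1.2-a", True, True),
    ("GOVERN 1.2-b", True, False),
    ("GOVERN 1.2-c", False, False),
    ("MAP 1.1-a", True, False),
]

# Step (a): keep only the actions relevant to this organization.
relevant = [a for a in actions if a[1]]

# Steps (b)-(c): the current posture is the implemented flag;
# anything relevant but not implemented is a gap to follow up on.
gaps = [action_id for (action_id, _, implemented) in relevant if not implemented]

print(gaps)  # actions that need follow-up
```

Steps (d) and (e) then become working through this gap list and re-running the review periodically as part of a governance cadence.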


Understand the current posture and assess your maturity level using the “Rationalization and Assessment” platform


The platform helps conduct an objective assessment, including the following:


a) Defines rules for objective assessments using the AHP (Analytic Hierarchy Process) methodology

b) Manages the assessment process and collects user responses via a self-service survey portal

c) Allows admins to manage a repository of questions, responses, rules, profiles, and insights/results

d) Helps create an effective roadmap by surfacing the current posture and gaps

APPMODZ has created an AI risk management assessment profile. This profile includes all the suggested actions from the NIST AI RMF playbook in the form of a set of questions and the corresponding potential responses. The profile also includes a set of rules that assign a numeric score to each response to a suggested action, along with a relative priority for the corresponding function. These scores are based on the AHP (Analytic Hierarchy Process) methodology for objective assessments. For example, a score in the range of 1 to 9 is assigned for a response, where 9 represents maximum maturity.
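To illustrate the scoring idea, a response on the 1–9 scale can be combined with a function's relative priority to produce a weighted score. The response wording, scores, and priorities below are hypothetical and not the actual profile's rules:

```python
# Hypothetical AHP-style scoring: each response maps to a maturity score
# from 1 (lowest) to 9 (highest), and each function carries a relative
# priority weight (weights sum to 1.0 across the four functions).

RESPONSE_SCORES = {
    "Not started": 1,
    "Ad hoc": 3,
    "Defined and documented": 6,
    "Fully implemented and monitored": 9,
}

FUNCTION_PRIORITY = {
    "GOVERN": 0.4,
    "MAP": 0.2,
    "MEASURE": 0.2,
    "MANAGE": 0.2,
}

def weighted_score(function: str, response: str) -> float:
    """Return the priority-weighted maturity score for one response."""
    return RESPONSE_SCORES[response] * FUNCTION_PRIORITY[function]

print(round(weighted_score("GOVERN", "Defined and documented"), 2))  # 6 * 0.4
```

In the actual platform these rules are managed in the assessment profile; the sketch only shows how a response and a priority combine into one number.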


An organization can use the assessment profile in its surveys, gather responses to all the suggested actions, and understand its current posture and maturity level. This allows the organization to create an effective roadmap for the adoption of AI systems.

The following is an example of the default insights for the GOVERN function. This is a spider/radar chart that displays a maturity score for each of the 19 subfunctions. The scores are calculated by the assessment engine using the gathered responses (and their associated maturity scores) for the suggested actions.

Diagram – Maturity across the "GOVERN" functions:
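The per-subfunction scores behind such a chart can be sketched as an average of the gathered response scores, one value per chart spoke. The subfunction IDs and scores below are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical gathered responses: (subfunction_id, maturity_score 1-9)
responses = [
    ("GOVERN 1.1", 7), ("GOVERN 1.1", 5),
    ("GOVERN 1.2", 3), ("GOVERN 1.2", 6), ("GOVERN 1.2", 9),
    ("GOVERN 2.1", 4),
]

# Group response scores by subfunction.
by_subfunction = defaultdict(list)
for subfunction, score in responses:
    by_subfunction[subfunction].append(score)

# Average maturity per subfunction -> one spoke of the radar chart.
maturity = {sf: mean(scores) for sf, scores in by_subfunction.items()}
for sf, score in sorted(maturity.items()):
    print(f"{sf}: {score:.1f}")
```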

An organization can use the default profile in our assessment product and can customize it, if required, by:

a) Choosing only a subset of relevant questions for suggested actions

b) Updating the questions if required

c) Updating the appropriate responses based on the suggested actions

d) Updating the maturity scores

e) Updating the relative priority of the functions if required



With the rapid adoption of AI systems across many organizations, we think the framework and the playbook are an essential means for organizations to educate themselves on all aspects of managing the risks associated with adopting AI systems. More importantly, they provide an objective way to understand the current posture, build an adoption roadmap, and implement a governance mechanism for the continuous use of AI systems.

As the framework and the playbook are generic and sector- and industry-agnostic, some due diligence may be required to narrow them down to the relevant, organization-specific scope. NIST has announced that it will release an updated version of the playbook in the next few months, and it will be interesting to see whether it adds any industry- or sector-specific playbooks.

APPMODZ's “Rationalization and Assessment” platform accelerates the assessment effort with ready-to-use profiles and a self-service user portal that gathers and consolidates all the relevant responses in a simple manner and produces the desired insights, reducing the overall effort by over 60%.

You can get started with our marketplace appliance for the assessment platform. Once you sign up for the appliance, you will receive the AI RMF profile on demand.


Contact us at info@appmodz.net for further details.