Elements of NIST’s AI Risk Management Framework: An Overview

Siddhant Chatterjee & Ella Shoup

17 Apr 2024

Mandated under the National Artificial Intelligence Initiative Act of 2020, the NIST AI RMF is a voluntary risk management framework intended to serve as a resource for organizations that design, develop, deploy, or use AI systems. Developed by the US National Institute of Standards and Technology (NIST), the framework is meant to help organizations manage the risks of AI and promote the trustworthy and responsible development and use of AI systems, while remaining rights-preserving and non-sector-specific.

The NIST AI RMF is operationalized through a combination of five tools or elements, which help establish the principles a trustworthy AI system should have, the actions that should be taken to ensure trustworthiness across an AI system’s development and deployment lifecycle, and practical guidance on doing so. Second in our series on NIST’s AI Risk Management Framework, this blog looks at these key elements of the AI RMF: the Core, the AI RMF Playbook, the Roadmap, Crosswalks, and Use-Case Profiles.
