May 2024
The National Institute of Standards and Technology (NIST) has released a draft AI RMF Generative AI Profile to help organizations identify and respond to risks posed by generative AI (GAI). The profile provides a roadmap for managing GAI-related challenges across the stages of the AI lifecycle and offers proactive measures to mitigate GAI risks. Although voluntary, implementing an AI risk management framework can increase trust and improve ROI by ensuring your AI systems perform as expected.
Enterprises are turning to voluntary frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), to reduce the legal, reputational, and financial risks of their AI deployments. The AI RMF is a flexible framework that helps organizations using AI manage the risks associated with it, built around four key functions: Govern, Map, Measure, and Manage. The Govern function is foundational to successful AI risk management and informs the Map, Measure, and Manage functions. Each function includes suggested actions and recommended transparency and documentation practices, and the accompanying Playbook serves as a practical companion to the AI RMF, offering actionable and adaptable guidance. Prioritizing AI governance through risk management frameworks such as the AI RMF can increase trust and enhance ROI for AI systems.
April 2024
The NIST AI RMF is a voluntary risk management framework developed pursuant to the National Artificial Intelligence Initiative Act of 2020. It is designed to help organizations manage the risks of AI and promote trustworthy, responsible development and use of AI systems, and it is both rights-preserving and non-sector-specific. The framework is operationalized through five elements: the Core, the AI RMF Playbook, the Roadmap, the Crosswalks, and Use-Case Profiles. The Core provides the foundation for trustworthy AI systems through four key functions, Govern, Map, Measure, and Manage, to guide organizations in development and deployment across various domains. The AI RMF Playbook offers actionable guidance for implementing the AI RMF's functions through detailed sub-actions. The AI RMF Roadmap outlines NIST's strategy for advancing the AI RMF, focusing on collaboration and key activities to maintain its relevance. The AI RMF Crosswalks map the AI RMF to other risk frameworks, showing how adopting one framework can help satisfy the criteria of another. Finally, the Use-Case Profiles provide tailored implementations of the AI RMF's functions and actions for various sectors and use cases.
March 2024
25 Mar 2024
The National Institute of Standards and Technology (NIST) has released a voluntary risk management framework, the AI Risk Management Framework (AI RMF), to help organizations manage the risks associated with AI systems. The framework is adaptable to organizations of all sizes and comprises four core functions: Govern, Map, Measure, and Manage. The AI RMF also emphasizes four key themes: Adaptability, Accountability, Diversity, and Iteration. The framework is a resource for organizations that design, develop, deploy, or use AI systems, and it was developed through an 18-month consultation process with private- and public-sector groups.
January 2024
The Federal Artificial Intelligence Risk Management Act of 2024 has been introduced in the US Congress, requiring federal agencies to comply with the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology. The framework, designed to help organizations prevent, detect, mitigate, and manage AI risks, sets out four key processes: mapping, measuring, managing, and governing. The Act also includes guidance for agencies on incorporating the AI RMF, reporting requirements, and regulations on AI acquisition. Compliance with NIST's AI Risk Management Framework may soon become a legal requirement, as several state and federal laws already draw on it.