February 2023
The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0), designed to help organizations 'prevent, detect, mitigate, and manage AI risks'. The AI RMF aims to promote the adoption of trustworthy AI systems: those that are safe, valid, reliable, fair, privacy-enhancing, transparent and accountable, secure and resilient, and explainable and interpretable. The framework is built around four core functions: map, measure, manage, and govern. NIST recommends applying the AI RMF from the beginning of the AI lifecycle and involving diverse groups of stakeholders. The emphasis is on moving beyond purely computational metrics to the socio-technical context of AI development, deployment, and impact. The end goal is to improve public trust in AI and address negative impacts such as societal biases, discrimination, and inequality.
January 2023
The National Institute of Standards and Technology (NIST) has launched the first version of its Artificial Intelligence Risk Management Framework (AI RMF) after 18 months of development. The framework is designed to help organizations prevent, detect, mitigate, and manage AI risks and to promote the adoption of trustworthy AI systems. The AI RMF emphasizes flexibility, measurement, and trustworthiness, and asks organizations to cultivate a risk management culture. NIST anticipates that feedback from organizations using the framework will help establish a global gold standard in line with EU regulations.
August 2022
The US National Institute of Standards and Technology (NIST) has published a second draft of its AI Risk Management Framework (AI RMF), which explains how organizations should manage the risks of AI. The AI RMF is a set of voluntary guidelines intended to help prevent harms to people, organizations, or systems resulting from the development and deployment of AI. The framework has four core functions (govern, map, measure, and manage) that aim to cultivate a culture of AI risk management; establish appropriate structures, policies, and processes; understand an AI system's business value; and assess risks with bespoke metrics and methodologies. NIST expects AI risk management to become a core part of doing business by the end of the decade, much like privacy and cybersecurity.