Risk Management Profile for Artificial Intelligence and Human Rights

Daniel Shin

17 Oct 2024

Introduction

On July 25, 2024, the U.S. Department of State (DOS) published its “Risk Management Profile for Artificial Intelligence and Human Rights” (Profile). The Profile aims to guide government, private sector, and civil society entities in designing, developing, deploying, using, and governing artificial intelligence (AI) technologies in a manner consistent with international human rights. To anchor AI risk management strategies in international human rights law, DOS applied the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework 1.0 (AI RMF) to the human rights context, providing an AI lifecycle risk management approach and examples of common organizational functions. Although the Profile is voluntary guidance, it serves as “soft governance” that signals to organizations how to engage with AI technologies, consistent with President Biden’s Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI Executive Order), and the administration’s AI strategy of pursuing a “rights-based” approach to regulation. This article examines the potential high-level legal and policy implications arising from the document and identifies the practical impact of the Profile on organizations and government entities with respect to AI governance.
