Can voluntary AI frameworks be used to bring legal action?

Airlie Hilliard

18 Sep 2024

The AI legislative landscape is rapidly evolving, particularly in the US, where proposals are fragmented across the local, state, and federal levels. While several AI laws have been passed in recent years, many are still making their way through the legislative process, and many of those that have passed have not yet come into effect or are awaiting implementing regulations to support their enforcement. In the meantime, developers and deployers are increasingly looking to voluntary frameworks and initiatives for guidance on AI governance best practices and to build trust in their systems. These include the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the White House Blueprint for an AI Bill of Rights.

While these frameworks are entirely voluntary and designed to be flexible, a complaint filed with the Federal Trade Commission (FTC) against Deloitte by the National Health Law Program (NHeLP), the Electronic Privacy Information Center (EPIC), and Upturn cites these frameworks, alongside the OECD AI Principles, the Universal Guidelines for Artificial Intelligence, and Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of AI. In this blog post, we provide an overview of this unique complaint and examine how these voluntary instruments are being used as supporting evidence.
