May 2024

Colorado has passed SB24-205, a law protecting consumers in their interactions with AI systems by regulating high-risk AI systems and guarding against algorithmic discrimination. Developers of high-risk systems must take reasonable care to prevent algorithmic discrimination, such as disclosing specified information and conducting impact assessments. Deployers of high-risk systems must implement risk management programs, conduct impact assessments, and give consumers the opportunity to correct inaccurate personal data. The law applies to any person doing business in Colorado, and enforcement authority rests exclusively with the Attorney General. Developers and deployers have a defense against enforcement if they comply with a nationally or internationally recognized AI risk management framework. The law does not specify penalties for violations, and compliance can be maximized with Holistic AI's Governance Platform.
April 2024

In October 2023, President Biden signed the Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, which established AI safety and security standards, promoted consumer protection, enabled AI innovation, and advanced American technology leadership. Federal departments and agencies have completed all tasks due within the EO's 90-, 150-, and 180-day deadlines. These tasks included establishing disclosure requirements for developers of the most advanced AI systems, evaluating potential AI risks to critical infrastructure, preventing foreign entities from developing AI for malicious purposes, expanding investment in AI innovation and talent attraction, and creating an AI task force to expand AI innovation in healthcare. Further actions include establishing AI Governance Boards, submitting a report on authenticating, labeling, and detecting synthetic content, and identifying clinical errors arising from the use of AI in healthcare. The EO lays the foundation for AI risk management, regulation, and innovation in the US.

The US National Security Agency’s Artificial Intelligence Security Center (NSA AISC) collaborated with international agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and sets out best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them. Compliance is vital to upholding trust and innovating safely with AI.

The American Privacy Rights Act (APRA) proposal, released by two bipartisan committee chairs, could establish the first national data privacy framework in the US. It aims to fill the gap left by the absence of a national standard: it includes several consumer privacy provisions, restricts data collection and use, and creates a national registry of data brokers. The APRA does not specifically address AI, but its broad scope means it also covers AI systems that process personal data. Industry leaders have responded positively, but some lawmakers are disappointed by the lack of protections for minors and may introduce complementary legislation. The bill has not yet been formally introduced, and Holistic AI can help maximize compliance with the new requirements.

Several US federal agencies, including the EEOC, the Consumer Financial Protection Bureau, and the Federal Trade Commission, have issued a joint statement emphasizing their commitment to enforcing legal protections against discrimination and bias in automated systems and AI. The agencies stress that existing laws apply to automated systems and encourage responsible innovation. The statement details how each agency has already enforced legal protections in relation to AI and automated systems, underlining the importance of complying with both existing laws and AI-specific laws. It also warns of potential sources of unlawful discrimination in the use of automated systems, including training data, lack of transparency, and flawed assumptions about users and societal impact.