August 2024
The US Department of Commerce has announced progress on AI safety, security, and trustworthiness, 270 days after President Biden's executive order on the Safe, Secure, and Trustworthy Development of AI. The National Institute of Standards and Technology (NIST) has introduced multiple updates to support the objectives of the executive order, including the Generative AI Profile and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. The department has also released Dioptra, an open-source software tool for evaluating the resilience of AI models against adversarial attacks. In addition, the Plan for Global Engagement on AI Standards aims to foster international cooperation and the development of AI-related standards.
July 2024
US Senators John Hickenlooper and Shelley Moore Capito have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act to establish guidelines for third-party AI audits. The bill requires the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for two kinds of AI assurance: internal assurance and external assurance. These guidelines must address best practices, methodologies, procedures, and processes for assurance concerning consumer privacy, harm assessment and mitigation, dataset quality, documentation and communication, and governance and process controls.
June 2024
Colorado has enacted several laws regulating the use of AI, including SB169, which prohibits discrimination in insurance practices; SB205, which provides consumer protections against algorithmic discrimination resulting from AI; HB1147, which addresses the use of deepfake content in electoral communications; and SB22-113, which mandates governance of AI-driven facial recognition technologies by state entities. These laws impose transparency, governance, and risk management obligations on insurers, developers, and deployers, and introduce penalties for non-compliance. As AI legislation continues to accelerate, mapping exposure risks and staying on top of regulatory developments is essential for ensuring compliance.
May 2024
Colorado's Governor has signed SB24-205 into law, providing consumer protections for artificial intelligence (AI) that will come into effect on February 1, 2026. The law requires developers and deployers of AI systems to use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination. Deployers must implement a risk management policy, while developers must take reasonable care to protect consumers and provide deployers with documentation. The law defines algorithmic discrimination as unlawful differential treatment of, or impact on, individuals or groups on the basis of certain characteristics. The law does not apply to certain high-risk systems approved or used in compliance with federal agency standards.
Colorado has passed a law to protect consumers in their interactions with AI systems, SB24-205, which mandates regulation of high-risk AI systems and prevents algorithmic discrimination. Developers of high-risk systems must take reasonable precautions to prevent algorithmic discrimination, such as disclosing information and conducting impact assessments. Deployers of high-risk systems must implement risk management protocols, conduct impact assessments, and give consumers the opportunity to correct inaccurate data. The law applies to any person who does business in Colorado, and enforcement authority rests exclusively with the Attorney General. The law provides defenses for developers or deployers that comply with a nationally or internationally recognized AI risk management framework. There are no specified penalties for violations, and compliance can be maximized with Holistic AI's Governance Platform.