July 2024

Senators Hickenlooper and Capito introduce VET AI Act to create guidelines for AI assurance

US Senators John Hickenlooper and Shelley Moore Capito have introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act to establish guidelines for third-party AI audits. The bill requires the Director of the National Institute of Standards and Technology (NIST) to develop voluntary guidelines for both internal and external assurance of artificial intelligence systems, addressing best practices, methodologies, procedures, and processes for assurance concerning consumer privacy, harm assessment and mitigation, dataset quality, documentation and communication, and governance and process controls.

International competition authorities publish a joint statement on competition in generative AI

Competition authorities from the UK, US, and EU have published a joint statement outlining potential risks to fair competition that can emerge from generative AI and the principles needed to support competition and innovation while protecting consumers. These principles include fair dealing, interoperability, and choice, with a focus on informing consumers about when and how AI is used in products and services. Agencies in the US and UK are cracking down on AI risks and becoming increasingly vocal about the need to ensure AI complies with existing laws and does not harm consumers.

The EU AI Act is published in the Official Journal of the EU

The EU AI Act, which categorizes AI systems into risk levels and sets out corresponding requirements and obligations, has been published in the Official Journal of the EU. Implementation will be phased, with provisions on prohibited practices taking effect six months after the Act enters into force. With the timeline now fixed, organizations should begin preparing for compliance, and Holistic AI offers assistance in preparing for the AI Act.

May 2024

Singapore Unveils Comprehensive Framework for Governing Generative AI

Singapore released the Model AI Governance Framework for Generative AI in May 2024, which offers a comprehensive approach to managing the challenges of generative AI and encourages a global dialogue on the topic. The framework covers nine dimensions related to AI governance, including accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment R&D, and AI for public good. The framework stresses the importance of collaboration between policymakers, industry stakeholders, researchers, and like-minded jurisdictions. It calls for accountability in AI development and usage, responsible data governance, and democratization of AI access.

Colorado passes law enacting consumer protections for AI

Colorado has passed SB24-205, a law protecting consumers in their interactions with AI systems by regulating high-risk AI systems and guarding against algorithmic discrimination. Developers of high-risk systems must take reasonable precautions to prevent algorithmic discrimination, such as disclosing information and conducting impact assessments, while deployers of high-risk systems must implement risk management protocols, conduct impact assessments, and give consumers the opportunity to correct inaccurate data. The law applies to any person doing business in Colorado, and enforcement authority rests solely with the Attorney General. The law provides a defense for developers and deployers that comply with a nationally or internationally recognized AI risk management framework. There are no specified penalties for violations, and compliance can be maximized with Holistic AI's Governance Platform.