July 2024

The EU AI Act Published in the Official Journal of the EU

The EU AI Act, which categorizes AI systems into risk levels and sets out corresponding requirements and obligations, has been published in the Official Journal of the EU. Implementation will be phased, with the provisions on prohibited practices taking effect six months after the Act enters into force. Organizations are urged to begin compliance preparations now, and Holistic AI offers assistance in preparing for the AI Act.

May 2024

Singapore Unveils Comprehensive Framework for Governing Generative AI

Singapore released the Model AI Governance Framework for Generative AI in May 2024, offering a comprehensive approach to managing the challenges of generative AI and encouraging a global dialogue on the topic. The framework covers nine dimensions of AI governance: accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment R&D, and AI for public good. It stresses collaboration between policymakers, industry stakeholders, researchers, and like-minded jurisdictions, and calls for accountability in AI development and use, responsible data governance, and the democratization of AI access.

Colorado Passes Law Enacting Consumer Protections for AI

Colorado has passed SB24-205, a law protecting consumers in their interactions with AI systems by regulating high-risk AI systems and targeting algorithmic discrimination. Developers of high-risk systems must use reasonable care to prevent algorithmic discrimination, including by disclosing specified information and documentation. Deployers of high-risk systems must implement risk management programs, conduct impact assessments, and give consumers the opportunity to correct inaccurate data. The law applies to any person doing business in Colorado, and enforcement authority rests exclusively with the Attorney General. It provides a defense for developers or deployers that comply with a nationally or internationally recognized AI risk management framework. The law does not specify penalties for violations, and compliance can be streamlined with Holistic AI's Governance Platform.

April 2024

Biden Executive Order 14110 – A 6-Month Update

In October 2023, President Biden signed the Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence, which established AI safety and security standards, promoted consumer protection, enabled AI innovation, and advanced American technology leadership. Federal departments and agencies have completed all tasks due within 90, 150, and 180 days of the EO's signing. These tasks included establishing disclosure requirements for developers of the most advanced AI systems, evaluating potential AI risks to critical infrastructure, preventing foreign entities from developing AI for malicious purposes, expanding investment in AI innovation and talent attraction, and creating an AI task force to expand AI innovation in healthcare. Further actions include establishing AI Governance Boards, submitting a report on authenticating, labeling, and detecting synthetic content, and identifying clinical errors arising from the use of AI in healthcare. The EO lays the foundation for AI risk management, regulation, and innovation in the US.

International Joint Guidance on Deploying AI Systems Securely

The US National Security Agency's Artificial Intelligence Security Center (NSA AISC) collaborated with international agencies to release joint guidance on Deploying AI Systems Securely. The guidance advises organizations to implement robust security measures to prevent misuse and data theft, and provides best practices for deploying and using externally developed AI systems. It recommends three overarching best practices: secure the deployment environment, continuously protect the AI system, and secure AI operation and maintenance. The guidelines are voluntary, but all institutions that deploy or use externally developed AI systems are encouraged to adopt them. Following the guidance is vital to upholding trust and innovating with AI safely.
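One concrete measure of this kind is validating an externally developed model artifact against a cryptographic digest before putting it into service, so that a tampered or substituted model is rejected rather than deployed. The guidance itself is tooling-agnostic, so the sketch below is only an illustration of that idea in Python; the model path and expected digest are hypothetical placeholders, not values from the guidance.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholders -- in practice, the digest would be published
# by the model's developer through a trusted channel.
MODEL_PATH = Path("models/external_model.onnx")
EXPECTED_SHA256 = "<digest published by the model developer>"


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path: Path, expected: str) -> None:
    """Refuse to proceed if the artifact's hash does not match the
    digest supplied by its developer."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(
            f"Integrity check failed for {path}: "
            f"expected {expected}, got {actual}"
        )


if __name__ == "__main__":
    verify_model_artifact(MODEL_PATH, EXPECTED_SHA256)
    print("Model artifact verified; proceeding with deployment.")
```

A check like this addresses only one slice of the guidance (artifact integrity at deployment time); continuous protection would additionally involve access controls, monitoring, and re-validation throughout the system's operational life.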