May 2024

Singapore Unveils Comprehensive Framework for Governing Generative AI

In May 2024, Singapore released the Model AI Governance Framework for Generative AI, which offers a comprehensive approach to managing the challenges of generative AI and encourages global dialogue on the topic. The framework covers nine dimensions of AI governance: accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment R&D, and AI for public good. It stresses the importance of collaboration among policymakers, industry stakeholders, researchers, and like-minded jurisdictions, and it calls for accountability in AI development and use, responsible data governance, and democratization of AI access.

Navigating the Governance Architecture of the EU AI Act

The EU AI Act introduces a governance structure to ensure coordinated and effective implementation and enforcement of AI regulation at the national and Union levels. The framework comprises four entities, each with distinct roles and responsibilities: the AI Office, the AI Board, the Advisory Forum, and the Scientific Panel. The AI Office leads the implementation and enforcement of the Act, while the AI Board advises and assists in its consistent application across the EU. The Advisory Forum provides technical expertise and stakeholder input, and the Scientific Panel supports the Act's implementation with scientific insights and guidance. Experts selected for these entities must possess relevant competencies, independence, and scientific or technical expertise in the field of AI. Compliance with the EU AI Act is crucial, and early adoption of its principles can make eventual compliance smoother.

Setting the Standards for AI: The EU AI Act’s Scheme for the Standardization of AI Systems

The EU AI Act introduces standardization instruments, such as harmonized standards, to facilitate compliance with the Act's requirements and obligations. Providers of high-risk AI systems and general-purpose AI (GPAI) models enjoy a presumption of compliance if they follow these standards. Standardization is not mandatory, however: providers who do not follow the standards must demonstrate compliance through other means, which adds to their workload, and they still face penalties if they fall short of the Act's requirements. Harmonized standards are expected to cover the requirements for high-risk AI systems and the obligations of providers of GPAI models and GPAI models with systemic risk. Following these standards can allow providers to bypass third-party conformity assessments for certain high-risk AI systems, but providers must still ensure compliance with requirements and obligations outside the scope of the harmonized standards. Although the EU AI Act does not become fully applicable until mid-2026, market operators should prepare in advance to comply with the evolving regulatory framework around AI.

10 Things You Need to Know about Colorado’s SB205 Consumer Protections for Artificial Intelligence

Colorado's Governor signed SB24-205 into law, establishing consumer protections for artificial intelligence (AI) that take effect on February 1, 2026. The law requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Deployers must implement a risk management policy, while developers must use reasonable care to protect consumers and provide deployers with documentation. The law defines algorithmic discrimination as unlawful differential treatment or impact that disfavors individuals or groups on the basis of certain protected characteristics. It does not apply to certain high-risk systems approved by, or used in compliance with standards established by, federal agencies.

NIST AI RMF Generative AI Use Case Profiles

The National Institute of Standards and Technology (NIST) has released a draft AI RMF Generative AI Profile to help organizations identify and respond to the risks posed by generative AI (GAI). The profile provides a roadmap for managing GAI-related challenges across the stages of the AI lifecycle and offers proactive measures to mitigate GAI risks. Although the profile is voluntary, implementing an AI risk management framework can build trust and improve ROI by helping ensure your AI systems perform as expected.