October 2024

Shaping the Future of General-Purpose AI: Europe Begins Drafting the First Code of Practice for General-Purpose AI Models

The European AI Office has begun drafting the first-ever Code of Practice for General-Purpose AI (GPAI) models, which will serve as a guiding framework for demonstrating compliance under the EU AI Act. Over 1,000 stakeholders are involved in the drafting process, which is structured around four rounds of input from working groups led by independent experts. The Code of Practice sets out guidelines for providers of GPAI models to maintain up-to-date technical documentation, manage risks, and document systemic risks under a cumulative, risk-based classification. The final version is expected to be published in April 2025, ahead of 2 August 2025, when the relevant GPAI provisions of the EU AI Act begin to apply. If the Code of Practice cannot be finalised or is deemed inadequate, the European Commission may instead introduce common rules through implementing acts.

September 2024

California Governor Newsom vetoes Safe & Secure AI Innovation Bill (SB1047)

California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (“SB1047”), which aimed to establish a comprehensive regulatory framework for the development and deployment of large-scale AI models in California. The bill would have set strict safety standards for developers of powerful AI models, but it faced opposition from industry over concerns that it would stifle innovation and business growth. Governor Newsom vetoed the bill citing the need for a more thorough analysis of industry practice. The veto comes amid a growing regulatory agenda on AI, with Newsom recently signing 10 other AI bills into law.

California 2024 AI Legislation Roundup

California, home to 32 of the top 50 AI companies globally, has enacted several new AI-related laws and regulations focused on transparency, ethical use, and protecting individuals from harmful AI-generated content. Key legislative activity includes laws prohibiting non-consensual deepfake content, protecting against unauthorized sharing of intimate digital images, mandating clear disclosures for digitally manipulated political advertising, and imposing stricter requirements on the collection and sharing of personal information. Governor Newsom is set to decide on pending bills that would mandate AI developer disclosures and set safety standards for powerful AI systems. The most controversial bills face significant opposition from industry leaders, who argue they could stifle innovation, while supporters believe they could serve as a model for future regulation. Other proposed laws address AI in healthcare, transportation, and communications, with specific provisions for autonomous vehicles, healthcare decision-making, and consumer privacy. Stakeholders across industries are closely monitoring these developments as California continues to lead efforts in AI regulation.

Can voluntary AI frameworks be used to bring legal action?

NHeLP, EPIC, and Upturn have filed a complaint against Deloitte over the accuracy of its Texas Integrated Eligibility Redesign System (TIERS), which is used to evaluate individuals' eligibility for Medicaid. The complaint accuses Deloitte of engaging in deceptive and unfair practices in violation of the Federal Trade Commission (FTC) Act, and cites several established voluntary AI frameworks, including the OECD AI Principles and the Universal Guidelines for AI, in support of its case. It calls on Deloitte to adopt these frameworks, increase transparency, and implement harm mitigation measures. The complaint underscores the value of voluntarily adopting responsible AI frameworks to reduce reputational and legal risk.

The Data Governance Regime under the EU AI Act: Intersections with the GDPR and Personal Data Protection

The EU AI Act places data governance and management at the center of AI development, imposing strict requirements on high-risk AI systems and general-purpose AI models. The Act also addresses the interplay between AI governance and personal data protection law, particularly the GDPR. It introduces new legal grounds for personal data processing and mandates impact assessments to address risks to fundamental rights and freedoms. However, AI technologies present unique challenges for privacy and personal data protection, and enterprises may need to navigate compliance with multiple regulatory frameworks at once. Proper preparation is vital to avoid the harsh consequences of non-compliance.