October 2024

Risk Management Profile for Artificial Intelligence and Human Rights

The US Department of State has published a "Risk Management Profile for Artificial Intelligence and Human Rights" to provide guidance on how to design, develop, deploy, use, and govern AI technology in a manner consistent with international human rights. The profile serves as "soft governance," signaling to organizations how to engage with AI technologies in a way that aligns with President Biden's Executive Order 14110 and the administration's "rights-based" approach to AI regulation. The profile applies NIST's AI Risk Management Framework 1.0 to the human rights context, providing a lifecycle-based risk management strategy that acknowledges human-rights-related risks may emerge at any stage of the AI lifecycle.

AI Regulation in Finance: What will be the Implications of the EU AI Act for Financial Services?

The use of AI in financial institutions is expanding and maturing, but it also presents new risks and challenges. The EU AI Act introduces a regulatory scheme for AI development and deployment in the EU, with significant implications for the use and governance of AI in financial services. AI systems used for credit assessment of individuals and for risk assessment and pricing in life and health insurance are classified as high-risk under the Act and are subject to strict requirements. Financial institutions must properly determine their role under the Act and fulfill the obligations applicable to high-risk AI systems (HRAIS), including technical documentation, data governance measures, risk management, conformity assessments, record-keeping, and establishing a quality management system. Compliance can be facilitated by integrating some of these obligations into existing financial regulatory processes, but proper preparation is vital.

Shaping the Future of General-Purpose AI: Europe Begins Drafting the First Code of Practice for General-Purpose AI Models

The European AI Office has started the drafting process of the first-ever Code of Practice for general-purpose AI (GPAI) models under the EU AI Act. The Code of Practice aims to provide guidelines for providers of GPAI models to comply with the stringent requirements of the Act. The drafting process involves four rounds of consultation and input from working groups. Once finalized, the Code of Practice will serve as an essential tool for providers of GPAI models to demonstrate compliance with the AI Act's requirements. The Code will remain in effect until a European harmonized standard is published. The relevant provisions for GPAI models will begin applying from 2 August 2025, and the final Code of Practice is expected to be published in April 2025. If the Code of Practice is not ready or deemed inadequate, the European Commission may intervene to introduce common rules through implementing acts.

September 2024

California Governor Newsom vetoes Safe & Secure AI Innovation Bill (SB1047)

California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act ("SB1047"), which aimed to establish a comprehensive regulatory framework for the development and deployment of large-scale AI models in California. The bill would have set strict safety standards for developers of powerful AI models, but it faced opposition from the industry over concerns that it would stifle innovation and business growth. Governor Newsom vetoed the bill citing the need for a more thorough, evidence-based analysis of AI systems and their risks. The veto comes amidst a growing regulatory agenda on AI, with 10 AI bills recently signed by Newsom.

California 2024 AI Legislation Roundup

California, home to 32 of the top 50 AI companies globally, has enacted several new laws and regulations related to AI, focusing on transparency, ethical use, and protecting individuals from harmful AI-generated content. Key legislative activity includes laws prohibiting non-consensual deepfake content, protecting against unauthorized sharing of intimate digital images, mandating clear disclosures for digitally manipulated political advertising, and imposing stricter requirements on collecting and sharing personal information. Governor Newsom is set to decide on pending bills that would mandate AI developer disclosures and set safety standards for powerful AI systems. The most controversial bills face significant opposition from industry leaders, who argue they could stifle innovation, while supporters believe the measures could serve as a model for future regulation. Other proposed laws would regulate AI in healthcare, transportation, and communications, with specific provisions for autonomous vehicles, healthcare decision-making, and consumer privacy. Stakeholders across various industries are closely monitoring these developments as California continues to lead efforts in AI regulation.