October 2024
The US Department of State has published a Risk Management Profile for Artificial Intelligence and Human Rights, voluntary guidance for governmental, private, and civil society entities on using AI technologies in a manner consistent with international human rights. The Profile draws on the National Institute of Standards and Technology's AI Risk Management Framework 1.0, which provides a lifecycle approach to AI risk management and examples of common organizational functions. The Profile is not exhaustive, but it illustrates situations with human rights implications that organizations may encounter when using AI systems. It offers a normative rationale for adopting the US’s approach to AI governance and risk mitigation, and is likely to shape long-term considerations in this arena.
The use of AI technologies in financial institutions is increasing in areas such as marketing, process automation, and back-office functions. The EU AI Act introduces new rules for AI-related risks in financial services, including strict requirements for high-risk AI systems used in credit assessments and in life and health insurance evaluations. Financial institutions must observe transparency rules for all AI systems and comply with new AI-specific requirements for high-risk use cases. The Act allows exclusion from high-risk status under certain conditions, and financial institutions must correctly determine their role under the Act to identify which obligations apply to them. Some of those obligations can be integrated into existing legal frameworks for financial services, and early preparation is important for navigating the complex dimensions of AI governance in finance.
The European AI Office has initiated the drafting process for the first Code of Practice for general-purpose AI (GPAI) models under the EU AI Act. The Code will serve as a guiding framework for meeting the Act's stringent requirements, giving GPAI model providers a way to demonstrate compliance with their legal obligations, including identifying and addressing systemic risks. Over 1,000 stakeholders are involved in the drafting process, which will span four rounds of reviews and consultations, with the final version expected in April 2025. If the Code of Practice is not ready, or is deemed inadequate, by 2 August 2025, the European Commission may introduce common rules to ensure compliance with the AI Act.
September 2024
California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, which aimed to regulate the development and deployment of large-scale AI systems in the state. The bill, which set strict safety standards for developers of AI models costing more than $100m to train and empowered the California Attorney General to hold those developers accountable for negligence that caused harm, was opposed by Big Tech. Newsom acknowledged the need for regulatory measures on AI development but criticised the bill for not being informed by "an empirical trajectory analysis of AI systems".
California, home to 32 of the top 50 AI companies globally, has enacted several new laws and regulations related to AI, focusing on transparency, ethical use, and protecting individuals from harmful AI-generated content. Key legislative activity includes laws prohibiting non-consensual deepfake content, protecting against unauthorized sharing of intimate digital images, mandating clear disclosures for digitally manipulated political advertising, and imposing stricter requirements on collecting and sharing personal information. Governor Newsom is set to decide on pending bills mandating AI developer disclosures and setting safety standards for powerful AI systems. Controversial bills face significant opposition from industry leaders, who argue they could stifle innovation, while supporters believe they could serve as a model for future regulations. Other proposed laws regulate AI in healthcare, transportation, and communications, with specific provisions for autonomous vehicles, healthcare decision-making, and consumer privacy. Stakeholders across various industries are closely monitoring these developments as California continues to lead efforts in AI regulation.