October 2024

AI Regulation in Finance: What Will Be the Implications of the EU AI Act for Financial Services?

The use of AI in financial institutions is expanding and bringing improvements, but it also presents new risks and challenges. The EU AI Act introduces a regulatory scheme for AI development and deployment in the EU, with significant implications for the use and governance of AI in financial services. AI systems used for the credit assessment of individuals and for risk assessment and pricing in life and health insurance are classified as high-risk under the Act and are subject to strict requirements. Financial institutions must correctly determine their role under the Act and fulfill the obligations that apply to high-risk AI systems (HRAIS), including technical documentation, data governance measures, risk management, conformity assessments, record-keeping, and establishing a quality management system. Compliance can be eased by integrating some of these obligations into governance procedures already established under existing financial services regulation, but proper preparation is vital.

Shaping the Future of General-Purpose AI: Europe Begins Drafting the First Code of Practice for General-Purpose AI Models

The European AI Office has begun drafting the first-ever Code of Practice for general-purpose AI (GPAI) models under the EU AI Act. The Code of Practice aims to give providers of GPAI models guidance on complying with the Act's stringent requirements. The drafting process involves four rounds of consultation and input from working groups. Once finalized, the Code of Practice will serve as an essential tool for providers of GPAI models to demonstrate compliance with the AI Act's requirements, and it will remain in effect until a European harmonized standard is published. The final Code of Practice is expected to be published in April 2025, ahead of 2 August 2025, when the Act's provisions on GPAI models begin to apply. If the Code of Practice is not ready by then or is deemed inadequate, the European Commission may intervene and introduce common rules through implementing acts.

September 2024

The Data Governance Regime under the EU AI Act: Intersections with the GDPR and Personal Data Protection

The EU AI Act imposes data governance and management requirements on AI development, with strict obligations for high-risk AI systems and general-purpose AI models. The Act also addresses the interplay between AI governance and personal data protection law, particularly the GDPR: it introduces new legal grounds for processing personal data and mandates impact assessments to address risks to fundamental rights and freedoms. At the same time, AI technologies present unique challenges for privacy and personal data protection, and enterprises may need to navigate compliance with multiple regulatory frameworks at once. Proper preparation is vital to avoid the severe consequences of non-compliance.

UK Introduces New Bill to Regulate Automated Decision-Making in the Public Sector

The Public Authority Algorithmic and Automated Decision-Making Systems Bill (HL Bill 27) has been introduced in the UK to regulate the use of automated and algorithmic tools in decision-making processes within public authorities. The proposed legislation would apply to all algorithmic and automated decision-making systems developed or procured by public authorities, excluding those used for national security and routine calculations. The bill introduces requirements for algorithmic impact assessments, algorithmic transparency records, monitoring and auditing, employee training, and adherence to human rights and democratic values. The legislation aims to align with the Council of Europe's Framework Convention on AI.

The First Internationally Binding AI Convention Opens for Signature

The Council of Europe's (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law has opened for signature and has already been signed by several parties, including the United States, the European Union, and the United Kingdom. The Convention applies primarily to States and certain international organizations. It establishes a comprehensive set of obligations designed to ensure that AI systems are developed, deployed, and used throughout their lifecycle in ways that respect human rights, democracy, and the rule of law. The Convention also sets out several guiding principles and a risk management framework for AI systems. It marks a pivotal step in the global governance of AI, extending the geographic reach of such initiatives through the Council of Europe's broad membership.