September 2023

The UK House of Commons Committee on Science, Innovation and Technology has published an interim report on the governance of artificial intelligence (AI), highlighting 12 key challenges that policymakers should keep in mind when developing AI governance frameworks. The report recommends that an AI bill be introduced into Parliament in the coming months to support the UK’s aspiration of becoming a leader in AI governance. The Committee also warned that if an AI bill is not introduced before the general election, the UK could be left behind by the EU and US, which have already made significant legislative progress towards regulating AI.
August 2023

Spain has established a new regulatory body, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which will oversee the country's National Artificial Intelligence Strategy and ensure AI development aligns with principles of inclusivity, sustainability, and welfare. AESIA is also expected to enforce the EU's landmark AI Act, which requires each EU member state to establish a supervisory authority to support the implementation and application of the legislation. The establishment of AESIA comes as part of Spain's Digital Spain 2025 Agenda, a €600 million initiative aiming to shape the country's digital future.

The EU AI Act will impose obligations on insurance providers using AI, with the requirements evolving through successive rounds of legislative fine-tuning and negotiation. Initially, insurance practices were not classified as high-risk, but amendments from the Slovenian Presidency and European Parliamentary Committees brought AI systems used for insurance premium setting, underwriting, and claims assessment within the high-risk requirements. The text adopted by the European Parliament in June 2023 narrowed this to AI systems used to make or influence decisions about eligibility for health and life insurance as high-risk applications. Insurance providers using AI for this purpose must comply with seven requirements for high-risk systems, including risk management, data governance, transparency, human oversight, and accuracy, robustness, and cybersecurity. Non-compliance could result in fines of up to 40 million euros or 7% of global turnover.

The European Union has updated the EU AI Act with provisions for regulating foundation models and generative AI, technologies that offer significant benefits but also carry the potential for harm. While foundation models are multi-purpose and versatile, they can generate dangerous content, produce biased results, and expose data breaches. Generative AI can also produce copyright-infringing content and disinformation. The EU AI Act imposes obligations on providers of foundation models and generative AI, requiring risk reduction, data governance, transparency, and cooperation across the AI value chain. The Act defines foundation models as AI models developed for versatility and ease of deployment across multiple contexts, and generative AI as AI systems capable of producing complex content with varying levels of autonomy.

California is taking significant steps towards regulating AI, with multiple proposed laws aimed at making AI systems safer and fairer. AB-331 seeks to prohibit the use of automated decision tools that contribute to or result in algorithmic discrimination, while the California Workplace Technology Accountability Act would regulate worker information systems and electronic monitoring in the workplace. Modifications have also been proposed to California’s existing employment regulations to address the use of AI in employment decisions. Additionally, SB-313 seeks to establish the Office of Artificial Intelligence within the Department of Technology to guide the design, use, and deployment of automated systems used by state agencies, with the aim of minimizing bias.