August 2023
Spain has established a new regulatory body, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which will oversee the country's National Artificial Intelligence Strategy and ensure AI development aligns with principles of inclusivity, sustainability, and welfare. AESIA is also expected to enforce the EU's landmark AI Act, under which each EU member state must establish a supervisory authority to support the implementation and application of the legislation. The establishment of AESIA comes as part of Spain's Digital Spain 2025 Agenda, a €600 million initiative aiming to shape the country's digital future.
The EU AI Act will impose obligations on insurance providers using AI, with requirements evolving through successive rounds of legislative fine-tuning and negotiation. Initially, insurance practices were not considered high-risk, but amendments from the Slovenian Presidency and European Parliamentary committees brought AI systems used for insurance premium setting, underwriting, and claims assessment under the high-risk requirements. The text adopted by the European Parliament in June 2023 classifies AI systems used to make or influence decisions about eligibility for health and life insurance as high-risk applications. Insurance providers using AI for this purpose must comply with seven requirements for high-risk systems, including risk management, data governance, transparency, human oversight, and accuracy, robustness, and cybersecurity. Non-compliance could result in a fine of up to 40 million euros or 7% of global turnover.
The European Union has updated the EU AI Act with provisions for regulating foundation models and generative AI, which have the potential for both benefit and harm. While foundation models are multi-purpose and versatile, they could produce dangerous content or biased results and lead to data breaches. Generative AI could also produce copyright-infringing content and disinformation. The EU AI Act imposes obligations on providers of foundation models and generative AI, requiring risk reduction, data governance, transparency, and cooperation across the AI value chain. The Act defines foundation models as AI models developed for versatility and ease of deployment across multiple contexts, and generative AI as AI systems capable of producing complex content with varying levels of autonomy.
California is taking significant steps towards regulating AI, with multiple proposed laws aimed at making AI systems safer and fairer. AB-331 seeks to prohibit the use of automated decision tools that contribute to or result in algorithmic discrimination, while the California Workplace Technology Accountability Act focuses on regulating worker information systems and electronic monitoring in the workplace. Modifications have also been proposed to California’s existing employment regulations to address the use of AI in employment decisions. Additionally, SB-313 seeks to establish the Office of Artificial Intelligence within the Department of Technology to guide the design, use, and deployment of automated systems used by state agencies, with the aim of minimizing bias.
Artificial intelligence (AI) and automation are rapidly transforming the insurance sector, with $8 billion invested in insurtech start-ups between 2018 and 2019. However, the use of algorithms in insurance has come under fire for producing biased outcomes, prompting policymakers to introduce regulation targeting the algorithms insurers use. Colorado's Senate Bill 21-169 and the European Commission's EU AI Act seek to prohibit insurers from unfairly discriminating on the basis of protected characteristics and to ensure that AI systems meet certain obligations. The National Association of Insurance Commissioners has also emphasized the importance of accountability, compliance, and transparency in the use of AI in insurance throughout its entire lifecycle.