September 2023
The Governor of California, Gavin Newsom, has issued an executive order on artificial intelligence (AI), outlining a strategic plan for the responsible design, development, integration, and management of emerging AI technologies. The order acknowledges the potential benefits and risks associated with generative AI tools and calls for a unified governance approach to address these challenges. Among the requirements for state agencies are the submission of a report within 60 days of the order's issuance, detailing the "most significant, potentially beneficial use cases" for the implementation and integration of generative AI tools, and a risk analysis of potential threats and vulnerabilities to California's critical energy infrastructure related to generative AI by March 2024. The order also establishes guidelines for public sector procurement, sets up a pilot programme, and mandates training for state government workers on the use of generative AI tools to achieve equitable outcomes, to be delivered no later than July 2024.
The European Commission has designated Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft as "gatekeepers" under the Digital Markets Act (DMA). Between them, these companies operate 22 core platform services now subject to new rules promoting fair competition and consumer choice. Gatekeepers must conduct independent annual audits of their customer profiling methods and comply with rules governing their interactions with other businesses, consumers, advertisers, and publishers on their platforms. Failure to comply could result in fines and periodic penalty payments. The DMA will work in tandem with the AI Act and the Digital Services Act.
The UK House of Commons Committee on Science, Innovation and Technology has published an interim report on the governance of artificial intelligence (AI), highlighting 12 key challenges that policymakers should keep in mind when developing AI governance frameworks. The report recommends that an AI bill be introduced into Parliament in the coming months to support the UK's aspirations of becoming an AI governance leader. The Committee also recognised that if an AI bill is not introduced before the general election, the UK could be left behind by the EU and US, which have already made significant legislative progress towards regulating AI.
August 2023
Spain has established a new regulatory body, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which will oversee the country's National Artificial Intelligence Strategy and ensure AI development aligns with principles of inclusivity, sustainability, and welfare. AESIA is also expected to enforce the EU's landmark AI Act, under which each EU member state must establish a supervisory authority to support the implementation and application of the legislation. The establishment of AESIA comes as part of Spain's Digital Spain 2025 Agenda, a €600 million initiative aiming to shape the country's digital future.
The EU AI Act will impose obligations on insurance providers using AI, with requirements evolving through successive rounds of legislative fine-tuning and negotiation. Initially, insurance practices were not considered high-risk, but amendments from the Slovenian Presidency and European Parliamentary Committees brought AI systems used for insurance premium setting, underwriting, and claims assessment under the high-risk requirements. The text adopted by the European Parliament in June 2023 classifies AI systems used to make or influence decisions about eligibility for health and life insurance as high-risk applications. Insurance providers using AI for this purpose must comply with seven requirements for high-risk systems, including risk management, data governance, transparency, human oversight, and accuracy and cybersecurity. Non-compliance could result in a fine of up to €40 million or 7% of global turnover.