February 2023

AI Regulation Around the World: The Netherlands

The Dutch government is increasing oversight of AI systems following a scandal involving a biased algorithm used by its tax office. The government has committed to a new statutory regime ensuring that AI systems are checked for transparency and discrimination, and the data protection regulator will receive extra funding for algorithm oversight. The Dutch government wants more transparency about AI systems deployed in the public sector and is proposing a legal requirement to use an assessment framework, a register of high-risk AI systems, and specific measures for human oversight and non-discrimination. The proposals currently apply only to the public sector, but they are likely to affect businesses supplying AI systems to the public sector and to raise public awareness of how AI systems are used.

December 2022

EU AI Act: Summary of Updates on Final Compromise Text

EU ministers have approved a general approach to the EU AI Act, which seeks to balance the protection of fundamental rights with the promotion of AI innovation by defining AI, expanding the scope of the act, clarifying governance, extending the prohibition of social scoring to private actors, designating high-risk systems, and clarifying the feasibility of compliance for high-risk systems. The final compromise text includes several changes to increase transparency and simplify the required conformity assessments. The Council of the EU will now negotiate with the European Parliament, with an agreement expected by early 2024. Businesses are advised to take steps now to manage the risks of their AI systems so they can embrace AI with greater confidence.

November 2022

Spain's Rider Law: Algorithmic Transparency and Worker Rights

Spain has launched the first regulatory sandbox to test new rules for artificial intelligence (AI) and algorithmic systems under the EU AI Act. The country has also introduced a "rider law", Royal Decree-Law 9/2021, to safeguard the employment rights of delivery workers who work through digital platforms. The law includes a presumption of employment for riders managed by a digital platform's algorithm, giving them additional job security and safety protections. Employers are required to inform works councils of the parameters, rules, and instructions on which the AI systems used as part of the platform are based. The Ministry of Labour has also published guidelines for complying with algorithmic transparency obligations, including those under the GDPR where applicable.

October 2022

What Enterprises Need to Know About the EU’s AI Liability Directive

The EU has proposed the AI Liability Directive to make it easier for victims of AI-induced harm to prove liability and receive compensation for damages. The Directive reinforces the EU AI Act, which aims to prevent harm caused by AI in the first place. It empowers courts to order the disclosure of evidence regarding high-risk AI systems and introduces a rebuttable presumption of a causal link between non-compliance with relevant laws and AI-induced harm. Enterprises may be obliged to disclose information regarding their AI risk management framework, system design specifications, and oversight of the AI system. Claimants can be the injured individual, an insurance company, or the heirs of a deceased person. Enterprises that develop or deploy AI systems should act now to establish robust AI risk management processes and prepare for compliance with the AI Act, so that their AI risks are detected, minimised, monitored, and prevented.

July 2022

Pro-innovation: The UK’s Framework for AI Regulation

The UK government has proposed a pro-innovation framework for regulating artificial intelligence (AI) that is context-specific and based on the use and impact of the technology. The government plans to set out broad cross-cutting principles covering transparency, accountability, safety, security, and privacy, as well as a mechanism for redress or contestability of AI decisions, while delegating responsibility for developing enforcement strategies to the appropriate regulators. The approach aims to give regulators flexibility and avoid introducing unnecessary barriers to innovation, with regulators focusing on high-risk applications. The government is seeking stakeholder views by September 26, 2022, ahead of a white paper with more granular detail and implementation plans towards the end of the year.