December 2022

EU AI Act: Summary of Updates on Final Compromise Text

EU ministers have approved a general approach to the EU AI Act, which aims to balance the protection of fundamental rights with the promotion of AI innovation by defining AI, expanding the scope of the act, clarifying governance, extending the prohibition on social scoring to private actors, designating high-risk systems, and making compliance for high-risk systems more feasible. The final compromise text includes several changes that increase transparency and simplify the required conformity assessments. The Council will now negotiate with the European Parliament, with an agreement expected by early 2024. Businesses are advised to take steps now to manage the risks of their AI systems so they can embrace AI with greater confidence.

November 2022

Spain's Rider Law: Algorithmic Transparency and Worker Rights

Spain has launched the first regulatory sandbox to test new rules for artificial intelligence (AI) and algorithmic systems under the EU AI Act. The country has also introduced a "Rider Law", Royal Decree-Law 9/2021, to safeguard the employment rights of delivery workers who work through digital platforms. The law includes a presumption of employment for riders whose work is organised by a digital platform's algorithm, giving them additional job security and safety protections. Employers are required to inform works councils of the parameters, rules and instructions on which the AI systems used by the platform are based. The Ministry of Labour has also published guidelines on complying with algorithmic transparency obligations, including those under the GDPR where applicable.

October 2022

What Enterprises Need to Know About the EU’s AI Liability Directive

The EU has proposed the AI Liability Directive to make it easier for victims of AI-induced harm to prove liability and receive compensation for damages. The Directive reinforces the EU AI Act, which aims to prevent harm caused by AI in the first place. It empowers courts to order the disclosure of evidence concerning high-risk AI systems and introduces a rebuttable presumption of a causal link between non-compliance with relevant laws and the harm caused by an AI system. Enterprises may be obliged to disclose information about their AI risk management framework, system design specifications, and oversight of the AI system. Claimants can be the injured individual, an insurance company, or the heirs of a deceased person. Enterprises that develop or deploy AI systems should act now to establish robust AI risk management processes, ensuring that AI risks are detected, minimised, monitored and prevented, and should prepare for compliance with the AI Act.

July 2022

Pro-innovation: The UK’s Framework for AI Regulation

The UK government has proposed a pro-innovation framework for regulating artificial intelligence (AI) that is context-specific and based on the use and impact of the technology. The government plans to set out high-level principles covering transparency, accountability, safety, security and privacy, along with mechanisms for redress or contestability, and to delegate responsibility for developing enforcement strategies to the appropriate regulators. This approach is intended to give regulators flexibility and avoid introducing unnecessary barriers to innovation, with regulators focusing on high-risk applications. The government is inviting stakeholder views by September 26, 2022, ahead of a white paper with more granular detail and implementation plans towards the end of the year.

Regulatory Sandboxes and The EU AI Act

The EU Artificial Intelligence (AI) Act introduces regulatory sandboxes, which provide a controlled environment in which providers can test their products' compliance before they are placed on the market. The Act gives priority access to small and medium-sized enterprises and encourages member states to develop their own sandboxes. Spain will pilot a regulatory sandbox aimed at testing the requirements of the legislation and how conformity assessments and post-market activities may be overseen. The pilot is expected to begin in October 2022, with results published by the end of 2023; other member states may then join or develop their own sandboxes. Sandboxes can benefit both businesses and regulators, but they also have limitations, such as the risk of abuse and the potential to delay innovation.