May 2023

European Parliamentary Committees Adopt AI Act Text and Set Date for Plenary Adoption

The EU AI Act is a piece of legislation proposed by the European Commission to regulate AI systems available on the EU market. The Act takes a risk-based approach, classifying systems as posing minimal, limited, high, or unacceptable risk. The December 2022 text defined AI as a machine-based system designed to operate with autonomy that can generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. The adopted text specifies eight broad high-risk applications of AI, with important updates to biometric and biometrics-based systems. A new Fundamental Rights Impact Assessment obligation has been introduced for users of high-risk AI systems, and the list of prohibited practices now includes AI systems used for biometric categorization, predictive policing, and the scraping of facial images to build databases. The EU AI Act will have important implications for the fairness and safety of AI systems available on the EU market, with deployers and users of AI systems facing a number of obligations.

Draft for Conducting Independent Audits under the Digital Services Act Released for Public Comment

The European Commission released a draft delegated regulation for conducting audits under the Digital Services Act (DSA) on May 6, 2023. It pertains to the 17 designated Very Large Online Platforms (VLOPs, which include Facebook, LinkedIn, and Twitter) and 2 Very Large Online Search Engines (VLOSEs: Bing and Google Search). The purpose of the delegated regulation is to promote transparency and public accountability for large platforms through annual independent audits, which will cover algorithmic systems and include disclosures and risk assessments. The draft clarifies the relationship between Audited Providers and Auditing Organisations, and lays down provisions for selecting auditors, data sharing, and cooperation. Auditing Organisations will submit Final Reports, including Risk Analyses and Audit Conclusions, and audits must be completed within a year from the date the obligations begin to apply to the Audited Provider. The draft is open for public comments until June 2, 2023.

March 2023

The UK Government Publishes a Pro-Innovation Approach to AI Regulation

The UK Government has published a White Paper outlining a regulatory framework for AI based on five key principles: safety, transparency, fairness, accountability, and contestability. The approach seeks to promote responsible innovation while maintaining public trust. The White Paper also establishes a multi-regulator sandbox and recommends practical guidance to help businesses put these principles into practice.

AI Regulation Around the World: Spain

Spain is actively regulating AI through various initiatives, including launching the first regulatory sandbox for the EU AI Act to create a controlled environment for experimenting with the Act's obligations, publishing a National AI Strategy, establishing Europe's first AI supervisory agency, and passing the Rider Law to give delivery riders employment rights. The Spanish government is investing in these regulatory efforts and has set specific objectives to reduce social inequality and promote innovation while protecting individual and collective rights. These regulations aim to increase transparency and accountability for algorithmic systems and ensure compliance with upcoming AI legislation.

February 2023

AI Regulation Around the World: The Netherlands

The Dutch government is increasing oversight of AI systems following a scandal involving a biased algorithm used by its tax office. The government is committed to a new statutory regime that ensures AI systems are checked for transparency and discrimination, and the data protection regulator will receive extra funding for algorithm oversight. The Dutch government wants more transparency about AI systems deployed in the public sector and is proposing a legal requirement to use an assessment framework, a register of high-risk AI systems, and specific measures for human oversight and non-discrimination. The proposals currently apply only to the public sector, but they will likely affect businesses supplying AI systems to the public sector and create greater public awareness of the use of AI systems.