June 2023

How to Prepare for the EU AI Act

The EU AI Act is a landmark piece of legislation that will comprehensively regulate AI in the European Union. The Act takes a risk-based approach, grading AI systems according to four levels of risk. Businesses have around two and a half years to prepare before the Act is enforced in 2026. The Act covers providers of AI systems, deployers, and entities located in third countries, all of which must prepare. To do so, organisations need to create an inventory of their AI systems, develop governance procedures and guidelines, educate their employees, invest in expertise and talent acquisition, and put the necessary technologies and infrastructure in place. Holistic AI offers a comprehensive solution for preparing for the EU AI Act.

Regulating AI in the EU: What Businesses Need to Know About the AI Act

The European Parliament has voted to move forward with the EU AI Act, through which the EU seeks to lead the world in AI regulation and create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights. The Act will have implications for providers, deployers, and distributors of AI systems used in the EU. It takes a risk-based approach to regulation, with obligations proportional to the risk posed by a system across four risk categories. The Act aims to set the global standard for AI regulation, affecting entities around the world that operate in the EU or interact with the EU market. Businesses should use the preparatory period to build up their readiness: establish robust governance structures, build internal competencies, and implement the requisite technologies. Holistic AI can assist organisations in achieving compliance with the EU AI Act through its comprehensive suite of solutions.

EU AI Act Text Passed by Majority Vote ahead of Trilogues

The European Parliament has passed the latest version of the EU AI Act, which now proceeds to the final Trilogue stage. The Act is a landmark piece of legislation proposed by the European Commission to regulate AI systems available in the EU market. It takes a risk-based approach to regulation, with systems classified as posing minimal, limited, high, or unacceptable risk. The latest version aligns more closely with the OECD definition of AI and covers eight high-risk categories, including biometric and biometrics-based systems, management of critical infrastructure, and AI systems intended to be used to influence elections. The Act also prohibits real-time remote biometric identification and places a focus on protecting EU citizens' rights and education.

May 2023

European Parliamentary Committees Adopt AI Act Text and Set Date for Plenary Adoption

The EU AI Act is a piece of legislation proposed by the European Commission to regulate the AI systems available in the EU market. The Act takes a risk-based approach to regulation, classifying systems as posing minimal, limited, high, or unacceptable risk. The December 2022 text defined AI as a machine-based system designed to operate with autonomy that can generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. The adopted text specifies eight broad high-risk applications of AI, with important updates to biometric and biometrics-based systems. A new Fundamental Rights Impact Assessment obligation has been introduced for users of high-risk AI systems, and prohibited practices now include AI models used for biometric categorisation, predictive policing, and the collection of facial images for database construction. The EU AI Act will have important implications for the fairness and safety of AI systems available in the EU market, with deployers and users of AI systems facing a number of obligations.

Draft for Conducting Independent Audits under the Digital Services Act Released for Public Comment

On May 6, 2023, the European Commission released a draft delegated regulation for conducting independent audits under the Digital Services Act (DSA), which applies to the 17 designated Very Large Online Platforms (VLOPs), such as Facebook, LinkedIn, and Twitter, and the 2 Very Large Online Search Engines (VLOSEs), Bing and Google. The purpose of the delegated regulation is to promote transparency and public accountability for large platforms, with provisions for annual independent audits. Audits will cover algorithmic systems and will include disclosures and risk assessments. The draft clarifies the relationship between Audited Providers and Auditing Organisations, and lays down provisions for selecting auditors, data sharing, and cooperation. Auditing Organisations will submit Final Reports, including Risk Analyses and Audit Conclusions, which must be completed within a year from the date the obligations apply to the Audited Provider. The draft is open for public comment until June 2, 2023. The article promotes Holistic AI's interdisciplinary approach to AI governance, risk, and compliance.