July 2023

What is the EU AI Act?

The EU AI Act is proposed legislation aimed at creating a global standard for protecting users of AI systems from preventable harm. The Act takes a risk-based approach, establishing obligations for AI systems in proportion to the level of risk they pose. High-risk systems are subject to the most stringent requirements, including continuous risk management, data governance practices, technical documentation, and transparency provisions. The Act also prohibits certain practices deemed to pose an unacceptable risk, such as the use of subliminal techniques or exploitative practices. Non-compliance with the regulation can result in steep penalties of up to €40 million or 7% of global turnover, whichever is higher. The Act will have far-reaching implications, affecting entities that interact with the EU market even if they are based outside the EU. Its enforcement date depends on the remaining stages of the EU legislative process.

The UK’s AI Regulation: From Guidance to Strategies

Despite emerging regulation of artificial intelligence (AI) around the world, the UK government has yet to propose any AI-specific legislation. Instead, individual departments have published a series of guidance papers and strategies to provide a framework for those using and developing AI in the UK. These include an AI auditing framework, guidance on explaining decisions made with AI, guidance on AI and data protection, a National Data Strategy, and a National AI Strategy. The latest developments include a pro-innovation approach to regulating AI, the publication of an AI action plan, and the launch of the Centre for Data Ethics and Innovation’s portfolio of AI assurance techniques. The government’s proposals aim to cement the UK’s position as an AI superpower over the next 10 years by investing in infrastructure and education and adopting a dynamic, adaptable approach to regulation.

June 2023

How to Prepare for the EU AI Act

The EU AI Act is a landmark piece of legislation that will comprehensively regulate AI in the European Union. The Act takes a risk-based approach, grading AI systems according to four levels of risk. Businesses have around two and a half years to prepare before the Act is enforced in 2026. The Act applies to providers and deployers of AI systems, including those located in third countries, and all covered entities must prepare. To do so, organisations should create an inventory of their AI systems, develop governance procedures and guidelines, educate their employees, invest in expertise and talent acquisition, and invest in the necessary technologies and infrastructure. Holistic AI offers a comprehensive solution for preparing for the EU AI Act.

Regulating AI in the EU: What Businesses Need to Know About the AI Act

The European Parliament has voted to move forward with the EU AI Act, which seeks to lead the world in AI regulation and create an ‘ecosystem of trust’ that manages AI risk and prioritises human rights. The Act will have implications for providers, deployers, and distributors of AI systems used in the EU. It takes a risk-based approach to regulation, with obligations proportional to the risk posed by a system across four risk categories. The Act seeks to set the global standard for AI regulation, affecting entities around the world that operate in the EU or interact with the EU market. Businesses must use the preparatory period to build up their readiness: establish robust governance structures, build internal competencies, and implement the requisite technologies. Holistic AI can assist organisations in achieving compliance with the EU AI Act through its comprehensive suite of solutions.

EU AI Act Text Passed by Majority Vote ahead of Trilogues

The European Parliament has passed the latest version of the EU AI Act, which will now proceed to the final Trilogue stage. The Act is a landmark piece of legislation proposed by the European Commission to regulate AI systems placed on the EU market. It takes a risk-based approach to regulation, classifying systems as posing minimal, limited, high, or unacceptable levels of risk. The latest version aligns more closely with the OECD definition of AI and covers eight high-risk categories, including biometrics and biometric-based systems, management of critical infrastructure, and AI systems intended to be used to influence elections. The Act also prohibits real-time remote biometric identification and places particular emphasis on protecting EU citizens’ rights and on education.