How to Identify High-Risk AI Systems According to the EU AI Act

The EU AI Act is the world’s first comprehensive legal framework governing AI across use cases. First proposed in April 2021, the Act went through a lengthy consultation process in which member states and Union institutions proposed extensive amendments, culminating in a political agreement in December 2023. The text based on this agreement is now in the final stages of the EU law-making procedure: it was approved by the European Parliament committees on 13 February and will be voted on by the Parliament plenary in March. The text preserves the risk-based approach for AI systems, under which requirements are proportionate to the level of risk a system poses, while introducing a separate risk-based classification for general-purpose AI (“GPAI”) models.

This guide serves as a starting point for organizations seeking to determine the level of regulatory risk their AI systems pose under the EU AI Act.
