April 2024

Navigating the Nexus: The EU's Cybersecurity Framework and AI Act in Concert

The increasing integration of artificial intelligence (AI) systems into digital networks has been accompanied by a rise in cyber threats against those systems. With cyberattacks projected to cost around EUR 9 trillion in 2024, the European Union's forthcoming Artificial Intelligence Act (EU AI Act) aims to fortify AI systems and models with robust cybersecurity measures. The Act imposes mandatory cybersecurity requirements on high-risk AI systems and on general-purpose AI (GPAI) models with systemic risk.

Certification of high-risk AI systems under the Cybersecurity Act's voluntary certification schemes may establish a presumption of conformity with the cybersecurity requirements of the EU AI Act, reducing duplicative compliance costs. The EU AI Act also interacts with other cybersecurity legislation, such as the Cyber Resilience Act and the Cybersecurity Act, reinforcing the EU's broader cybersecurity framework.

The Act's cybersecurity provisions require high-risk AI systems to be resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities. GPAI models with systemic risk are considered capable of posing additional risks compared with basic GPAI models, and cybersecurity vulnerabilities in such models may amplify those risks or increase the likelihood of harmful consequences. Providers of GPAI models with systemic risk are therefore obligated to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.

AI Red Flags: Navigating Prohibited Practices under the AI Act

The EU's Artificial Intelligence Act (AI Act) introduces a framework for categorizing AI systems by risk, from minimal risk through high risk to outright prohibition. The AI Act bans AI practices that violate human dignity, freedom, equality, and privacy. Eight AI practices are prohibited: subliminal, manipulative, or deceptive AI techniques; exploitative systems that materially distort behavior; social scoring of people based on behavior or personality characteristics; predictive policing based solely on AI profiling; untargeted scraping of facial images to build facial recognition databases; biometric categorization inferring sensitive characteristics; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes; and AI technologies aimed at inferring individuals' emotional states in workplaces and educational settings. Non-compliance with these prohibitions can result in significant administrative fines of up to €35,000,000 or up to 7% of an offender's global annual turnover, whichever is higher. The rules on prohibited practices will be the first to start applying, six months after the Act enters into force.
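To illustrate how the penalty ceiling scales with company size, the fine cap for prohibited practices is the higher of the fixed amount and the turnover percentage. A minimal sketch in Python (the function name is ours, not the Act's; this is an illustration of the ceiling, not legal advice on any actual fine):

```python
def max_fine_prohibited_practice_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on the administrative fine for a prohibited AI practice:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    FIXED_CAP_EUR = 35_000_000.0
    TURNOVER_RATE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# For an undertaking with EUR 1 billion in turnover, the 7% limb governs:
print(max_fine_prohibited_practice_eur(1_000_000_000))  # 70000000.0
# For a smaller undertaking with EUR 100 million in turnover, the fixed cap governs:
print(max_fine_prohibited_practice_eur(100_000_000))  # 35000000.0
```

For large undertakings the percentage limb quickly dwarfs the fixed amount, which is why the turnover-based cap is the figure most multinationals need to model.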

March 2024

The EU AI Act and General Purpose AI Systems: What you need to know

The EU AI Act imposes distinct and stringent obligations on providers of general-purpose AI (GPAI) models due to their adaptability and potential systemic risks. GPAI models are defined by their broad functionality and ability to perform a wide range of tasks without domain-specific tuning. GPAI models with high-impact capabilities are designated as GPAI models with systemic risk (GPAISR) and are subject to additional risk-management and cybersecurity obligations. The Act provides exemptions for models released under free and open-source licences, while GPAISR providers can rely on codes of practice to demonstrate compliance until harmonized EU standards are established. The rules on GPAI models are expected to become applicable 12 months after the Act enters into force.

EU AI Act approved by the European Parliament

The European Parliament has approved the EU AI Act, but the Act still requires approval from the Council of the European Union. It will then be published in the Official Journal of the EU before entering into force. The application of the Act's provisions will be phased, with some provisions likely to apply before the end of this year. Businesses should start preparing now for the Act's requirements.

How to Identify High-Risk AI Systems According to the EU AI Act

The EU AI Act is the first comprehensive legal framework governing AI use across applications, taking a risk-based approach to different AI systems. It applies to entities based in the EU as well as to organizations elsewhere that deploy AI in interactions with people in the EU. AI systems are classified as prohibited, high-risk, limited-risk, or minimal-risk, with general-purpose AI (GPAI) models subject to a separate assessment and distinct obligations. High-risk AI systems face design-related requirements, while limited-risk systems carry transparency obligations. Non-compliance with the Act carries significant penalties, so it is crucial for organizations to determine their systems' classification and establish a risk management framework in preparation for the Act.
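The classification step described above can be sketched as a simple triage that mirrors the Act's ordering: check for prohibited practices first, then high-risk use cases, then transparency-only cases, with everything else minimal risk. The boolean flags and function below are our illustrative simplifications, not terms from the Act; a real assessment turns on the Act's detailed definitions and annexes:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk (transparency obligations)"
    MINIMAL_RISK = "minimal-risk"

def classify_ai_system(uses_prohibited_practice: bool,
                       is_listed_high_risk_use_case: bool,
                       interacts_with_people_or_generates_content: bool) -> RiskTier:
    """Simplified first-pass triage: the tiers are checked in the order the
    Act prioritizes them, and the first matching tier wins."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED          # e.g. social scoring
    if is_listed_high_risk_use_case:
        return RiskTier.HIGH_RISK           # e.g. recruitment screening
    if interacts_with_people_or_generates_content:
        return RiskTier.LIMITED_RISK        # e.g. a customer-facing chatbot
    return RiskTier.MINIMAL_RISK            # e.g. a spam filter

# A recruitment-screening tool: not prohibited, but a listed high-risk use case.
print(classify_ai_system(False, True, False))  # RiskTier.HIGH_RISK
```

The ordering matters: a system matching an earlier tier is never downgraded by also matching a later one, which is why the checks run from most to least restrictive.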