May 2024

Setting the Standards for AI: The EU AI Act’s Scheme for the Standardization of AI Systems

The EU AI Act relies on standardization instruments, chiefly harmonized standards, to facilitate compliance with its requirements and obligations. Providers of high-risk AI systems and of general-purpose AI (GPAI) models that follow these standards enjoy a presumption of conformity with the corresponding requirements. Standardization is voluntary, however: providers that do not follow the standards must demonstrate compliance by other means, which entails additional workload and, where they fall short, exposure to penalties for non-compliance. Harmonized standards are expected to cover the requirements for high-risk AI systems as well as the obligations of providers of GPAI models, including GPAI models with systemic risk. Compliance with harmonized standards can also allow providers of certain high-risk AI systems to avoid third-party conformity assessment, although providers must still ensure compliance with any requirements and obligations falling outside the standards' scope. The EU AI Act will not become fully applicable until mid-2026, but market operators should prepare in advance for the evolving regulatory framework around AI.

April 2024

Individuals at the Heart of the EU AI Act: Decoding the Fundamental Rights Impact Assessment

The EU AI Act requires certain deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (“FRIA”), a new type of impact assessment designed to further strengthen the protection of fundamental rights, which the Act understands by reference to the Charter of Fundamental Rights of the European Union. A FRIA must be carried out before the high-risk AI system is deployed and must identify the potential threats the system poses to individuals’ fundamental rights, including the risks of harm to the individuals or groups likely to be affected by its use, together with the measures adopted in response to those risks. A FRIA must be revised whenever, in the deployer's assessment, pertinent factors such as the risks of harm or the frequency of use change.

Navigating the Nexus: The EU's Cybersecurity Framework and AI Act in Concert

As artificial intelligence (AI) systems become ever more intertwined with digital networks, cyber threats against them are on the rise, with the global cost of cyberattacks projected to reach around EUR 9 trillion in 2024. Against this backdrop, the European Union's forthcoming Artificial Intelligence Act (EU AI Act) seeks to fortify AI systems and models with robust cybersecurity measures, imposing mandatory cybersecurity requirements on high-risk AI systems and on general-purpose AI (GPAI) models with systemic risk. High-risk AI systems must demonstrate resilience against attempts by unauthorized third parties to manipulate their usage, outputs, or performance by exploiting vulnerabilities in the system; certification under the Cybersecurity Act's voluntary certification schemes can provide a presumption of conformity with these requirements, reducing duplicative compliance costs. GPAI models with systemic risk are considered capable of triggering risks beyond those of basic GPAI models, and cybersecurity vulnerabilities in such models may amplify those risks or increase the likelihood of harmful consequences; their providers are therefore obligated to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. The EU AI Act also dovetails with related legislation such as the Cyber Resilience Act and the Cybersecurity Act, reinforcing the EU's broader cybersecurity framework.

AI Red Flags: Navigating Prohibited Practices under the AI Act

The EU's Artificial Intelligence Act (AI Act) takes a risk-based approach to classifying AI systems, and at the top of that hierarchy sit practices deemed so harmful to human dignity, freedom, equality, and privacy that they are banned outright. The Act prohibits eight key AI practices, including subliminal, manipulative, or deceptive AI techniques; exploitative systems that materially distort a person's behavior; social-scoring systems that classify or score people based on their behavior or personality characteristics; predictive policing based solely on AI profiling; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions); and AI technologies aimed at inferring individuals' emotional states in workplaces and educational settings. Non-compliance with these prohibitions carries hefty administrative fines of up to €35,000,000 or up to 7% of the offender's total worldwide annual turnover, whichever is higher; for any offender whose turnover exceeds €500 million, the percentage-based ceiling is therefore the operative one. The rules on prohibited practices will be the first to apply, six months after the Act enters into force.
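For readers who want to see how the "whichever is higher" rule plays out, here is a minimal Python sketch of the fine ceiling; the function name and example turnover figures are illustrative assumptions, and only the €35,000,000 floor and the 7% rate come from the Act as summarized above.

```python
# Illustrative sketch only: the fine ceiling for prohibited-practice violations
# is the HIGHER of a fixed amount and a share of worldwide annual turnover.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07  # 7% of total worldwide annual turnover

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a prohibited-practice breach."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# Below EUR 500 million in turnover, the fixed ceiling governs; above it, the 7% cap does.
print(max_fine_eur(400_000_000))    # 35000000 (fixed ceiling applies)
print(max_fine_eur(2_000_000_000))  # 140000000.0 (7% of turnover applies)
```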

March 2024

The EU AI Act and General-Purpose AI Systems: What You Need to Know

The EU AI Act imposes distinct and stringent obligations on providers of general-purpose AI (GPAI) models because of their adaptability and potential systemic risks. GPAI models are defined by their significant generality and their ability to competently perform a wide range of distinct tasks, rather than being tuned to a specific domain. GPAI models with high-impact capabilities are designated as GPAI models with systemic risk (GPAISR) and are subject to additional risk-management and cybersecurity obligations. The Act provides exemptions for models released under free and open licenses, while GPAISR providers can rely on codes of practice to demonstrate compliance until harmonized EU standards are established. The rules on GPAI models are expected to become applicable 12 months after the Act's entry into force.