June 2024

The AI Pact: Establishing a Common Understanding Around the EU AI Act

The EU AI Act has been approved and will be phased in gradually. The European Commission has launched the AI Pact to encourage industry players to comply with the forthcoming AI Act ahead of schedule. The Pact offers a framework for collaboration, early adoption of the Act's requirements, and responsible AI practices. Participants will play a central role by committing to declarations of engagement and sharing their policies and best practices. The Pact will operate during the transition period before the EU AI Act becomes fully applicable and may continue afterward. Compliance with the Act is necessary to avoid penalties and reputational damage. Holistic AI can help organizations comply with the EU AI Act safely and confidently.

May 2024

Navigating the Governance Architecture of the EU AI Act

The EU AI Act introduces a governance structure to ensure coordinated and effective implementation and enforcement of AI regulations at the national and Union levels. The governance framework includes four entities: the AI Office, AI Board, Advisory Forum, and Scientific Panel, each with distinct roles and responsibilities. The AI Office leads the implementation and enforcement of the Act, while the AI Board advises and assists in its consistent application across the EU. The Advisory Forum provides technical expertise and stakeholder input, and the Scientific Panel supports the Act's implementation with scientific insights and guidance. Experts selected for these entities must possess relevant competencies, independence, and scientific or technical expertise in the field of AI. Compliance with the EU AI Act is crucial, and early adoption of its principles can enable smoother compliance.

Setting The Standards for AI: The EU AI Act’s Scheme for the Standardization of AI Systems

The EU AI Act introduces standardization instruments, such as harmonized standards, to facilitate compliance with the Act's requirements and obligations. Providers of high-risk AI systems and general-purpose AI (GPAI) models enjoy a presumption of compliance if they follow these standards. Following harmonized standards is not mandatory, however: providers who forgo them must demonstrate compliance through other means, which entails additional workload and exposes them to penalties if they fall short of the Act's requirements. Harmonized standards are expected to cover the requirements for high-risk AI systems and the obligations of providers of GPAI models, including GPAI models with systemic risk. Compliance with these standards can also allow certain high-risk AI systems to bypass third-party conformity assessments, although providers must still ensure compliance with any requirements and obligations outside the standards' scope. The EU AI Act does not become fully applicable until mid-2026, but market operators should prepare in advance for the evolving regulatory framework around AI.

April 2024

Individuals at the Heart of the EU AI Act: Decoding the Fundamental Rights Impact Assessment

The EU AI Act requires deployers of certain high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) to identify potential threats to individuals' fundamental rights, which the Act defines by reference to the Charter of Fundamental Rights of the European Union, and to implement adequate measures against those threats. A FRIA must be conducted before a high-risk AI system is deployed and must include essential details such as the risks of harm to the individuals or groups likely to be affected by the system's use. It must also be revised whenever, in the deployer's assessment, pertinent factors such as the risks of harm or the frequency of use change.

Navigating the Nexus: The EU's Cybersecurity Framework and AI Act in Concert

The increasing intertwining of artificial intelligence (AI) systems with digital networks has brought a corresponding rise in cyber threats against those systems. With cyberattacks projected to cost around EUR 9 trillion in 2024, the European Union's Artificial Intelligence Act (EU AI Act) aims to fortify AI systems and models with solid cybersecurity measures. The Act imposes mandatory cybersecurity requirements on high-risk AI systems and on general-purpose AI (GPAI) models with systemic risk. For high-risk AI systems, these provisions require resilience against unauthorized attempts by third parties to alter their use, outputs, or performance by exploiting vulnerabilities in the system. Certification of high-risk AI systems under the Cybersecurity Act's voluntary certification schemes may provide a presumption of conformity with the EU AI Act's cybersecurity requirements, reducing duplicative compliance costs. The EU AI Act also interacts with other cybersecurity legislation, such as the Cyber Resilience Act and the Cybersecurity Act, reinforcing the EU's broader cybersecurity framework. GPAI models with systemic risk are considered capable of triggering additional risks compared with standard GPAI models, and cybersecurity vulnerabilities in these models may amplify those risks or increase the likelihood of harmful consequences. Providers of GPAI models with systemic risk are therefore obligated to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.