July 2024
Europe is leading the way in regulating digital platforms with its trio of laws: the Digital Markets Act (DMA), the Digital Services Act (DSA), and the EU AI Act. The DMA and DSA are already in effect: the DMA aims to keep digital markets contestable and fair and to curb monopolistic gatekeeper behavior, while the DSA targets illegal content and platform accountability. The EU AI Act, meanwhile, imposes stringent obligations on high-risk AI systems. Gatekeepers that fail to comply with the DMA's rules risk hefty fines of up to 10% of their total worldwide annual turnover, or up to 20% for repeated infringements. The European Commission has already initiated compliance investigations into the designated gatekeepers Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft, and has opened non-compliance investigations into Alphabet, Apple, and Meta over concerns that they are breaching the DMA's rules. Apple is reportedly delaying the release of Apple Intelligence in the EU over DMA compliance concerns.
The EU AI Act is not the first initiative to regulate AI. In 2019, the Council of Europe (CoE) established the Ad Hoc Committee on Artificial Intelligence (CAHAI), succeeded in 2022 by the Committee on Artificial Intelligence (CAI). The CAI prepared the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, which the CoE's Committee of Ministers formally adopted in May 2024 as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (FCAI). The FCAI establishes a global legal framework for AI governance, outlining general principles and rules for regulating AI activities and addressing the risks AI systems pose to human rights, democracy, and the rule of law. It introduces seven fundamental principles that must be observed throughout the lifecycle of AI systems and requires Contracting States to establish measures to identify, assess, prevent, and mitigate AI-related risks. The FCAI will enter into force once certain procedural steps are completed; it opens for signature on September 5, 2024.
The EU AI Act, which categorizes AI systems into risk levels and attaches specific requirements and obligations to each, has been published in the Official Journal of the EU. Its implementation will be phased, with the provisions on prohibited practices taking effect six months after the Act's entry into force. Early compliance preparation is therefore essential, and Holistic AI can assist organizations in preparing for the AI Act.
Transparency is a key principle in frameworks designed to ensure the safe and reliable development of AI. The EU AI Act takes a multi-pronged approach, tailoring transparency requirements to different types of AI systems: high-risk AI systems face stringent transparency obligations, and additional requirements apply to systems with certain functions, such as those that interact directly with individuals. The Act also imposes severe monetary penalties for non-compliance, so companies should prepare early for its requirements.
The EU AI Act introduces a risk-based regulatory framework for AI governance and mandates conformity assessments for high-risk AI systems. Providers may generally choose between internal and external assessment, but external assessment is mandatory under certain conditions. Conformity assessments must be accompanied by related obligations, including issuance of a certificate, a declaration of conformity, CE marking, and registration in the EU database. If a high-risk AI system becomes non-compliant after being placed on the market, corrective action must be taken. The Commission may also adopt delegated acts amending the conformity assessment procedures. Holistic AI can help enterprises adapt to and comply with AI regulation.