July 2024
Competition authorities from the UK, US, and EU have published a joint statement outlining the risks to fair competition that can emerge from generative AI and the principles needed to support competition and innovation while protecting consumers. These principles include fair dealing, interoperability, and choice, with a focus on informing consumers about when and how AI is used in products and services. Agencies in the US and UK are becoming increasingly vocal about the need to ensure that AI complies with existing laws and does not harm consumers.
The EU AI Act aims to establish a trustworthy environment for AI within the EU market and employs a risk-based approach, under which obligations are proportionate to the risks posed by the systems concerned. The Act outlines specific considerations for small and medium-sized enterprises (SMEs) and start-ups to support innovation among emerging players, such as free access to regulatory sandboxes and simplified documentation requirements. The Act requires that conformity assessment fees be reduced in proportion to the size and market share of SMEs, and that guidelines and codes of conduct take the interests and needs of SMEs into account. The Act also emphasizes the participation of SMEs in its governance structures and processes to ensure their views and interests are represented. Overall, compliance with the AI Act will be costly, but SMEs will benefit from these initiatives and support measures designed to reduce the financial burden.
Europe is leading the way in regulating digital platforms with its trio of laws: the Digital Markets Act (DMA), the Digital Services Act (DSA), and the EU AI Act. The DMA and DSA are already in effect and aim to ensure competitiveness and fairness in digital markets and to prevent monopolistic behavior, while the EU AI Act imposes stringent obligations on high-risk AI systems. Gatekeepers who fail to comply with the DMA's rules risk hefty fines of up to 10% of their total worldwide annual turnover, rising to 20% for repeated infringements. The European Commission has already initiated compliance investigations into the designated gatekeepers Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft, and has launched noncompliance investigations into Alphabet, Apple, and Meta over concerns that they are breaching the DMA's rules. Apple is reportedly withholding the release of Apple Intelligence in the EU due to concerns over DMA compliance.
The EU AI Act is not the first initiative to regulate AI: in 2019, the Council of Europe (CoE) established the Ad Hoc Committee on Artificial Intelligence, later succeeded by the Committee on Artificial Intelligence (CAI). The CAI presented the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, which was officially adopted by the CoE's Committee of Ministers as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (FCAI). The FCAI establishes a global legal framework for AI governance, outlining general principles and rules for regulating AI activities. It is intended to address the risks that AI systems pose to human rights, democracy, and the rule of law. The FCAI introduces seven fundamental principles that must be observed throughout the lifecycle of AI systems and requires Contracting States to establish measures to identify, assess, prevent, and mitigate the risks associated with AI systems. The FCAI opens for signature on September 5, 2024 and will enter into force once the remaining procedural steps are completed.
The EU AI Act, which categorizes AI systems into various risk levels and outlines corresponding requirements and obligations, has been published in the Official Journal of the EU. Its implementation will be phased, with provisions concerning prohibited practices taking effect six months after the Act's entry into force. Organizations should begin preparing for compliance now, and Holistic AI offers assistance in preparing for the AI Act.